Performing a Manual MongoDB Restore Operation for a Sharded Cluster

After a restore-only operation completes, you can manually start and recover the MongoDB servers and initiate the sharded cluster on the destination nodes.

On the restored clients, you must first bring up the configuration server, followed by the shards.

If you restore files only to the additional secondary members from the Command Center, perform steps 1 through 7 on the secondary members before you add them to the replica set or a shard.

Procedure

  1. Copy the config file from the source node, or use the config file that is restored in the destination data directory with the name cv_mongod_bkp.conf. Edit the config file to comment out the authentication security, sharding, and replication parameters (see the example configuration at the end of this step).

    If the source shard has encryption enabled, copy the encryption key files to the restored client.

    If mongod is configured to run as a system service, start it by using the system service manager. For example, complete the following steps to start mongod by using the systemctl service manager:

    1. Identify the config file path used by the mongod.service.

      systemctl cat mongod.service

      Make sure to place the updated config file in that path.

    2. Start the mongod.service using systemctl.

      systemctl start mongod.service
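
    For example, an edited config file for a shard member might look like the following sketch; the storage path, port, replica set name, and key file path are assumptions and must match your source configuration.

    # Sketch only: paths, port, replica set name, and key file are examples.
    storage:
      dbPath: /data/mongodb
    net:
      port: 27018
      bindIp: 0.0.0.0
    # Commented out until the node is recovered (steps 2 through 7):
    #security:
    #  authorization: enabled
    #  keyFile: /etc/mongodb/keyfile
    #replication:
    #  replSetName: shard01
    #sharding:
    #  clusterRole: shardsvr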
  2. Start the standalone mongod by using the config file from step 1.

    mongod --config <path-to-config-file>
  3. Drop the local database in the mongod server. Connect to the mongod from the mongo shell, and run the following commands:

    use local
    db.dropDatabase()
  4. Optional: For the config server mongod, update the shard metadata. If the shard name or hostname changed, update the shards collection in the config database. Connect to the mongod from the mongo shell, and run the following commands (a filled-in example follows):

    use config
    db.shards.updateOne(
      { "_id" : "<shard name>" },
      { $set : { "host" : "<new shard replica set name>/<new node1 name>:<port>,<new node2 name>:<port>,<new node3 name>:<port>" } }
    )
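
    For example, assuming a shard whose document has the _id shard01 and whose restored members are the hypothetical hosts node1.example.com, node2.example.com, and node3.example.com on port 27018, you can inspect the current document and then apply the update as follows:

    // shard01 and the example.com hostnames are hypothetical.
    use config
    db.shards.find().pretty()
    db.shards.updateOne(
      { "_id" : "shard01" },
      { $set : { "host" : "shard01/node1.example.com:27018,node2.example.com:27018,node3.example.com:27018" } }
    )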
  5. Optional: For the shard mongod, update the config server connection details. If the config server hostname or replica set name changed, update the config server metadata in the admin database. Connect to the mongod from the mongo shell, and run the following commands (a filled-in example follows):

    use admin
    db.system.version.updateOne(
      { "_id" : "shardIdentity" },
      { $set :
        { "configsvrConnectionString" : "<new config server replica set name>/<new config server1 hostname>:<port>,<new config server2 hostname>:<port>,<new config server3 hostname>:<port>" }
      }
    )
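
    For example, assuming a config server replica set named configReplSet on the hypothetical hosts cfg1.example.com, cfg2.example.com, and cfg3.example.com, you can inspect the shardIdentity document and then apply the update as follows:

    // configReplSet and the cfg*.example.com hostnames are hypothetical.
    use admin
    db.system.version.find( { "_id" : "shardIdentity" } ).pretty()
    db.system.version.updateOne(
      { "_id" : "shardIdentity" },
      { $set : { "configsvrConnectionString" : "configReplSet/cfg1.example.com:27019,cfg2.example.com:27019,cfg3.example.com:27019" } }
    )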
  6. For the shard mongod, drop the minOpTimeRecovery document. Connect to the mongod from the mongo shell, and run the following commands:

    use admin
    db.system.version.deleteOne( { _id: "minOpTimeRecovery" } )
  7. Restart the mongod by completing the following steps:

    1. Shut down the mongod instance. Connect to the mongod from the mongo shell, and run the following commands:

      use admin 
      db.shutdownServer()
    2. Uncomment the replication and sharding parameters in the config file.

    3. Start mongod with the updated config file.

      mongod --config <path-to-config-file>
  8. Repeat steps 1 through 7 on all the restored clients from the other shards.

  9. Initiate the single-node replica set. Connect to the mongod from the mongo shell, and run the following command:

    rs.initiate()

    After you complete this step on each node, that node becomes the primary, and each shard and the config server are initiated as single-node replica sets.
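
    For example, to confirm that the node now reports the PRIMARY state, run the following from the mongo shell:

    // Print each member's name and replication state.
    rs.status().members.forEach(
      function (m) { print(m.name + " : " + m.stateStr) }
    )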

  10. On the primary node, apply the restored oplog dumps (if any) on the mongod. By default, the restored oplog dumps are present under the <Install Path>/iDataAgent/jobResults/CV_JobResults/2/0/<restorejobid>/local folder. Rename each restored dump oplog_<repsetname>_<timestamp>.bson to oplog.bson, one at a time in timestamp order, and then run the mongorestore command to apply it (a sketch of the sequence is shown after the command).

    mongorestore --port <mongod port#> --oplogReplay <installpath>/iDataAgent/jobResults/CV_JobResults/2/0/<restorejobid>/local
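
    For example, with two hypothetical dumps for a replica set named shard01, the sequence might look like the following; the timestamps, port, and shard name are illustrative only.

    # Timestamps, port, and shard name are illustrative.
    cd <installpath>/iDataAgent/jobResults/CV_JobResults/2/0/<restorejobid>/local
    # Apply the oldest dump first.
    mv oplog_shard01_1700000000.bson oplog.bson
    mongorestore --port 27018 --oplogReplay .
    # Then apply the next dump in timestamp order.
    mv oplog_shard01_1700003600.bson oplog.bson
    mongorestore --port 27018 --oplogReplay .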
  11. On the primary node, add additional secondary members (if any) to the replica set.

    rs.add( { host: "<hostname>:<port>", votes: 0, priority: 0 } )
  12. Optional: Enable authentication. To enable authentication on the restored nodes, add the authentication security settings to the mongod config files, and then restart the servers (see the example security settings that follow).
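
    For example, the security settings that you re-enable might look like the following; the keyFile path is an assumption and must match the key file that the source cluster uses.

    # The keyFile path is an example; use the same key file as the source cluster.
    security:
      authorization: enabled
      keyFile: /etc/mongodb/keyfile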

  13. Start the mongo routing service (mongos) on the required nodes. Copy the config file from the source cluster, or generate a new one with the required parameters (a minimal example follows the command).

    mongos --config <path-to-mongos-config>
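
    For example, a minimal mongos config file might look like the following sketch; the replica set name, hostnames, and ports are assumptions and must match the restored config server replica set.

    # configReplSet and the cfg*.example.com hostnames are examples.
    net:
      port: 27017
    sharding:
      configDB: configReplSet/cfg1.example.com:27019,cfg2.example.com:27019,cfg3.example.com:27019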
  14. On the primary node, to reconfigure any node information, such as priority, votes, and so on, use rs.reconfig().

    For example, to configure the priorities of nodes after a restore operation, connect to the primary node's mongo shell and run the following commands.

    Note

    Wait until the newly added members transition to the SECONDARY state before you run rs.reconfig().

    cfg = rs.conf();
    cfg.members[2].priority = 1;
    cfg.members[3].priority = 1;
    cfg.members[4].priority = 1;
    rs.reconfig(cfg);
