Restore a Sharded Cluster
This procedure restores a sharded cluster from an existing backup snapshot, such as LVM snapshots. The source and target sharded cluster must have the same number of shards. For information on creating LVM snapshots for all components of a sharded cluster, see Back Up a Sharded Cluster with File System Snapshots.
Note
mongodump and mongorestore cannot be part of a backup strategy for 4.2+ sharded clusters that have sharded transactions in progress, as backups created with mongodump do not maintain the atomicity guarantees of transactions across shards.
For 4.2+ sharded clusters with in-progress sharded transactions, use one of the following coordinated backup and restore processes, which do maintain the atomicity guarantees of transactions across shards:
- MongoDB Atlas
- MongoDB Cloud Manager
- MongoDB Ops Manager
Considerations
The AES256-GCM encryption mode requires that every process use a unique counter block value with the key.
For an encrypted storage engine configured with the AES256-GCM cipher:
- Restoring from Hot Backup
- Starting in 4.2, if you restore from files taken via "hot" backup (i.e. the mongod is running), MongoDB can detect "dirty" keys on startup and automatically roll over the database key to avoid IV (Initialization Vector) reuse.
- Restoring from Cold Backup
- However, if you restore from files taken via "cold" backup (i.e. the mongod is not running), MongoDB cannot detect "dirty" keys on startup, and reuse of IV voids confidentiality and integrity guarantees. Starting in 4.2, to avoid the reuse of the keys after restoring from a cold filesystem snapshot, MongoDB adds a new command-line option --eseDatabaseKeyRollover. When started with the --eseDatabaseKeyRollover option, the mongod instance rolls over the database keys configured with the AES256-GCM cipher and exits.
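For example, after restoring from a cold filesystem snapshot, you might roll over the database keys before returning the member to service (a sketch; the configuration file path is a placeholder):
# Roll over the AES256-GCM database keys; the instance exits when the rollover completes.
mongod --config /path/to/mongod.conf --eseDatabaseKeyRollover
After the instance exits, start it normally.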
Tip
- In general, if using filesystem-based backups for MongoDB Enterprise 4.2+, use the "hot" backup feature, if possible.
- For MongoDB Enterprise versions 4.0 and earlier, if you use the AES256-GCM encryption mode, do not make copies of your data files or restore from filesystem snapshots ("hot" or "cold").
A. (Optional) Review Replica Set Configurations
This procedure initiates a new replica set for the Config Server Replica Set (CSRS) and each shard replica set using the default configuration. To use a different replica set configuration for your restored CSRS and shards, you must reconfigure the replica set(s).
If your source cluster is healthy and accessible, connect a mongo shell to the primary replica set member in each replica set and run rs.conf() to view the replica configuration document.
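For example, a brief sketch from a mongo shell connected to each primary:
// Capture the replica set configuration document for later reference.
cfg = rs.conf()
printjson(cfg.members)  // note each member's host, priority, votes, and tags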
If you cannot access one or more components of the source sharded cluster, reference any existing internal documentation to reconstruct the configuration requirements for each shard replica set and the config server replica set.
B. Prepare the Target Host for Restoration
- Storage Space Requirements
- Ensure the target host hardware has sufficient open storage space for the restored data. If the target host contains existing sharded cluster data that you want to keep, ensure that you have enough storage space for both the existing data and the restored data.
- LVM Requirements
- For LVM snapshots, you must have at least one LVM-managed volume group and a logical volume with enough free space for the extracted snapshot data.
- MongoDB Version Requirements
-
Ensure the target host and source host have the same MongoDB Server version. To check the version of MongoDB available on a host machine, run mongod --version from the terminal or shell.
For complete documentation on installation, see Install MongoDB.
- Shut Down Running MongoDB Processes
-
If restoring to an existing cluster, shut down the mongod or mongos process on the target host.
For hosts running mongos, connect a mongo shell to the mongos and run db.shutdownServer() from the admin database:
use admin
db.shutdownServer()
For hosts running a mongod, connect a mongo shell to the mongod and run db.hello():
- If isWritablePrimary is false, the mongod is a secondary member of a replica set. You can shut it down by running db.shutdownServer() from the admin database.
- If isWritablePrimary is true, the mongod is the primary member of a replica set. Shut down the secondary members of the replica set first. Use rs.status() to identify the other members of the replica set. The primary automatically steps down after it detects a majority of members are offline. After it steps down (db.hello() returns isWritablePrimary: false), you can safely shut down the mongod.
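Before shutting down a mongod, you can check its role with a quick sketch like the following:
// Returns true if this member is currently the primary.
db.hello().isWritablePrimary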
- Prepare Data Directory
-
Create a directory on the target host for the restored database files. Ensure that the user that runs the mongod has read, write, and execute permissions for all files and subfolders in that directory:
sudo mkdir /path/to/mongodb
sudo chown -R mongodb:mongodb /path/to/mongodb
sudo chmod -R 770 /path/to/mongodb
Substitute /path/to/mongodb with the path to the data directory you created. On RHEL / CentOS, Amazon Linux, and SUSE, the default username is mongod.
- Prepare Log Directory
-
Create a directory on the target host for the mongod log files. Ensure that the user that runs the mongod has read, write, and execute permissions for all files and subfolders in that directory:
sudo mkdir /path/to/mongodb/logs
sudo chown -R mongodb:mongodb /path/to/mongodb/logs
sudo chmod -R 770 /path/to/mongodb/logs
Substitute /path/to/mongodb/logs with the path to the log directory you created. On RHEL / CentOS, Amazon Linux, and SUSE, the default username is mongod.
- Create Configuration File
-
This procedure assumes starting a mongod with a configuration file.
Create the configuration file in your preferred location. Ensure that the user that runs the mongod has read and write permissions on the configuration file:
sudo touch /path/to/mongodb/mongod.conf
sudo chown mongodb:mongodb /path/to/mongodb/mongod.conf
sudo chmod 644 /path/to/mongodb/mongod.conf
On RHEL / CentOS, Amazon Linux, and SUSE, the default username is mongod.
Open the configuration file in your preferred text editor and modify it as required by your deployment. Alternatively, if you have access to the original configuration file for the mongod, copy it to your preferred location on the target host.
Important
Validate that your configuration file includes the following settings:
- storage.dbPath must be set to the path to your preferred data directory.
- systemLog.path must be set to the path to your preferred log directory.
- net.bindIp must include the IP address of the host machine.
- replication.replSetName has the same value across each member in any given replica set.
- sharding.clusterRole has the same value across each member in any given replica set.
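For reference, a minimal sketch of such a configuration file; all paths, names, the port, and the IP address are placeholders for your deployment:
# Minimal mongod.conf sketch for a restored member.
storage:
   dbPath: /path/to/mongodb
systemLog:
   destination: file
   path: /path/to/mongodb/logs/mongod.log
   logAppend: true
net:
   bindIp: localhost,203.0.113.10
   port: 27018
# Uncomment and set these when instructed in steps C and D:
# replication:
#    replSetName: myNewShardName
# sharding:
#    clusterRole: shardsvr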
C. Restore Config Server Replica Set
Restore the CSRS primary mongod data files.
Follow the restore procedure that corresponds to your preferred backup method.
Drop the local database.
Use db.dropDatabase() to drop the local database:
use local
db.dropDatabase()
Insert the filtered file list into the local database.
This step is only required if you are restoring from a namespace-filtered snapshot.
For each shard, locate the filtered file list with the following name format: <shardRsID>-filteredFileList.txt. This file contains a list of JSON objects with the following format:
{
"filename":"file1",
"ns":"sampleDb1.sampleCollection1",
"uuid": "3b241101-e2bb-4255-8caf-4136c566a962"
}
Add each JSON object from each shard file to a new db.systems.collections_to_restore collection in your local database. You can ignore entries with empty ns or uuid fields. When inserting entries, the uuid field must be inserted as type UUID().
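For example, a sketch that inserts one entry from a shard's filtered file list, casting the uuid string to the UUID type:
use local
db.systems.collections_to_restore.insertOne( {
   "filename" : "file1",
   "ns" : "sampleDb1.sampleCollection1",
   "uuid" : UUID("3b241101-e2bb-4255-8caf-4136c566a962")   // must be inserted as type UUID(), not a string
} )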
For any planned or completed shard hostname or replica set name changes, update the metadata in config.shards.
You can skip this step if all of the following are true:
-
No shard member host machine hostname has or will change during this procedure.
-
No shard replica set name has or will change during this procedure.
Issue the following find() method on the shards collection in the Config Database. Replace <shardName> with the name of the shard. By default, the shard name is its replica set name. If you added the shard using the addShard command and specified a custom name, you must specify that name for <shardName>.
use config
db.shards.find( { "_id" : "<shardName>" } )
This operation returns a document that resembles the following:
{
"_id" : "shard1",
"host" : "myShardName/alpha.example.net:27018,beta.example.net:27018,charlie.example.net:27018",
"state" : 1
}
Important
The _id value must match the shardName value in the _id : "shardIdentity" document on the corresponding shard. When restoring the shards later in this procedure, validate that the _id field in shards matches the shardName value on the shard.
Use the updateOne() method to update the host string to reflect the planned replica set name and hostname list for the shard. For example, the following operation updates the host connection string for the shard with "_id" : "shard1":
db.shards.updateOne( { "_id" : "shard1" }, { $set : { "host" : "myNewShardName/repl1.example.net:27018,repl2.example.net:27018,repl3.example.net:27018" } } )
Repeat this process until all shard metadata accurately reflects the planned replica set name and hostname list for each shard in the cluster.
Note
If you do not know the shard name, issue the find() method on the shards collection with an empty filter document {}:
use config
db.shards.find({})
Each document in the result set represents one shard in the cluster. For each document, check the host field for a connection string that matches the shard in question, i.e. a matching replica set name and member hostname list. Use the _id of that document in place of <shardName>.
Restart the mongod as a new single-node replica set.
Shut down the mongod. Uncomment or add the following configuration file options:
replication:
   replSetName: myNewCSRSName
sharding:
   clusterRole: configsvr
If you want to change the replica set name, you must update the replSetName field with the new name before proceeding.
Start the mongod with the updated configuration file:
mongod --config /path/to/mongodb/mongod.conf
If you have mongod configured to run as a system service, start it using the recommended process for your system service manager.
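For example, on a systemd-based Linux system the following starts the service (assuming the default mongod unit name):
sudo systemctl start mongod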
After the mongod starts, connect to it using the mongo shell.
Initiate the new replica set.
Initiate the replica set using rs.initiate() with the default settings.
rs.initiate()
Once the operation completes, use rs.status() to check that the member has become the primary.
Add additional replica set members.
For each replica set member in the CSRS, start the mongod on its host machine. Once you have started up all remaining members of the cluster successfully, connect a mongo shell to the primary replica set member. From the primary, use the rs.add() method to add each member of the replica set. Include the replica set name as the prefix, followed by the hostname and port of the member's mongod process:
rs.add("config2.example.net:27019") rs.add("config3.example.net:27019")
If you want to add a member with specific replica member configuration settings, you can pass a document to rs.add() that defines the member hostname and any member settings your deployment requires:
rs.add( { "host" : "config2.example.net:27019", priority: <int>, votes: <int>, tags: <document> } )
Each new member performs an initial sync to catch up to the primary. Depending on factors such as the amount of data to sync, your network topology and health, and the power of each host machine, initial sync may take an extended period of time to complete.
The replica set may elect a new primary while you add additional members. Use rs.status() to identify which member is the current primary. You can only run rs.add() from the primary.
Configure any additional required replication settings.
The rs.reconfig() method updates the replica set configuration based on a configuration document passed in as a parameter. You must run rs.reconfig() against the primary member of the replica set.
Reference the original configuration file output of the replica set as identified in step A. Review Replica Set Configurations and apply settings as needed.
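For example, a sketch that reapplies a member priority recorded in step A (the member index and priority value are placeholders):
// On the primary: fetch the current configuration, adjust it, and reapply it.
cfg = rs.conf()
cfg.members[1].priority = 2
rs.reconfig(cfg)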
D. Restore Each Shard Replica Set
Restore the shard primary mongod data files.
Follow the restore procedure that corresponds to your preferred backup method.
Create a temporary user with the __system role.
During this procedure you will modify documents in the admin.system.version collection. For clusters enforcing authentication, only the __system role grants permission to modify this collection. You can skip this step if the cluster does not enforce authentication.
Warning
The __system role entitles its holder to take any action against any object in the database. This procedure includes instructions for removing the user created in this step. Do not keep this user active beyond the scope of this procedure.
Consider creating this user with the clientSource authentication restriction configured such that only the specified hosts can authenticate as the privileged user.
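For example, a sketch that limits where the temporary user can authenticate from (the IP address is a placeholder):
db.createUser( {
   user: "mySystemUser",
   pwd: "<replaceMeWithAStrongPassword>",
   roles: [ "__system" ],
   // Assumption: only the administrative host at this address may authenticate as this user.
   authenticationRestrictions: [ { clientSource: [ "198.51.100.25" ] } ]
} )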
-
Authenticate as a user with the userAdmin role on the admin database or the userAdminAnyDatabase role:
use admin
db.auth("myUserAdmin", "mySecurePassword")
-
Create a user with the __system role:
db.createUser(
   {
      user: "mySystemUser",
      pwd: "<replaceMeWithAStrongPassword>",
      roles: [ "__system" ]
   }
)
Passwords should be random, long, and complex to ensure system security and to prevent or delay malicious access.
-
Authenticate as the privileged user:
db.auth("mySystemUser","<replaceMeWithAStrongPassword>")
Drop the local database.
Use db.dropDatabase() to drop the local database:
use local
db.dropDatabase()
Remove the minOpTimeRecovery document from the admin.system.version collection.
To update the sharding internals, issue the following deleteOne() method on the system.version collection in the admin database:
use admin
db.system.version.deleteOne( { _id: "minOpTimeRecovery" } )
Note
The system.version collection is an internal, system collection. You should only modify it when given specific instructions like these.
Optional: For any CSRS hostname or replica set name changes, update shard metadata in each shard's identity document.
You can skip this step if all of the following are true:
-
The hostnames for any CSRS host did not change during this procedure.
-
The CSRS replica set name did not change during this procedure.
The system.version collection on the admin database contains metadata related to the shard, including the CSRS connection string. If either the CSRS name or any member hostnames changed while restoring the CSRS, you must update this metadata.
Issue the following find() method on the system.version collection in the admin database:
use admin
db.system.version.find( { "_id" : "shardIdentity" } )
The find() method returns a document that resembles the following:
{
"_id" : "shardIdentity",
"clusterId" : ObjectId("2bba123c6eeedcd192b19024"),
"shardName" : "shard1",
"configsvrConnectionString" : "myCSRSName/alpha.example.net:27019,beta.example.net:27019,charlie.example.net:27019" }
The following updateOne() method updates the document such that the host string represents the most current CSRS connection string:
db.system.version.updateOne( { "_id" : "shardIdentity" }, { $set : { "configsvrConnectionString" : "myNewCSRSName/config1.example.net:27019,config2.example.net:27019,config3.example.net:27019"} } )
Important
The shardName value must match the _id value in the shards collection on the CSRS. Validate that the metadata on the CSRS matches the metadata for the shard. Refer to substep 3 in the C. Restore Config Server Replica Set portion of this procedure for instructions on viewing the CSRS metadata.
Restart the mongod as a new single-node replica set.
Shut down the mongod. Uncomment or add the following configuration file options:
replication:
   replSetName: myNewShardName
sharding:
   clusterRole: shardsvr
If you want to change the replica set name, you must update the replSetName field with the new name before proceeding.
Start the mongod with the updated configuration file:
mongod --config /path/to/mongodb/mongod.conf
If you have mongod configured to run as a system service, start it using the recommended process for your system service manager.
After the mongod starts, connect to it using the mongo shell.
Initiate the new replica set.
Initiate the replica set using rs.initiate() with the default settings.
rs.initiate()
Once the operation completes, use rs.status() to check that the member has become the primary.
Add additional replica set members.
For each replica set member in the shard replica set, start the mongod on its host machine. Once you have started up all remaining members of the cluster successfully, connect a mongo shell to the primary replica set member. From the primary, use the rs.add() method to add each member of the replica set. Include the replica set name as the prefix, followed by the hostname and port of the member's mongod process:
rs.add("repl2.example.net:27018") rs.add("repl3.example.net:27018")
If you want to add a member with specific replica member configuration settings, you can pass a document to rs.add() that defines the member hostname and any member settings your deployment requires:
rs.add( { "host" : "repl2.example.net:27018", priority: <int>, votes: <int>, tags: <document> } )
Each new member performs an initial sync to catch up to the primary. Depending on factors such as the amount of data to sync, your network topology and health, and the power of each host machine, initial sync may take an extended period of time to complete.
The replica set may elect a new primary while you add additional members. Use rs.status() to identify which member is the current primary. You can only run rs.add() from the primary.
Configure any additional required replication settings.
The rs.reconfig() method updates the replica set configuration based on a configuration document passed in as a parameter. You must run rs.reconfig() against the primary member of the replica set.
Reference the original configuration file output of the replica set as identified in step A. Review Replica Set Configurations and apply settings as needed.
Remove the temporary privileged user.
For clusters enforcing authentication, remove the privileged user created earlier in this procedure:
-
Authenticate as a user with the userAdmin role on the admin database or the userAdminAnyDatabase role:
use admin
db.auth("myUserAdmin", "mySecurePassword")
-
Delete the privileged user:
db.dropUser("mySystemUser")
E. Restart Each mongos
Restart each mongos in the cluster.
mongos --config /path/to/config/mongos.conf
Include all other command line options as required by your deployment.
If the CSRS replica set name or any member hostname changed, update the mongos configuration file setting sharding.configDB with the updated config server connection string:
sharding:
   configDB: "myNewCSRSName/config1.example.net:27019,config2.example.net:27019,config3.example.net:27019"
F. Validate Cluster Accessibility
Connect a mongo shell to one of the mongos processes for the cluster. Use sh.status() to check the overall cluster status. If sh.status() indicates that the balancer is not running, use sh.startBalancer() to restart the balancer. [1]
To confirm that all shards are accessible and communicating, insert test data into a temporary sharded collection. Confirm that data is being split and migrated between each shard in your cluster. You can connect a mongo shell to each shard primary and use db.collection.find() to validate that the data was sharded as expected.
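For example, a sketch that shards a temporary collection and inserts test documents; the database and collection names are placeholders:
// Shard a scratch collection on a hashed key so documents spread across shards.
sh.enableSharding("restoreTest")
sh.shardCollection("restoreTest.checks", { _id: "hashed" })
// Insert test documents in one batch.
const docs = []
for (let i = 0; i < 50000; i++) { docs.push( { _id: i, payload: "x".repeat(128) } ) }
db.getSiblingDB("restoreTest").checks.insertMany(docs)
// Review how the documents are distributed across shards.
db.getSiblingDB("restoreTest").checks.getShardDistribution()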
[1] Starting in MongoDB 6.1, automatic chunk splitting is not performed. This is because of balancing policy improvements. Auto-splitting commands still exist, but do not perform an operation. For details, see Balancing Policy Changes. In MongoDB versions earlier than 6.1, sh.startBalancer() also enables auto-splitting for the sharded cluster.