
Restore a Sharded Cluster

This procedure restores a sharded cluster from an existing backup snapshot, such as LVM snapshots. The source and target sharded clusters must have the same number of shards. For information on creating LVM snapshots for all components of a sharded cluster, see Back Up a Sharded Cluster with File System Snapshots.

Note

mongodump and mongorestore cannot be part of a backup strategy for 4.2+ sharded clusters that have sharded transactions in progress, as backups created with mongodump do not maintain the atomicity guarantees of transactions across shards.

For 4.2+ sharded clusters with in-progress sharded transactions, use one of the following coordinated backup and restore processes, which do maintain the atomicity guarantees of transactions across shards: MongoDB Atlas, MongoDB Cloud Manager, or MongoDB Ops Manager.

Considerations

For encrypted storage engines that use AES256-GCM encryption mode, AES256-GCM requires that every process use a unique counter block value with the key.

For encrypted storage engines configured with the AES256-GCM cipher:

Restoring from Hot Backup
Starting in 4.2, if you restore from files taken via "hot" backup (i.e. the mongod is running), MongoDB can detect "dirty" keys on startup and automatically roll over the database key to avoid IV (Initialization Vector) reuse.
Restoring from Cold Backup

However, if you restore from files taken via "cold" backup (i.e. the mongod is not running), MongoDB cannot detect "dirty" keys on startup, and reuse of IV voids confidentiality and integrity guarantees.

Starting in 4.2, to avoid the reuse of the keys after restoring from a cold filesystem snapshot, MongoDB adds a new command-line option --eseDatabaseKeyRollover. When started with the --eseDatabaseKeyRollover option, the mongod instance rolls over the database keys configured with AES256-GCM cipher and exits.
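For example, after restoring data files from a cold filesystem snapshot, you might perform the key rollover before the normal startup (a sketch; the configuration file path is illustrative):

mongod --config /path/to/mongodb/mongod.conf --eseDatabaseKeyRollover

Once the process exits, start the mongod normally, without the option.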

Tip
  • In general, if using filesystem based backups for MongoDB Enterprise 4.2+, use the "hot" backup feature, if possible.
  • For MongoDB Enterprise versions 4.0 and earlier, if you use AES256-GCM encryption mode, do not make copies of your data files or restore from filesystem snapshots ("hot" or "cold").

A. (Optional) Review Replica Set Configurations

This procedure initiates a new replica set for the Config Server Replica Set (CSRS) and each shard replica set using the default configuration. To use a different replica set configuration for your restored CSRS and shards, you must reconfigure the replica set(s).

If your source cluster is healthy and accessible, connect a mongo shell to the primary replica set member in each replica set and run rs.conf() to view the replica configuration document.
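For example, assuming a primary at repl1.example.net:27018 (the hostname is illustrative):

mongo --host "repl1.example.net:27018"
rs.conf()

Save each configuration document for reference while restoring.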

If you cannot access one or more components of the source sharded cluster, please reference any existing internal documentation to reconstruct the configuration requirements for each shard replica set and the config server replica set.

B. Prepare the Target Host for Restoration

Storage Space Requirements
Ensure the target host hardware has sufficient open storage space for the restored data. If the target host contains existing sharded cluster data that you want to keep, ensure that you have enough storage space for both the existing data and the restored data.
LVM Requirements
For LVM snapshots, you must have at least one LVM managed volume group and a logical volume with enough free space for the extracted snapshot data.
MongoDB Version Requirements

Ensure the target host and source host have the same MongoDB Server version. To check the version of MongoDB available on a host machine, run mongod --version from the terminal or shell.

For complete documentation on installation, see Install MongoDB.

Shut Down Running MongoDB Processes

If restoring to an existing cluster, shut down the mongod or mongos process on the target host.

For hosts running mongos, connect a mongo shell to the mongos and run db.shutdownServer() from the admin database:

use admin
db.shutdownServer()

For hosts running a mongod, connect a mongo shell to the mongod and run db.hello() to determine the member's state before shutting it down:
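use admin
db.hello()

If the output shows "isWritablePrimary" : true, the member is the primary: step it down with rs.stepDown() before running db.shutdownServer(). Otherwise, run db.shutdownServer() directly.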

Prepare Data Directory

Create a directory on the target host for the restored database files. Ensure that the user that runs the mongod has read, write, and execute permissions for all files and subfolders in that directory:

sudo mkdir /path/to/mongodb
sudo chown -R mongodb:mongodb /path/to/mongodb
sudo chmod -R 770 /path/to/mongodb

Substitute /path/to/mongodb with the path to the data directory you created. On RHEL / CentOS, Amazon Linux, and SUSE, the default username is mongod.

Prepare Log Directory

Create a directory on the target host for the mongod log files. Ensure that the user that runs the mongod has read, write, and execute permissions for all files and subfolders in that directory:

sudo mkdir /path/to/mongodb/logs
sudo chown -R mongodb:mongodb /path/to/mongodb/logs
sudo chmod -R 770 /path/to/mongodb/logs

Substitute /path/to/mongodb/logs with the path to the log directory you created. On RHEL / CentOS, Amazon Linux, and SUSE, the default username is mongod.

Create Configuration File

This procedure assumes starting a mongod with a configuration file.

Create the configuration file in your preferred location. Ensure that the user that runs the mongod has read and write permissions on the configuration file:

sudo touch /path/to/mongodb/mongod.conf
sudo chown mongodb:mongodb /path/to/mongodb/mongod.conf
sudo chmod 644 /path/to/mongodb/mongod.conf

On RHEL / CentOS, Amazon Linux, and SUSE, the default username is mongod.

Open the configuration file in your preferred text editor and modify it as required by your deployment. Alternatively, if you have access to the original configuration file for the mongod, copy it to your preferred location on the target host.

Important

Validate that your configuration file includes the following settings:

  • storage.dbPath must be set to the path to your preferred data directory.
  • systemLog.path must be set to the path to your preferred log directory.
  • net.bindIp must include the IP address of the host machine.
  • replication.replSetName must have the same value across each member in any given replica set.
  • sharding.clusterRole must have the same value across each member in any given replica set.
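A minimal configuration file satisfying these requirements for a CSRS member might look like the following sketch (all paths, names, and addresses are illustrative; for a shard member, use clusterRole: shardsvr and the shard's replica set name):

storage:
  dbPath: /path/to/mongodb
systemLog:
  destination: file
  path: /path/to/mongodb/logs/mongod.log
net:
  bindIp: localhost,10.0.0.5
replication:
  replSetName: myNewCSRSName
sharding:
  clusterRole: configsvr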

C. Restore Config Server Replica Set

1

Restore the CSRS primary mongod data files.

Follow the steps that correspond to your preferred backup method.

LVM snapshot:

  1. Mount the LVM snapshot on the target host machine. The specific steps for mounting an LVM snapshot depend on your LVM configuration.

    The following example assumes an LVM snapshot created using the Create a Snapshot step in the Back Up and Restore with Filesystem Snapshots procedure.

    lvcreate --size 250GB --name mongod-datafiles-snapshot vg0
    gzip -d -c mongod-datafiles-snapshot.gz | dd of=/dev/vg0/mongod-datafiles-snapshot
    mount /dev/vg0/mongod-datafiles-snapshot /snap/mongodb

    This example may not apply to all possible LVM configurations. Refer to the LVM documentation for your system for more complete guidance on LVM restoration.

  2. Copy the mongod data files from the snapshot mount to the data directory created in B. Prepare the Target Host for Restoration:

    cp -a /snap/mongodb/path/to/mongodb /path/to/mongodb

    The -a option recursively copies the contents of the source path to the destination path while preserving folder and file permissions.

  3. Comment out or omit the following configuration file settings:

    #replication
    # replSetName: myCSRSName
    #sharding
    # clusterRole: configsvr

  4. To start the mongod using a configuration file, specify the --config option with the full path to the configuration file:

    mongod --config /path/to/mongodb/mongod.conf

    If you are restoring from a namespace-filtered snapshot, specify the --restore option.

    mongod --config /path/to/mongodb/mongod.conf --restore

    If you have mongod configured to run as a system service, start it using the recommended process for your system service manager.

    After the mongod starts, connect to it using the mongo shell.

Other backup files:

  1. Make the data files stored in your selected backup medium accessible on the host. This may require mounting the backup volume, opening the backup in a software utility, or using another tool to extract the data to disk. Refer to the documentation for your preferred backup tool for instructions on accessing the data contained in the backup.

  2. Copy the mongod data files from the backup data location to the data directory created in B. Prepare the Target Host for Restoration:

    cp -a /backup/mongodb/path/to/mongodb /path/to/mongodb

    The -a option recursively copies the contents of the source path to the destination path while preserving folder and file permissions.

  3. Comment out or omit the following configuration file settings:

    #replication
    # replSetName: myCSRSName
    #sharding
    # clusterRole: configsvr
  4. To start the mongod using a configuration file, specify the --config option with the full path to the configuration file:

    mongod --config /path/to/mongodb/mongod.conf

    If restoring from a namespace-filtered snapshot, also specify the --restore option.

    mongod --config /path/to/mongodb/mongod.conf --restore
    Note

    Cloud Manager or Ops Manager Only

    If performing a manual restoration of a Cloud Manager or Ops Manager backup, you must specify the disableLogicalSessionCacheRefresh server parameter prior to startup.

    mongod --config /path/to/mongodb/mongod.conf \
    --setParameter disableLogicalSessionCacheRefresh=true

    If you have mongod configured to run as a system service, start it using the recommended process for your system service manager.

    After the mongod starts, connect to it using the mongo shell.

2

Drop the local database.

Use db.dropDatabase() to drop the local database:

use local
db.dropDatabase()
3

Insert the filtered file list into the local database.

This step is only required if you are restoring from a namespace-filtered snapshot.

For each shard, locate the filtered file list with the following name format: <shardRsID>-filteredFileList.txt. This file contains a list of JSON objects with the following format:

{
  "filename" : "file1",
  "ns" : "sampleDb1.sampleCollection1",
  "uuid" : "3b241101-e2bb-4255-8caf-4136c566a962"
}

Add each JSON object from each shard file to a new system.collections_to_restore collection in your local database. You can ignore entries with empty ns or uuid fields. When inserting entries, the uuid field must be inserted as type UUID().
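For example, the sample object above could be inserted as follows (a sketch; note that the uuid string is wrapped in UUID()):

use local
db.system.collections_to_restore.insertOne(
  {
    "filename" : "file1",
    "ns" : "sampleDb1.sampleCollection1",
    "uuid" : UUID("3b241101-e2bb-4255-8caf-4136c566a962")
  }
)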

4

Run the filtered file restore command.

This step is only required if you are restoring from a namespace-filtered snapshot.

After inserting all entries, run the following commands:

use admin
db.runCommand({"_configsvrRunRestore":1})
5

For any planned or completed shard hostname or replica set name changes, update the metadata in config.shards.

You can skip this step if all of the following are true:

  • No shard member host machine hostname has or will change during this procedure.
  • No shard replica set name has or will change during this procedure.

Issue the following find() method on the shards collection in the Config Database. Replace <shardName> with the name of the shard. By default, the shard name is its replica set name. If you added the shard using the addShard command and specified a custom name, you must use that name for <shardName>.

use config
db.shards.find( { "_id" : "<shardName>" } )

This operation returns a document that resembles the following:

{
  "_id" : "shard1",
  "host" : "myShardName/alpha.example.net:27018,beta.example.net:27018,charlie.example.net:27018",
  "state" : 1
}
Important

The _id value must match the shardName value in the _id : "shardIdentity" document on the corresponding shard. When restoring the shards later in this procedure, validate that the _id field in shards matches the shardName value on the shard.

Use the updateOne() method to update the host string to reflect the planned replica set name and hostname list for the shard. For example, the following operation updates the host connection string for the shard with "_id" : "shard1":

db.shards.updateOne(
  { "_id" : "shard1" },
  { $set : { "host" : "myNewShardName/repl1.example.net:27018,repl2.example.net:27018,repl3.example.net:27018" } }
)

Repeat this process until all shard metadata accurately reflects the planned replica set name and hostname list for each shard in the cluster.

Note

If you do not know the shard name, issue the find() method on the shards collection with an empty filter document {}:

use config
db.shards.find({})

Each document in the result set represents one shard in the cluster. For each document, check the host field for a connection string that matches the shard in question, i.e. a matching replica set name and member hostname list. Use the _id of that document in place of <shardName>.

6

Restart the mongod as a new single-node replica set.

Shut down the mongod. Uncomment or add the following configuration file options:

replication:
  replSetName: myNewCSRSName
sharding:
  clusterRole: configsvr

If you want to change the replica set name, you must update the replSetName field with the new name before proceeding.

Start the mongod with the updated configuration file:

mongod --config /path/to/mongodb/mongod.conf

If you have mongod configured to run as a system service, start it using the recommended process for your system service manager.

After the mongod starts, connect to it using the mongo shell.

7

Initiate the new replica set.

Initiate the replica set using rs.initiate() with the default settings.

rs.initiate()

Once the operation completes, use rs.status() to check that the member has become the primary.
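To confirm, you can inspect the member's state directly (a sketch; in a single-member set the member index is 0):

rs.status().members[0].stateStr

The value reads "PRIMARY" once the election completes.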

8

Add additional replica set members.

For each replica set member in the CSRS, start the mongod on its host machine. Once you have started all remaining members of the cluster successfully, connect a mongo shell to the primary replica set member. From the primary, use the rs.add() method to add each member of the replica set, specifying the hostname and port of the member's mongod process:

rs.add("config2.example.net:27019")
rs.add("config3.example.net:27019")

If you want to add the member with specific replica member configuration settings, you can pass a document to rs.add() that defines the member hostname and any member settings your deployment requires.

rs.add(
  {
    "host" : "config2.example.net:27019",
    priority: <int>,
    votes: <int>,
    tags: <document>
  }
)

Each new member performs an initial sync to catch up to the primary. Depending on factors such as the amount of data to sync, your network topology and health, and the power of each host machine, initial sync may take an extended period of time to complete.

The replica set may elect a new primary while you add additional members. Use rs.status() to identify which member is the current primary. You can only run rs.add() from the primary.

9

Configure any additional required replication settings.

The rs.reconfig() method updates the replica set configuration based on a configuration document passed in as a parameter. You must run rs.reconfig() against the primary member of the replica set.

Reference the original configuration file output of the replica set as identified in step A. Review Replica Set Configurations and apply settings as needed.
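For example, the following sketch raises the first member's priority (an illustrative setting; apply whatever values your original configuration used):

cfg = rs.conf()
cfg.members[0].priority = 2
rs.reconfig(cfg)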

D. Restore Each Shard Replica Set

1

Restore the shard primary mongod data files.

Follow the steps that correspond to your preferred backup method.

LVM snapshot:

  1. Mount the LVM snapshot on the target host machine. The specific steps for mounting an LVM snapshot depend on your LVM configuration.

    The following example assumes an LVM snapshot created using the Create a Snapshot step in the Back Up and Restore with Filesystem Snapshots procedure.

    lvcreate --size 250GB --name mongod-datafiles-snapshot vg0
    gzip -d -c mongod-datafiles-snapshot.gz | dd of=/dev/vg0/mongod-datafiles-snapshot
    mount /dev/vg0/mongod-datafiles-snapshot /snap/mongodb

    This example may not apply to all possible LVM configurations. Refer to the LVM documentation for your system for more complete guidance on LVM restoration.

  2. Copy the mongod data files from the snapshot mount to the data directory created in B. Prepare the Target Host for Restoration:

    cp -a /snap/mongodb/path/to/mongodb /path/to/mongodb

    The -a option recursively copies the contents of the source path to the destination path while preserving folder and file permissions.

  3. Comment out or omit the following configuration file settings:

    #replication
    # replSetName: myShardName
    #sharding
    # clusterRole: shardsvr

  4. To start the mongod using a configuration file, specify the --config option with the full path to the configuration file:

    mongod --config /path/to/mongodb/mongod.conf

    If you have mongod configured to run as a system service, start it using the recommended process for your system service manager.

    After the mongod starts, connect to it using the mongo shell.

Other backup files:

  1. Make the data files stored in your selected backup medium accessible on the host. This may require mounting the backup volume, opening the backup in a software utility, or using another tool to extract the data to disk. Refer to the documentation for your preferred backup tool for instructions on accessing the data contained in the backup.

  2. Copy the mongod data files from the backup data location to the data directory created in B. Prepare the Target Host for Restoration:

    cp -a /backup/mongodb/path/to/mongodb /path/to/mongodb

    The -a option recursively copies the contents of the source path to the destination path while preserving folder and file permissions.

  3. Comment out or omit the following configuration file settings:

    #replication
    # replSetName: myShardName
    #sharding
    # clusterRole: shardsvr
  4. To start the mongod using a configuration file, specify the --config option with the full path to the configuration file:

    mongod --config /path/to/mongodb/mongod.conf
    Note

    Cloud Manager or Ops Manager Only

    If performing a manual restoration of a Cloud Manager or Ops Manager backup, you must specify the disableLogicalSessionCacheRefresh server parameter prior to startup:

    mongod --config /path/to/mongodb/mongod.conf \
    --setParameter disableLogicalSessionCacheRefresh=true

    If you have mongod configured to run as a system service, start it using the recommended process for your system service manager.

    After the mongod starts, connect to it using the mongo shell.

2

Create a temporary user with the __system role.

During this procedure you will modify documents in the admin.system.version collection. For clusters enforcing authentication, only the __system role grants permission to modify this collection. You can skip this step if the cluster does not enforce authentication.

Warning

The __system role entitles its holder to take any action against any object in the database. This procedure includes instructions for removing the user created in this step. Do not keep this user active beyond the scope of this procedure.

Consider creating this user with the clientSource authentication restriction configured such that only the specified hosts can authenticate as the privileged user.

  1. Authenticate as a user with the userAdmin role on the admin database or userAdminAnyDatabase role:

    use admin
    db.auth("myUserAdmin","mySecurePassword")
  2. Create a user with the __system role:

    db.createUser(
      {
        user: "mySystemUser",
        pwd: "<replaceMeWithAStrongPassword>",
        roles: [ "__system" ]
      }
    )

    Passwords should be random, long, and complex to ensure system security and to prevent or delay malicious access.

  3. Authenticate as the privileged user:

    db.auth("mySystemUser","<replaceMeWithAStrongPassword>")
3

Drop the local database.

Use db.dropDatabase() to drop the local database:

use local
db.dropDatabase()
4

Remove the minOpTimeRecovery document from the admin.system.version collection.

To update the sharding internals, issue the following deleteOne() method on the system.version collection in the admin database:

use admin
db.system.version.deleteOne( { _id: "minOpTimeRecovery" } )
Note

The system.version collection is an internal, system collection. You should only modify it when given specific instructions like these.

5

Optional: For any CSRS hostname or replica set name changes, update shard metadata in each shard's identity document.

You can skip this step if all of the following are true:

  • The hostnames for any CSRS host did not change during this procedure.
  • The CSRS replica set name did not change during this procedure.

The system.version collection on the admin database contains metadata related to the shard, including the CSRS connection string. If either the CSRS name or any member hostnames changed while restoring the CSRS, you must update this metadata.

Issue the following find() method on the system.version collection in the admin database:

use admin
db.system.version.find( {"_id" : "shardIdentity" } )

The find() method returns a document that resembles the following:

{
  "_id" : "shardIdentity",
  "clusterId" : ObjectId("2bba123c6eeedcd192b19024"),
  "shardName" : "shard1",
  "configsvrConnectionString" : "myCSRSName/alpha.example.net:27019,beta.example.net:27019,charlie.example.net:27019"
}

The following updateOne() method updates the document such that the host string represents the most current CSRS connection string:

db.system.version.updateOne(
  { "_id" : "shardIdentity" },
  { $set :
    { "configsvrConnectionString" : "myNewCSRSName/config1.example.net:27019,config2.example.net:27019,config3.example.net:27019" }
  }
)
Important

The shardName value must match the _id value in the shards collection on the CSRS. Validate that the metadata on the CSRS matches the metadata for the shard. Refer to step 5 in the C. Restore Config Server Replica Set portion of this procedure for instructions on viewing the CSRS metadata.

6

Restart the mongod as a new single-node replica set.

Shut down the mongod. Uncomment or add the following configuration file options:

replication:
  replSetName: myNewShardName
sharding:
  clusterRole: shardsvr

If you want to change the replica set name, you must update the replSetName field with the new name before proceeding.

Start the mongod with the updated configuration file:

mongod --config /path/to/mongodb/mongod.conf

If you have mongod configured to run as a system service, start it using the recommended process for your system service manager.

After the mongod starts, connect to it using the mongo shell.

7

Initiate the new replica set.

Initiate the replica set using rs.initiate() with the default settings.

rs.initiate()

Once the operation completes, use rs.status() to check that the member has become the primary.

8

Add additional replica set members.

For each replica set member in the shard replica set, start the mongod on its host machine. Once you have started all remaining members of the cluster successfully, connect a mongo shell to the primary replica set member. From the primary, use the rs.add() method to add each member of the replica set, specifying the hostname and port of the member's mongod process:

rs.add("repl2.example.net:27018")
rs.add("repl3.example.net:27018")

If you want to add the member with specific replica member configuration settings, you can pass a document to rs.add() that defines the member hostname and any member settings your deployment requires.

rs.add(
  {
    "host" : "repl2.example.net:27018",
    priority: <int>,
    votes: <int>,
    tags: <document>
  }
)

Each new member performs an initial sync to catch up to the primary. Depending on factors such as the amount of data to sync, your network topology and health, and the power of each host machine, initial sync may take an extended period of time to complete.

The replica set may elect a new primary while you add additional members. Use rs.status() to identify which member is the current primary. You can only run rs.add() from the primary.

9

Configure any additional required replication settings.

The rs.reconfig() method updates the replica set configuration based on a configuration document passed in as a parameter. You must run rs.reconfig() against the primary member of the replica set.

Reference the original configuration file output of the replica set as identified in step A. Review Replica Set Configurations and apply settings as needed.

10

Remove the temporary privileged user.

For clusters enforcing authentication, remove the privileged user created earlier in this procedure:

  1. Authenticate as a user with the userAdmin role on the admin database or userAdminAnyDatabase role:

    use admin
    db.auth("myUserAdmin","mySecurePassword")
  2. Delete the privileged user:

    db.dropUser("mySystemUser")

E. Restart Each mongos

Restart each mongos in the cluster.

mongos --config /path/to/config/mongos.conf

Include all other command line options as required by your deployment.

If the CSRS replica set name or any member hostname changed, update the sharding.configDB setting in each mongos configuration file with the updated config server connection string:

sharding:
  configDB: "myNewCSRSName/config1.example.net:27019,config2.example.net:27019,config3.example.net:27019"

F. Validate Cluster Accessibility

Connect a mongo shell to one of the mongos processes for the cluster. Use sh.status() to check the overall cluster status. If sh.status() indicates that the balancer is not running, use sh.startBalancer() to restart the balancer. [1]

To confirm that all shards are accessible and communicating, insert test data into a temporary sharded collection. Confirm that data is being split and migrated between each shard in your cluster. You can connect a mongo shell to each shard primary and use db.collection.find() to validate that the data was sharded as expected.
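A quick smoke test might look like the following sketch (the database and collection names are illustrative):

sh.enableSharding("restoreTest")
sh.shardCollection("restoreTest.sample", { "_id" : "hashed" })
use restoreTest
for ( let i = 0; i < 10000; i++ ) { db.sample.insertOne( { "i" : i } ) }

With a hashed shard key, the inserted documents should distribute across shards; sh.status() shows the resulting chunk distribution.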

[1] Starting in MongoDB 6.1, automatic chunk splitting is not performed because of balancing policy improvements. Auto-splitting commands still exist, but do not perform an operation. For details, see Balancing Policy Changes. In MongoDB versions earlier than 6.1, sh.startBalancer() also enables auto-splitting for the sharded cluster.