Changed in version 3.4: The balancer process has moved from the mongos instances to the primary member of the config server replica set.
This page describes common administrative procedures related to balancing. For an introduction to balancing, see Sharded Cluster Balancer. For lower level information on balancing, see Cluster Balancer.
sh.getBalancerState() checks if the balancer is enabled (i.e. that the balancer is permitted to run). sh.getBalancerState() does not check if the balancer is actively balancing chunks.
To see if the balancer is enabled in your sharded cluster, issue the following command, which returns a boolean:
sh.getBalancerState()
You can also see if the balancer is enabled using sh.status(). The currently-enabled field indicates whether the balancer is enabled, while the currently-running field indicates if the balancer is currently running.
To see if the balancer process is active in your cluster:
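For example, from a mongo shell connected to a mongos (a minimal sketch; it assumes you are already connected), the following reports whether a balancing round is currently in progress:
sh.isBalancerRunning()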
The default chunk size for a sharded cluster is 128 megabytes. In most situations, the default size is appropriate for splitting and migrating chunks. For information on how chunk size affects deployments, see Chunk Size.
Changing the default chunk size affects chunks that are processed during migrations and auto-splits but does not retroactively affect all chunks.
To configure the default chunk size, see Modify Chunk Size in a Sharded Cluster.
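As a minimal sketch of that procedure, assuming the chunk size is stored in the chunksize document of the config.settings collection with its value expressed in megabytes, the following would set a 64 megabyte chunk size:
use config db.settings.updateOne( { _id: "chunksize" }, { $set: { value: 64 } }, { upsert: true } )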
In some situations, particularly when your data set grows slowly and a migration can impact performance, it is useful to ensure that the balancer is active only at certain times. The following procedure specifies the activeWindow, which is the timeframe during which the balancer will be able to migrate chunks:
Issue the following command to switch to the config database.
use config
Ensure that the balancer is not stopped. The balancer will not activate in the stopped state. To ensure that the balancer is not stopped, use sh.startBalancer(), as in the following:
sh.startBalancer()
The balancer will not start if you are outside of the activeWindow timeframe.
Starting in MongoDB 4.2, sh.startBalancer() also enables auto-splitting for the sharded cluster.
Set the activeWindow using updateOne(), as in the following:
db.settings.updateOne( { _id: "balancer" }, { $set: { activeWindow : { start : "<start-time>", stop : "<stop-time>" } } }, { upsert: true } )
Replace <start-time> and <stop-time> with time values using two digit hour and minute values (i.e. HH:MM) that specify the beginning and end boundaries of the balancing window.
For HH values, use hour values ranging from 00 - 23. For MM values, use minute values ranging from 00 - 59. MongoDB evaluates the start and stop times relative to the time zone of the member which is serving as primary in the config server replica set.
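For example, the following restricts balancing to a hypothetical overnight window from 11:00 PM to 6:00 AM (the times shown are purely illustrative):
db.settings.updateOne( { _id: "balancer" }, { $set: { activeWindow : { start : "23:00", stop : "06:00" } } }, { upsert: true } )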
The balancer window must be sufficient to complete the migration of all data inserted during the day.
As data insert rates can change based on activity and usage patterns, it is important to ensure that the balancing window you select will be sufficient to support the needs of your deployment.
If you have set the balancing window and wish to remove the schedule so that the balancer is always running, use $unset to clear the activeWindow, as in the following:
use config db.settings.updateOne( { _id : "balancer" }, { $unset : { activeWindow : true } } )
By default, the balancer may run at any time and only moves chunks as needed. To disable the balancer for a short period of time and prevent all migration, use the following procedure:
Connect to any mongos in the cluster using the mongo shell. Issue the following operation to disable the balancer:
sh.stopBalancer()
If a migration is in progress, the system will complete the in-progress migration before stopping.
Starting in MongoDB 4.2, sh.stopBalancer() also disables auto-splitting for the sharded cluster.
To verify that the balancer will not start, issue the following command, which returns false if the balancer is disabled:
sh.getBalancerState()
Optionally, to verify that no migrations are in progress after disabling, issue the following operation in the mongo shell:
use config while( sh.isBalancerRunning() ) { print("waiting..."); sleep(1000); }
To disable the balancer from a driver, use the balancerStop command against the admin database, as in the following:
db.adminCommand( { balancerStop: 1 } )
Use this procedure if you have disabled the balancer and are ready to re-enable it:
Connect to any mongos in the cluster using the mongo shell. Issue one of the following operations to enable the balancer:
From the mongo shell, issue:
sh.startBalancer()
To enable the balancer from a driver, use the balancerStart command against the admin database, as in the following:
db.adminCommand( { balancerStart: 1 } )
Starting in MongoDB 4.2, sh.startBalancer() also enables auto-splitting for the sharded cluster.
If MongoDB migrates a chunk during a backup, you can end up with an inconsistent snapshot of your sharded cluster. Never run a backup while the balancer is active. To ensure that the balancer is inactive during your backup operation:
If you turn the balancer off while it is in the middle of a balancing round, the shutdown is not instantaneous. The balancer completes the chunk move in progress and then ceases all further balancing rounds.
Before starting a backup operation, confirm that the balancer is not active. You can use the following command to determine if the balancer is active:
!sh.getBalancerState() && !sh.isBalancerRunning()
When the backup procedure is complete, you can reactivate the balancer process.
You can disable balancing for a specific collection with the sh.disableBalancing() method. You may want to disable the balancer for a specific collection to support maintenance operations or atypical workloads, for example, during data ingestions or data exports.
When you disable balancing on a collection, MongoDB will not interrupt in-progress migrations.
To disable balancing on a collection, connect to a mongos with the mongo shell and call the sh.disableBalancing() method.
For example:
sh.disableBalancing("students.grades")
The sh.disableBalancing() method accepts as its parameter the full namespace of the collection.
You can enable balancing for a specific collection with the sh.enableBalancing() method.
When you enable balancing for a collection, MongoDB will not immediately begin balancing data. However, if the data in your sharded collection is not balanced, MongoDB will be able to begin distributing the data more evenly.
To enable balancing on a collection, connect to a mongos with the mongo shell and call the sh.enableBalancing() method.
For example:
sh.enableBalancing("students.grades")
The sh.enableBalancing() method accepts as its parameter the full namespace of the collection.
To confirm whether balancing for a collection is enabled or disabled, query the collections collection in the config database for the collection namespace and check the noBalance field. For example:
db.getSiblingDB("config").collections.findOne({_id : "students.grades"}).noBalance;
This operation will return a null error, true, false, or no output:
A null error indicates that the collection namespace is incorrect.
If the result is true, balancing is disabled.
If the result is false, balancing is enabled currently but has been disabled in the past for the collection.
If the operation returns no output, balancing is enabled currently and has never been disabled in the past for the collection.
You can also see if the balancer is enabled using sh.status(). The currently-enabled field indicates if the balancer is enabled.
During chunk migration, the _secondaryThrottle value determines when the migration proceeds with the next document in the chunk.
In the config.settings collection:
If the _secondaryThrottle setting for the balancer is set to a write concern, each document move during chunk migration must receive the requested acknowledgement before proceeding with the next document.
If the _secondaryThrottle setting for the balancer is set to true, each document move during chunk migration must receive acknowledgement from at least one secondary before the migration proceeds with the next document in the chunk. This is equivalent to a write concern of { w: 2 }.
If the _secondaryThrottle setting is unset, the migration process does not wait for replication to a secondary and instead continues with the next document. This is the default behavior for WiredTiger starting in MongoDB 3.4.
To change the _secondaryThrottle setting, connect to a mongos instance and directly update the _secondaryThrottle value in the settings collection of the config database. For example, from a mongo shell connected to a mongos, issue the following command:
use config db.settings.updateOne( { "_id" : "balancer" }, { $set : { "_secondaryThrottle" : { "w": "majority" } } }, { upsert : true } )
The effects of changing the _secondaryThrottle setting may not be immediate. To ensure an immediate effect, stop and restart the balancer to enable the selected value of _secondaryThrottle.
For more information on the replication behavior during various steps of chunk migration, see Chunk Migration and Replication.
For the moveChunk command, you can use the command's _secondaryThrottle and writeConcern options to specify the behavior during the command. For details, see the moveChunk command.
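As an illustrative sketch only, a moveChunk invocation that sets both options might look like the following; the namespace, find query, and destination shard are hypothetical placeholders:
db.adminCommand( { moveChunk: "test.users", find: { username: "user1" }, to: "shard0001", _secondaryThrottle: true, writeConcern: { w: 2 } } )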
The _waitForDelete setting of the balancer and the moveChunk command affects how the balancer migrates multiple chunks from a shard. By default, the balancer does not wait for the on-going migration's delete phase to complete before starting the next chunk migration. To have the delete phase block the start of the next chunk migration, you can set _waitForDelete to true.
For details on chunk migration, see Chunk Migration. For details on the chunk migration queuing behavior, see Asynchronous Chunk Migration Cleanup.
The _waitForDelete setting is generally for internal testing purposes. To change the balancer's _waitForDelete value:
Connect to a mongos instance. Update the _waitForDelete value in the settings collection of the config database. For example:
use config db.settings.updateOne( { "_id" : "balancer" }, { $set : { "_waitForDelete" : true } }, { upsert : true } )
Once set to true, to revert to the default behavior:
Connect to a mongos instance. Update or unset the _waitForDelete field in the settings collection of the config database:
use config db.settings.updateOne( { "_id" : "balancer", "_waitForDelete": true }, { $unset : { "_waitForDelete" : "" } } )
By default, MongoDB cannot move a chunk if the number of documents in the chunk is greater than 1.3 times the result of dividing the configured chunk size by the average document size.
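For example, with the default 128 megabyte chunk size and a hypothetical average document size of 2 kilobytes, a chunk containing more than roughly 1.3 × (131072 KB ÷ 2 KB) ≈ 85,000 documents cannot be moved by default.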
Starting in MongoDB 4.4, by specifying the balancer setting attemptToBalanceJumboChunks to true, the balancer can migrate these large chunks as long as they have not been labeled as jumbo.
To set the balancer's attemptToBalanceJumboChunks setting, connect to a mongos instance and directly update the config.settings collection. For example, from a mongo shell connected to a mongos instance, issue the following command:
db.getSiblingDB("config").settings.updateOne( { _id: "balancer" }, { $set: { attemptToBalanceJumboChunks : true } }, { upsert: true } )
When the balancer attempts to move the chunk, if the queue of writes that modify any documents being migrated surpasses 500MB of memory, the migration will fail. For details on the migration procedure, see Chunk Migration Procedure.
If the chunk you want to move is labeled jumbo, you can manually clear the jumbo flag to have the balancer attempt to migrate the chunk.
Alternatively, you can use the moveChunk command with forceJumbo: true to manually migrate chunks that exceed the size limit (with or without the jumbo label). However, when you run moveChunk with forceJumbo: true, write operations to the collection may block for a long period of time during the migration.
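As an illustrative sketch only (the namespace, find query, and destination shard are hypothetical placeholders), such an invocation might look like:
db.adminCommand( { moveChunk: "test.users", find: { username: "user1" }, to: "shard0001", forceJumbo: true } )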
By default, shards have no constraints on storage size. However, you can set a maximum storage size for a given shard in the sharded cluster. When selecting potential destination shards, the balancer ignores shards where a migration would exceed the configured maximum storage size.
The shards collection in the config database stores configuration data related to shards.
{ "_id" : "shard0000", "host" : "shard1.example.com:27100" } { "_id" : "shard0001", "host" : "shard2.example.com:27200" }
To limit the storage size for a given shard, use the db.collection.updateOne() method with the $set operator to create the maxSize field and assign it an integer value. The maxSize field represents the maximum storage size for the shard in megabytes.
The following operation sets a maximum size on a shard of 1024 megabytes:
config = db.getSiblingDB("config") config.shards.updateOne( { "_id" : "<shard>"}, { $set : { "maxSize" : 1024 } } )
This value includes the mapped size of all data files on the shard, including the local and admin databases.
By default, maxSize is not specified, allowing shards to consume the total amount of available space on their machines if necessary.
You can also set maxSize when adding a shard.
To set maxSize when adding a shard, set the addShard command's maxSize parameter to the maximum size in megabytes. The following command, run in the mongo shell, adds a shard with a maximum size of 125 megabytes:
config = db.getSiblingDB("config") config.runCommand( { addshard : "example.net:34008", maxSize : 125 } )