MongoDB Limits and Thresholds

This document provides a collection of hard and soft limitations of the MongoDB system.

BSON Documents

BSON Document Size

The maximum BSON document size is 16 megabytes.

The maximum document size helps ensure that a single document cannot use an excessive amount of RAM or, during transmission, an excessive amount of bandwidth. To store documents larger than the maximum size, MongoDB provides the GridFS API. See mongofiles and the documentation for your driver for more information about GridFS.
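As a rough client-side sketch (not a driver API), the size check can be approximated in plain JavaScript. Note the assumption: JSON string length is only a stand-in for the true BSON size, which drivers compute with a BSON serializer, so this is a coarse pre-flight guard only.

```javascript
// Sketch: reject documents that clearly exceed the 16 MB BSON limit.
// NOTE: JSON length only approximates the true BSON size; drivers
// compute the exact size with their BSON serializer.
const MAX_BSON_SIZE = 16 * 1024 * 1024; // 16777216 bytes

function fitsInBsonLimit(doc) {
  // Buffer.byteLength counts UTF-8 bytes, not UTF-16 code units.
  const approxBytes = Buffer.byteLength(JSON.stringify(doc), "utf8");
  return approxBytes <= MAX_BSON_SIZE;
}

console.log(fitsInBsonLimit({ widgets: 1, price: 50 }));              // true
console.log(fitsInBsonLimit({ blob: "x".repeat(17 * 1024 * 1024) })); // false
```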

Nested Depth for BSON Documents

MongoDB supports no more than 100 levels of nesting for BSON documents.
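The depth limit can be checked client-side with a small helper. The depth convention used here (a scalar is depth 0; each object or array wrapper adds one level) is an assumption for illustration, and the function name is not part of any driver API.

```javascript
// Sketch: compute the nesting depth of a candidate document before insert.
// Assumed convention: scalar = 0, each object/array wrapper adds one level.
function nestingDepth(value) {
  if (value === null || typeof value !== "object") return 0;
  let max = 0;
  for (const v of Object.values(value)) {
    max = Math.max(max, nestingDepth(v));
  }
  return 1 + max;
}

console.log(nestingDepth({ a: { b: { c: 1 } } }));  // 3
console.log(nestingDepth({ a: [1, 2, 3] }) <= 100); // true, within the limit
```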

Naming Restrictions

Database Name Case Sensitivity

Database names are case-sensitive in MongoDB. They also have an additional restriction: case cannot be the only difference between database names.

Example

If the database salesDB already exists, MongoDB will return an error if you attempt to create a database named salesdb.

mixedCase = db.getSiblingDB('salesDB')
lowerCase = db.getSiblingDB('salesdb')
mixedCase.retail.insertOne({ "widgets": 1, "price": 50 })

The operation succeeds and insertOne() implicitly creates the salesDB database.

lowerCase.retail.insertOne({ "widgets": 1, "price": 50 })

The operation fails. insertOne() tries to create a salesdb database and is blocked by the naming restriction. Database names must differ on more than just case.

lowerCase.retail.find()

This operation does not return any results because database names are case-sensitive. There is no error because find() does not implicitly create a new database.

Restrictions on Database Names for Windows

For MongoDB deployments running on Windows, database names cannot contain any of the following characters:

/\. "$*<>:|?

Database names also cannot contain the null character.

Restrictions on Database Names for Unix and Linux Systems

For MongoDB deployments running on Unix and Linux systems, database names cannot contain any of the following characters:

/\. "$

Database names also cannot contain the null character.

Length of Database Names

Database names cannot be empty and must have fewer than 64 characters.
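A minimal sketch of a client-side validator for these naming rules. The helper name and the platform switch are illustrative, not part of any driver; it also does not check the case-uniqueness rule, which requires knowing the database names that already exist on the server.

```javascript
// Sketch: validate a database name against the limits above.
const FORBIDDEN = {
  windows: /[\/\\. "$*<>:|?]/, // Windows forbidden-character set
  unix: /[\/\\. "$]/,          // Unix/Linux forbidden-character set
};

function isValidDbName(name, platform = "unix") {
  if (name.length === 0 || name.length >= 64) return false; // non-empty, fewer than 64 chars
  if (name.includes("\u0000")) return false;                // no null character
  return !FORBIDDEN[platform].test(name);
}

console.log(isValidDbName("salesDB"));             // true
console.log(isValidDbName("sales db"));            // false (contains a space)
console.log(isValidDbName("sales*db", "windows")); // false (asterisk on Windows)
```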

Restriction on Collection Names

Collection names should begin with an underscore or a letter character, and cannot:

  • contain the $.
  • be an empty string (e.g. "").
  • contain the null character.
  • begin with the system. prefix. (Reserved for internal use.)

If your collection name includes special characters, such as the underscore character, or begins with numbers, then to access the collection use the db.getCollection() method in mongosh or a similar method for your driver.

Namespace Length:

  • For featureCompatibilityVersion set to "4.4" or greater, MongoDB raises the limit for unsharded collections and views to 255 bytes, and to 235 bytes for sharded collections. For a collection or a view, the namespace includes the database name, the dot (.) separator, and the collection/view name (e.g. <database>.<collection>).
  • For featureCompatibilityVersion set to "4.2" or earlier, the maximum length of unsharded collections and views namespaces remains 120 bytes, and 100 bytes for sharded collections.
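Namespace lengths are measured in bytes, which matters for multi-byte UTF-8 names. A quick check, assuming the fCV 4.4+ limits (the function names are illustrative):

```javascript
// Sketch: measure a namespace in UTF-8 bytes and compare to the
// fCV 4.4+ limits (255 bytes unsharded/view, 235 bytes sharded).
function namespaceBytes(database, collection) {
  return Buffer.byteLength(`${database}.${collection}`, "utf8");
}

function withinNamespaceLimit(database, collection, { sharded = false } = {}) {
  return namespaceBytes(database, collection) <= (sharded ? 235 : 255);
}

console.log(namespaceBytes("salesDB", "retail"));       // 14
console.log(withinNamespaceLimit("salesDB", "retail")); // true
console.log(withinNamespaceLimit("db", "x".repeat(254), { sharded: true })); // false
```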
Restrictions on Field Names
  • Field names cannot contain the null character.
  • The server permits storage of field names that contain dots (.) and dollar signs ($).
  • MongoDB 5.0 adds improved support for the use of ($) and (.) in field names. There are some restrictions. See Field Name Considerations for more details.
Restrictions on _id

The field name _id is reserved for use as a primary key; its value must be unique in the collection, is immutable, and may be of any type other than an array. If the _id contains subfields, the subfield names cannot begin with a ($) symbol.

Naming Warnings

Warning

Use caution: the issues discussed in this section could lead to data loss or corruption.

MongoDB does not support duplicate field names

The MongoDB Query Language is undefined over documents with duplicate field names. BSON builders may support creating a BSON document with duplicate field names. While the BSON builder may not throw an error, inserting these documents into MongoDB is not supported even if the insert succeeds. For example, inserting a BSON document with duplicate field names through a MongoDB driver may result in the driver silently dropping the duplicate values prior to insertion.

Import and Export Concerns With Dollar Signs ($) and Periods (.)

Starting in MongoDB 5.0, document field names can be dollar ($) prefixed and can contain periods (.). However, mongoimport and mongoexport may not work as expected in some situations with field names that make use of these characters.

MongoDB Extended JSON v2 cannot differentiate between type wrappers and fields that happen to have the same name as type wrappers. Do not use Extended JSON formats in contexts where the corresponding BSON representations might include dollar ($) prefixed keys. The DBRef mechanism is an exception to this general rule.

There are also restrictions on using mongoimport and mongoexport with periods (.) in field names. Since CSV files use the period (.) to represent data hierarchies, a period (.) in a field name will be misinterpreted as a level of nesting.

Possible Data Loss With Dollar Signs ($) and Periods (.)

There is a small chance of data loss when using dollar ($) prefixed field names or field names that contain periods (.) if these field names are used in conjunction with unacknowledged writes (write concern w=0) on servers that are older than MongoDB 5.0.

When running insert, update, and findAndModify commands, drivers that are 5.0 compatible remove restrictions on using documents with field names that are dollar ($) prefixed or that contain periods (.). These field names generated a client-side error in earlier driver versions.

The restrictions are removed regardless of the server version the driver is connected to. If a 5.0 driver sends a document to an older server, the document will be rejected without sending an error.

Namespaces

Namespace Length
  • For featureCompatibilityVersion set to "4.4" or greater, MongoDB raises the limit for unsharded collections and views to 255 bytes, and to 235 bytes for sharded collections. For a collection or a view, the namespace includes the database name, the dot (.) separator, and the collection/view name (e.g. <database>.<collection>).
  • For featureCompatibilityVersion set to "4.2" or earlier, the maximum length of unsharded collections and views namespaces remains 120 bytes, and 100 bytes for sharded collections.

Indexes

Index Key Limit
Note
Changed in version 4.2

Starting in version 4.2, MongoDB removes the Index Key Limit for featureCompatibilityVersion (fCV) set to "4.2" or greater.

For MongoDB 2.6 through MongoDB versions with fCV set to "4.0" or earlier, the total size of an index entry, which can include structural overhead depending on the BSON type, must be less than 1024 bytes.

When the Index Key Limit applies:

  • MongoDB will not create an index on a collection if the index entry for an existing document exceeds the index key limit.
  • Reindexing operations will error if the index entry for an indexed field exceeds the index key limit. Reindexing operations occur as part of the compact command as well as the db.collection.reIndex() method. Because these operations drop all the indexes from a collection and then recreate them sequentially, the error from the index key limit prevents these operations from rebuilding any remaining indexes for the collection.
  • MongoDB will not insert into an indexed collection any document with an indexed field whose corresponding index entry would exceed the index key limit, and instead, will return an error. Previous versions of MongoDB would insert but not index such documents.
  • Updates to the indexed field will error if the updated value causes the index entry to exceed the index key limit. If an existing document contains an indexed field whose index entry exceeds the limit, any update that results in the relocation of that document on disk will error.
  • mongorestore and mongoimport will not insert documents that contain an indexed field whose corresponding index entry would exceed the index key limit.
  • In MongoDB 2.6, secondary members of replica sets will continue to replicate documents with an indexed field whose corresponding index entry exceeds the index key limit on initial sync but will print warnings in the logs. Secondary members also allow index build and rebuild operations on a collection that contains an indexed field whose corresponding index entry exceeds the index key limit but with warnings in the logs. With mixed version replica sets where the secondaries are version 2.6 and the primary is version 2.4, secondaries will replicate documents inserted or updated on the 2.4 primary, but will print error messages in the log if the documents contain an indexed field whose corresponding index entry exceeds the index key limit.
  • For existing sharded collections, chunk migration will fail if the chunk has a document that contains an indexed field whose index entry exceeds the index key limit.
Number of Indexes per Collection

A single collection can have no more than 64 indexes.

Index Name Length
Note
Changed in version 4.2

Starting in version 4.2, MongoDB removes the Index Name Length limit for MongoDB versions with featureCompatibilityVersion (fCV) set to "4.2" or greater.

In previous versions of MongoDB or MongoDB versions with fCV set to "4.0" or earlier, fully qualified index names, which include the namespace and the dot separators (i.e. <database name>.<collection name>.$<index name>), cannot be longer than 127 bytes.

By default, <index name> is the concatenation of the field names and index type. You can explicitly specify the <index name> to the createIndex() method to ensure that the fully qualified index name does not exceed the limit.

Number of Indexed Fields in a Compound Index

There can be no more than 32 fields in a compound index.

Queries cannot use both text and Geospatial Indexes

You cannot combine the $text query, which requires a special text index, with a query operator that requires a different type of special index. For example, you cannot combine the $text query with the $near operator.

Fields with 2dsphere Indexes can only hold Geometries

Fields with 2dsphere indexes must hold geometry data in the form of coordinate pairs or GeoJSON data. If you attempt to insert a document with non-geometry data in a 2dsphere indexed field, or build a 2dsphere index on a collection where the indexed field has non-geometry data, the operation will fail.

Tip
See also:

The unique indexes limit in Sharding Operational Restrictions.

Limited Number of 2dsphere index keys

To generate keys for a 2dsphere index, mongod maps GeoJSON shapes to an internal representation. The resulting internal representation may be a large array of values.

When mongod generates index keys on a field that holds an array, mongod generates an index key for each array element. For compound indexes, mongod calculates the cartesian product of the sets of keys that are generated for each field. If both sets are large, then calculating the cartesian product could cause the operation to exceed memory limits.

indexMaxNumGeneratedKeysPerDocument limits the maximum number of keys generated for a single document to prevent out-of-memory errors. The default is 100000 index keys per document. It is possible to raise the limit, but if an operation requires more keys than the indexMaxNumGeneratedKeysPerDocument parameter specifies, the operation will fail.

NaN values returned from Covered Queries by the WiredTiger Storage Engine are always of type double

If the value of a field returned from a query that is covered by an index is NaN, the type of that NaN value is always double.

Multikey Index

Multikey indexes cannot cover queries over array field(s).

Geospatial Index

Geospatial indexes cannot cover a query.

Memory Usage in Index Builds

createIndexes supports building one or more indexes on a collection. createIndexes uses a combination of memory and temporary files on disk to complete index builds. The default limit on memory usage for createIndexes is 200 megabytes (for versions 4.2.3 and later) and 500 megabytes (for versions 4.2.2 and earlier), shared between all indexes built using a single createIndexes command. Once the memory limit is reached, createIndexes uses temporary disk files in a subdirectory named _tmp within the --dbpath directory to complete the build.

You can override the memory limit by setting the maxIndexBuildMemoryUsageMegabytes server parameter. Setting a higher memory limit may result in faster completion of index builds. However, setting this limit too high relative to the unused RAM on your system can result in memory exhaustion and server shutdown.

Changed in version 4.2.

Index builds may be initiated either by a user command such as Create Index or by an administrative process such as an initial sync. Both are subject to the limit set by maxIndexBuildMemoryUsageMegabytes.

An initial sync operation populates only one collection at a time and has no risk of exceeding the memory limit. However, it is possible for a user to start index builds on multiple collections in multiple databases simultaneously and potentially consume an amount of memory greater than the limit set in maxIndexBuildMemoryUsageMegabytes.

Tip

To minimize the impact of building an index on replica sets and sharded clusters with replica set shards, use a rolling index build procedure as described in Rolling Index Builds on Replica Sets.

Collation and Index Types

The following index types only support simple binary comparison and do not support collation:

Tip

To create a text, a 2d, or a geoHaystack index on a collection that has a non-simple collation, you must explicitly specify {collation: {locale: "simple"} } when creating the index.

Hidden Indexes

Sorts

Maximum Number of Sort Keys

You can sort on a maximum of 32 keys.

Data

Maximum Number of Documents in a Capped Collection

If you specify the maximum number of documents in a capped collection with create's max parameter, the value must be less than 2^31 documents.

If you do not specify a maximum number of documents when creating a capped collection, there is no limit on the number of documents.

Replica Sets

Number of Members of a Replica Set

Replica sets can have up to 50 members.

Number of Voting Members of a Replica Set

Replica sets can have up to 7 voting members. For replica sets with more than 7 total members, see Non-Voting Members.

Maximum Size of Auto-Created Oplog

If you do not explicitly specify an oplog size (i.e. with oplogSizeMB or --oplogSize), MongoDB will create an oplog that is no larger than 50 gigabytes. [1]

[1] Starting in MongoDB 4.0, the oplog can grow past its configured size limit to avoid deleting the majority commit point.

Sharded Clusters

Sharded clusters have the restrictions and thresholds described here.

Sharding Operational Restrictions

Operations Unavailable in Sharded Environments

$where does not permit references to the db object from the $where function. This is uncommon in unsharded collections.

The geoSearch command is not supported in sharded environments.

Covered Queries in Sharded Clusters

Starting in MongoDB 3.0, an index cannot cover a query on a sharded collection when run against a mongos if the index does not contain the shard key, with the following exception for the _id index: if a query on a sharded collection only specifies a condition on the _id field and returns only the _id field, the _id index can cover the query when run against a mongos even if the _id field is not the shard key.

In previous versions, an index cannot cover a query on a sharded collection when run against a mongos.

Sharding Existing Collection Data Size

An existing collection can only be sharded if its size does not exceed specific limits. These limits can be estimated based on the average size of all shard key values and the configured chunk size.

Important

These limits only apply for the initial sharding operation. Sharded collections can grow to any size after successfully enabling sharding.

Use the following formulas to calculate the theoretical maximum collection size.

maxSplits = 16777216 (bytes) / <average size of shard key values in bytes>
maxCollectionSize (MB) = maxSplits * (chunkSize / 2)
Note

The maximum BSON document size is 16 MB or 16777216 bytes.

All conversions should use base-2 scale, e.g. 1024 kilobytes = 1 megabyte.

If maxCollectionSize is less than or nearly equal to the target collection, increase the chunk size to ensure successful initial sharding. If there is doubt as to whether the result of the calculation is too 'close' to the target collection size, it is likely better to increase the chunk size.

After successful initial sharding, you can reduce the chunk size as needed. If you later reduce the chunk size, it may take time for all chunks to split to the new size. See Modify Chunk Size in a Sharded Cluster for instructions on modifying chunk size.

This table illustrates the approximate maximum collection sizes using the formulas described above:

Average Size of Shard Key Values        | 512 bytes | 256 bytes | 128 bytes | 64 bytes
Maximum Number of Splits                | 32,768    | 65,536    | 131,072   | 262,144
Max Collection Size (64 MB Chunk Size)  | 1 TB      | 2 TB      | 4 TB      | 8 TB
Max Collection Size (128 MB Chunk Size) | 2 TB      | 4 TB      | 8 TB      | 16 TB
Max Collection Size (256 MB Chunk Size) | 4 TB      | 8 TB      | 16 TB     | 32 TB
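The table's entries can be reproduced with the formulas above. A worked example in plain JavaScript (the function name is illustrative), using base-2 conversions:

```javascript
// Worked example of:
//   maxSplits = 16777216 (bytes) / <average size of shard key values in bytes>
//   maxCollectionSize (MB) = maxSplits * (chunkSize / 2)
function maxCollectionSizeMB(avgShardKeyBytes, chunkSizeMB) {
  const maxSplits = 16777216 / avgShardKeyBytes;
  return maxSplits * (chunkSizeMB / 2);
}

const mb = maxCollectionSizeMB(512, 64); // 512-byte keys, 64 MB chunks
console.log(mb);                 // 1048576 (MB)
console.log(mb / (1024 * 1024)); // 1 (TB), matching the table
```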
Single Document Modification Operations in Sharded Collections

All update and remove() operations for a sharded collection that specify the justOne or multi: false option must include the shard key or the _id field in the query specification.

update and remove() operations specifying justOne or multi: false in a sharded collection which do not contain either the shard key or the _id field return an error.

Unique Indexes in Sharded Collections

MongoDB does not support unique indexes across shards, except when the unique index contains the full shard key as a prefix of the index. In these situations MongoDB will enforce uniqueness across the full key, not a single field.

Tip
See:

Unique Constraints on Arbitrary Fields for an alternate approach.

Maximum Number of Documents Per Chunk to Migrate

By default, MongoDB cannot move a chunk if the number of documents in the chunk is greater than 1.3 times the result of dividing the configured chunk size by the average document size. db.collection.stats() includes the avgObjSize field, which represents the average document size in the collection.
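A sketch of that threshold calculation with illustrative values (a 64 MB chunk size and a 2 KB average document; the function name is not a MongoDB API):

```javascript
// Sketch: estimate the maximum number of documents a chunk may contain
// and still be migratable, per the default 1.3 * (chunkSize / avgObjSize) rule.
function maxDocsPerMigratableChunk(chunkSizeBytes, avgObjSizeBytes) {
  return 1.3 * (chunkSizeBytes / avgObjSizeBytes);
}

const chunkSizeBytes = 64 * 1024 * 1024; // 64 MB default chunk size
const avgObjSizeBytes = 2 * 1024;        // e.g. db.collection.stats().avgObjSize

console.log(maxDocsPerMigratableChunk(chunkSizeBytes, avgObjSizeBytes)); // 42598.4
// A chunk holding 50000 such documents exceeds the threshold and cannot move:
console.log(50000 > maxDocsPerMigratableChunk(chunkSizeBytes, avgObjSizeBytes)); // true
```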

For chunks that are too large to migrate, starting in MongoDB 4.4:

  • A new balancer setting attemptToBalanceJumboChunks allows the balancer to migrate chunks too large to move as long as the chunks are not labeled jumbo. See Balance Chunks that Exceed Size Limit for details.
  • The moveChunk command can specify a new option forceJumbo to allow for the migration of chunks that are too large to move. The chunks may or may not be labeled jumbo.

Shard Key Limitations

Shard Key Size

Starting in version 4.4, MongoDB removes the limit on the shard key size.

For MongoDB 4.2 and earlier, a shard key cannot exceed 512 bytes.

Shard Key Index Type

A shard key index can be an ascending index on the shard key, a compound index that starts with the shard key and specifies ascending order for the shard key, or a hashed index.

A shard key index cannot be an index that specifies a multikey index, a text index, or a geospatial index on the shard key fields.

Shard Key Selection is Immutable in MongoDB 4.2 and Earlier

Your options for changing a shard key depend on the version of MongoDB that you are running:

  • Starting in MongoDB 5.0, you can reshard a collection by changing a document's shard key.
  • Starting in MongoDB 4.4, you can refine a shard key by adding a suffix field or fields to the existing shard key.
  • In MongoDB 4.2 and earlier, the choice of shard key cannot be changed after sharding.

In MongoDB 4.2 and earlier, to change a shard key:

  • Dump all data from MongoDB into an external format.
  • Drop the original sharded collection.
  • Configure sharding using the new shard key.
  • Pre-split the shard key range to ensure initial even distribution.
  • Restore the dumped data into MongoDB.
Monotonically Increasing Shard Keys Can Limit Insert Throughput

For clusters with high insert volumes, a shard key with monotonically increasing and decreasing keys can affect insert throughput. If your shard key is the _id field, be aware that the default values of the _id fields are ObjectIds which have generally increasing values.

When inserting documents with monotonically increasing shard keys, all inserts belong to the same chunk on a single shard. The system eventually divides the chunk range that receives all write operations and migrates its contents to distribute data more evenly. However, at any moment the cluster directs insert operations only to a single shard, which creates an insert throughput bottleneck.

If the operations on the cluster are predominately read operations and updates, this limitation may not affect the cluster.

To avoid this constraint, use a hashed shard key or select a field that does not increase or decrease monotonically.

Hashed shard keys and hashed indexes store hashes of keys with ascending values.

Operations

Sort Operations

If MongoDB cannot use an index or indexes to obtain the sort order, MongoDB must perform a blocking sort operation on the data. The name refers to the requirement that the SORT stage reads all input documents before returning any output documents, blocking the flow of data for that specific query.

If MongoDB requires using more than 100 megabytes of system memory for the blocking sort operation, MongoDB returns an error unless the query specifies cursor.allowDiskUse() (New in MongoDB 4.4). allowDiskUse() allows MongoDB to use temporary files on disk to store data exceeding the 100 megabyte system memory limit while processing a blocking sort operation.

Changed in version 4.4.

For MongoDB 4.2 and prior, blocking sort operations could not exceed 32 megabytes of system memory.

For more information on sorts and index use, see Sort and Index Use.

Aggregation Pipeline Operation

Each individual pipeline stage has a limit of 100 megabytes of RAM. By default, if a stage exceeds this limit, MongoDB produces an error. For some pipeline stages you can allow pipeline processing to take up more space by using the allowDiskUse option to enable aggregation pipeline stages to write data to temporary files.

The $search aggregation stage is not restricted to 100 megabytes of RAM because it runs in a separate process.

Examples of stages that can spill to disk when allowDiskUse is true are:

Note

Pipeline stages operate on streams of documents, with each pipeline stage taking in documents, processing them, and then outputting the resulting documents.

Some stages can't output any documents until they have processed all incoming documents. These pipeline stages must keep their stage output in RAM until all incoming documents are processed. As a result, these pipeline stages may require more space than the 100 MB limit.

If the results of one of your $sort pipeline stages exceed the limit, consider adding a $limit stage.

Starting in MongoDB 4.2, the profiler log messages and diagnostic log messages include a usedDisk indicator if any aggregation stage wrote data to temporary files due to memory restrictions.

Aggregation and Read Concern
2d Geospatial queries cannot use the $or operator
Tip
Geospatial Queries

For spherical queries, use the 2dsphere index result.

The use of a 2d index for spherical queries may lead to incorrect results, such as the use of a 2d index for spherical queries that wrap around the poles.

Geospatial Coordinates
  • Valid longitude values are between -180 and 180, both inclusive.
  • Valid latitude values are between -90 and 90, both inclusive.
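These ranges can be enforced client-side before storing a point in a 2dsphere-indexed field. A minimal sketch, assuming the GeoJSON [longitude, latitude] ordering (the function name is illustrative):

```javascript
// Sketch: validate a GeoJSON-style [longitude, latitude] pair
// against the inclusive ranges above.
function isValidCoordinatePair([lon, lat]) {
  return lon >= -180 && lon <= 180 && lat >= -90 && lat <= 90;
}

console.log(isValidCoordinatePair([-73.97, 40.77])); // true
console.log(isValidCoordinatePair([181, 0]));        // false (longitude out of range)
console.log(isValidCoordinatePair([0, 91]));         // false (latitude out of range)
```

Note that GeoJSON puts longitude first; swapping the pair is a common source of silently wrong (but range-valid) coordinates that this check cannot catch.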
Area of GeoJSON PolygonsGeoJSON多边形的面积

For $geoIntersects or $geoWithin, if you specify a single-ringed polygon that has an area greater than a single hemisphere, include the custom MongoDB coordinate reference system in the $geometry expression; otherwise, $geoIntersects or $geoWithin queries for the complementary geometry. For all other GeoJSON polygons with areas greater than a hemisphere, $geoIntersects or $geoWithin queries for the complementary geometry.
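As a sketch, the custom coordinate reference system is specified by name inside the $geometry expression; the places collection and loc field here are hypothetical:

```javascript
// A single-ringed polygon larger than a hemisphere; the custom
// strict-winding CRS tells MongoDB to use the ring's winding order
// rather than querying for the complementary geometry.
db.places.find( {
  loc: {
    $geoWithin: {
      $geometry: {
        type: "Polygon",
        coordinates: [ [ [ -100, 60 ], [ -100, 0 ], [ -100, -60 ],
                         [ 100, -60 ], [ 100, 0 ], [ 100, 60 ],
                         [ -100, 60 ] ] ],
        crs: {
          type: "name",
          properties: { name: "urn:x-mongodb:crs:strictwinding:EPSG:4326" }
        }
      }
    }
  }
} )
```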

Multi-document Transactions

For multi-document transactions:

  • You can specify read/write (CRUD) operations on existing collections. For a list of CRUD operations, see CRUD Operations.
  • Starting in MongoDB 4.4, you can create collections and indexes in transactions. For details, see Create Collections and Indexes In a Transaction.
  • The collections used in a transaction can be in different databases.

    Note

    You cannot create new collections in cross-shard write transactions. For example, if you write to an existing collection in one shard and implicitly create a collection in a different shard, MongoDB cannot perform both operations in the same transaction.

  • You cannot write to capped collections. (Starting in MongoDB 4.2)
  • You cannot use read concern "snapshot" when reading from a capped collection. (Starting in MongoDB 5.0)
  • You cannot read/write to collections in the config, admin, or local databases.
  • You cannot write to system.* collections.
  • You cannot return the supported operation's query plan (i.e. explain).
  • For cursors created outside of a transaction, you cannot call getMore inside the transaction.
  • For cursors created in a transaction, you cannot call getMore outside the transaction.

Changed in version 4.4.

The following operations are not allowed in transactions:

Transactions have a lifetime limit as specified by transactionLifetimeLimitSeconds. The default is 60 seconds.
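A minimal mongosh sketch of a multi-document transaction, assuming a hypothetical bank database with accounts documents; the transaction must commit within transactionLifetimeLimitSeconds:

```javascript
const session = db.getMongo().startSession()
const accounts = session.getDatabase("bank").getCollection("accounts")
session.startTransaction( {
  readConcern: { level: "snapshot" },
  writeConcern: { w: "majority" }
} )
try {
  accounts.updateOne( { _id: "A" }, { $inc: { balance: -100 } } )
  accounts.updateOne( { _id: "B" }, { $inc: { balance: 100 } } )
  session.commitTransaction()
} catch (error) {
  session.abortTransaction()   // undo both writes on any failure
  throw error
} finally {
  session.endSession()
}
```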

Write Command Batch Limit Size

100,000 writes are allowed in a single batch operation, defined by a single request to the server.

Changed in version 3.6.

The limit is raised from 1,000 to 100,000 writes. This limit also applies to legacy OP_INSERT messages.

The Bulk() operations in mongosh and comparable methods in the drivers do not have this limit.
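For example, mongosh and the drivers transparently split a large insertMany() into multiple write commands, each within the per-request limit; this sketch uses a hypothetical numbers collection:

```javascript
const docs = []
for (let i = 0; i < 250000; i++) {
  docs.push( { _id: i, value: i * 2 } )
}
// Sent to the server as multiple batches, each containing at most
// 100,000 insert operations; no single request exceeds the limit.
db.numbers.insertMany(docs)
```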

Views

The view definition pipeline cannot include the $out or the $merge stage. If the view definition includes a nested pipeline (e.g. the view definition includes a $lookup or $facet stage), this restriction applies to the nested pipelines as well.
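For example, attempting to define a view whose pipeline ends in $out fails; in this sketch the collection and view names are hypothetical:

```javascript
// The server rejects this view definition because of the $out stage.
db.createView(
  "monthlyTotals",
  "orders",
  [
    { $group: { _id: "$month", total: { $sum: "$amount" } } },
    { $out: "totals" }   // not allowed in a view definition pipeline
  ]
)
```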

Views have the following operation restrictions:

Projection Restrictions

New in version 4.4.

$-Prefixed Field Path Restriction
Starting in MongoDB 4.4, the find() and findAndModify() projection cannot project a field that starts with $, with the exception of the DBRef fields. For example, starting in MongoDB 4.4, the following operation is invalid:
db.inventory.find( {}, { "$instock.warehouse": 0, "$item": 0, "detail.$price": 1 } ) // Invalid starting in 4.4
In earlier versions, MongoDB ignores the $-prefixed field projections.
$ Positional Operator Placement Restriction
Starting in MongoDB 4.4, the $ projection operator can only appear at the end of the field path; e.g. "field.$" or "fieldA.fieldB.$". For example, starting in MongoDB 4.4, the following operation is invalid:
db.inventory.find( { }, { "instock.$.qty": 1 } ) // Invalid starting in 4.4
To resolve, remove the component of the field path that follows the $ projection operator. In previous versions, MongoDB ignores the part of the path that follows the $; i.e. the projection is treated as "instock.$".
Empty Field Name Projection Restriction
Starting in MongoDB 4.4, find() and findAndModify() projection cannot include a projection of an empty field name. For example, starting in MongoDB 4.4, the following operation is invalid:
db.inventory.find( { }, { "": 0 } ) // Invalid starting in 4.4
In previous versions, MongoDB treats the inclusion/exclusion of the empty field as it would the projection of non-existing fields.
Path Collision: Embedded Documents and Their Fields
Starting in MongoDB 4.4, it is illegal to project an embedded document together with any of the embedded document's fields. For example, consider a collection inventory with documents that contain a size field:
{ ..., size: { h: 10, w: 15.25, uom: "cm" }, ... }
Starting in MongoDB 4.4, the following operation fails with a Path collision error because it attempts to project both the size document and the size.uom field:
db.inventory.find( {}, { size: 1, "size.uom": 1 } )  // Invalid starting in 4.4
In previous versions, the lattermost projection between the embedded document and its fields determines the projection:
  • If the projection of the embedded document comes after any and all projections of its fields, MongoDB projects the embedded document. For example, the projection document { "size.uom": 1, size: 1 } produces the same result as the projection document { size: 1 }.
  • If the projection of the embedded document comes before the projection of any of its fields, MongoDB projects the specified field or fields. For example, the projection document { "size.uom": 1, size: 1, "size.h": 1 } produces the same result as the projection document { "size.uom": 1, "size.h": 1 }.
Path Collision: $slice of an Array and Embedded Fields
Starting in MongoDB 4.4, find() and findAndModify() projection cannot contain both a $slice of an array and a field embedded in the array. For example, consider a collection inventory that contains an array field instock:
{ ..., instock: [ { warehouse: "A", qty: 35 }, { warehouse: "B", qty: 15 }, { warehouse: "C", qty: 35 } ], ... }
Starting in MongoDB 4.4, the following operation fails with a Path collision error:
db.inventory.find( {}, { "instock": { $slice: 1 }, "instock.warehouse": 0 } ) // Invalid starting in 4.4
In previous versions, the projection applies both projections and returns the first element ($slice: 1) in the instock array, but suppresses the warehouse field in the projected element. Starting in MongoDB 4.4, to achieve the same result, use the db.collection.aggregate() method with two separate $project stages.
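A sketch of the two-stage equivalent for the inventory example above:

```javascript
// The first $project applies the $slice aggregation expression; the
// second suppresses the embedded warehouse field in the sliced array.
db.inventory.aggregate( [
  { $project: { instock: { $slice: [ "$instock", 1 ] } } },
  { $project: { "instock.warehouse": 0 } }
] )
```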
$ Positional Operator and $slice Restriction
Starting in MongoDB 4.4, find() and findAndModify() projection cannot include a $slice projection expression as part of a $ projection expression. For example, starting in MongoDB 4.4, the following operation is invalid:
db.inventory.find( { "instock.qty": { $gt: 25 } }, { "instock.$": { $slice: 1 } } ) // Invalid starting in 4.4
In previous versions, MongoDB returns the first element (instock.$) in the instock array that matches the query condition; i.e. the positional projection "instock.$" takes precedence and the $slice: 1 is a no-op. The "instock.$": { $slice: 1 } does not exclude any other document field.

Sessions

Sessions and $external Username Limit

To use Client Sessions and Causal Consistency Guarantees with $external authentication users (Kerberos, LDAP, or x.509 users), usernames cannot be greater than 10k bytes.

Session Idle Timeout

Sessions that receive no read or write operations for 30 minutes, or that are not refreshed using refreshSessions within this threshold, are marked as expired and can be closed by the MongoDB server at any time. Closing a session kills any in-progress operations and open cursors associated with the session. This includes cursors configured with noCursorTimeout() or with a maxTimeMS() greater than 30 minutes.

Consider an application that issues a db.collection.find(). The server returns a cursor along with a batch of documents defined by the cursor.batchSize() of the find(). The session refreshes each time the application requests a new batch of documents from the server. However, if the application takes longer than 30 minutes to process the current batch of documents, the session is marked as expired and closed. When the application requests the next batch of documents, the server returns an error because the cursor was killed when the session was closed.

For operations that return a cursor, if the cursor may be idle for longer than 30 minutes, issue the operation within an explicit session using Mongo.startSession() and periodically refresh the session using the refreshSessions command. For example:

var session = db.getMongo().startSession()
var sessionId = session.getSessionId().id
var cursor = session.getDatabase("examples").getCollection("data").find().noCursorTimeout()
var refreshTimestamp = new Date() // take note of time at operation start
while (cursor.hasNext()) {
  // Check if more than 5 minutes have passed since the last refresh
  if ( (new Date()-refreshTimestamp)/1000 > 300 ) {
    print("refreshing session")
    db.adminCommand({"refreshSessions" : [sessionId]})
    refreshTimestamp = new Date()
  }
  // process cursor normally
}

In the example operation, the db.collection.find() method is associated with an explicit session. The cursor is configured with noCursorTimeout() to prevent the server from closing the cursor if idle. The while loop includes a block that uses refreshSessions to refresh the session every 5 minutes. Since the session will never exceed the 30 minute idle timeout, the cursor can remain open indefinitely.

For MongoDB drivers, defer to the driver documentation for instructions and syntax for creating sessions.
