
MongoDB Limits and Thresholds

This document provides a collection of hard and soft limitations of the MongoDB system.

BSON Documents

BSON Document Size

The maximum BSON document size is 16 megabytes.

The maximum document size helps ensure that a single document cannot use an excessive amount of RAM or, during transmission, an excessive amount of bandwidth. To store documents larger than the maximum size, MongoDB provides the GridFS API. See mongofiles and the documentation for your driver for more information about GridFS.

Nested Depth for BSON Documents

MongoDB supports no more than 100 levels of nesting for BSON documents. Each object or array adds a level.
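As an illustration, the nesting level of a document can be counted with a small helper. This is a hypothetical sketch (not part of any driver or of mongosh), written in plain JavaScript:

```javascript
// Hypothetical helper: counts BSON-style nesting levels, where each
// embedded object or array adds one level (the document itself is level 1).
function nestingDepth(value) {
  if (value === null || typeof value !== "object") return 0;
  let max = 0;
  for (const key of Object.keys(value)) {
    max = Math.max(max, nestingDepth(value[key]));
  }
  return 1 + max; // this object or array is itself one level
}

const MAX_BSON_DEPTH = 100; // server limit described above

const doc = { a: { b: { c: [1, 2, 3] } } };
console.log(nestingDepth(doc));                   // 4: doc, a, b, and the array
console.log(nestingDepth(doc) <= MAX_BSON_DEPTH); // true
```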

Naming Restrictions

Use of Case in Database Names

Do not rely on case to distinguish between databases. For example, you cannot use two databases with names like salesData and SalesData.

After you create a database in MongoDB, you must use consistent capitalization when you refer to it. For example, if you create the salesData database, do not refer to it using alternate capitalization such as salesdata or SalesData.

Restrictions on Database Names for Windows

For MongoDB deployments running on Windows, database names cannot contain any of the following characters:

/\. "$*<>:|?

Also, database names cannot contain the null character.

Restrictions on Database Names for Unix and Linux Systems

For MongoDB deployments running on Unix and Linux systems, database names cannot contain any of the following characters:

/\. "$

Also, database names cannot contain the null character.

Length of Database Names

Database names cannot be empty and must have fewer than 64 characters.
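The name rules above can be sketched as a validator. This is a hypothetical helper, not an official MongoDB API; `isWindows` selects the stricter Windows character set:

```javascript
// Sketch of the database-name rules: non-empty, fewer than 64 characters,
// no null character, and none of the platform-specific invalid characters.
function isValidDatabaseName(name, isWindows) {
  if (name.length === 0 || name.length >= 64) return false;
  if (name.includes("\0")) return false; // no null character
  const invalid = isWindows ? /[\/\\. "$*<>:|?]/ : /[\/\\. "$]/;
  return !invalid.test(name);
}

console.log(isValidDatabaseName("salesData", false));  // true
console.log(isValidDatabaseName("sales$Data", false)); // false: $ is invalid everywhere
console.log(isValidDatabaseName("sales:Data", true));  // false: ":" is invalid on Windows
```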

Restriction on Collection Names

Collection names should begin with an underscore or a letter character, and cannot:

  • contain the $.

  • be an empty string (e.g. "").

  • contain the null character.

  • begin with the system. prefix. (Reserved for internal use.)

If your collection name includes special characters, such as the underscore character, or begins with numbers, then to access the collection use the db.getCollection() method in mongosh or a similar method for your driver.
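A sketch of the collection-name rules listed above (a hypothetical helper, not part of any driver; names that start with a digit are flagged here because of the "should begin with an underscore or a letter" guidance, even though such collections remain accessible via db.getCollection()):

```javascript
// Hypothetical validator for the collection-name rules above.
function isValidCollectionName(name) {
  if (name.length === 0) return false;          // no empty string
  if (name.includes("\0")) return false;        // no null character
  if (name.includes("$")) return false;         // no $
  if (name.startsWith("system.")) return false; // reserved prefix
  return /^[A-Za-z_]/.test(name);               // begin with underscore or letter
}

console.log(isValidCollectionName("orders"));       // true
console.log(isValidCollectionName("system.views")); // false: reserved prefix
console.log(isValidCollectionName("1stQuarter"));   // false: starts with a digit
```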

Namespace Length:

  • For featureCompatibilityVersion set to "4.4" or greater, MongoDB raises the limit for unsharded collections and views to 255 bytes, and to 235 bytes for sharded collections. For a collection or a view, the namespace includes the database name, the dot (.) separator, and the collection/view name (e.g. <database>.<collection>).

  • For featureCompatibilityVersion set to "4.2" or earlier, the maximum length of an unsharded collection or view namespace remains 120 bytes, and 100 bytes for a sharded collection.
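Because the limits are measured in bytes, multi-byte UTF-8 characters count more than once. A minimal sketch of the fCV 4.4+ check (hypothetical helpers, not a MongoDB API):

```javascript
// Byte length of a <database>.<collection> namespace.
function namespaceBytes(database, collection) {
  return Buffer.byteLength(`${database}.${collection}`, "utf8");
}

// fCV 4.4+ limits quoted above: 255 bytes unsharded, 235 bytes sharded.
function withinNamespaceLimit(database, collection, { sharded = false } = {}) {
  const limit = sharded ? 235 : 255;
  return namespaceBytes(database, collection) <= limit;
}

console.log(namespaceBytes("sales", "orders"));              // 12
console.log(withinNamespaceLimit("sales", "orders"));        // true
console.log(withinNamespaceLimit("sales", "x".repeat(250))); // false: 256 bytes
```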

Restrictions on Field Names
  • Field names cannot contain the null character.

  • The server permits storage of field names that contain dots (.) and dollar signs ($).

  • MongoDB 5.0 adds improved support for the use of ($) and (.) in field names. There are some restrictions. See Field Name Considerations for more details.

Restrictions on _id

The field name _id is reserved for use as a primary key; its value must be unique in the collection, is immutable, and may be of any type other than an array. If the _id contains subfields, the subfield names cannot begin with a ($) symbol.

Naming Warnings

Warning

Use caution: the issues discussed in this section could lead to data loss or corruption.

MongoDB does not support duplicate field names

The MongoDB Query Language is undefined over documents with duplicate field names. BSON builders may support creating a BSON document with duplicate field names. While the BSON builder may not throw an error, inserting these documents into MongoDB is not supported even if the insert succeeds. For example, inserting a BSON document with duplicate field names through a MongoDB driver may result in the driver silently dropping the duplicate values prior to insertion.

Import and Export Concerns With Dollar Signs ($) and Periods (.)

Starting in MongoDB 5.0, document field names can be dollar ($) prefixed and can contain periods (.). However, mongoimport and mongoexport may not work as expected in some situations with field names that make use of these characters.

MongoDB Extended JSON v2 cannot differentiate between type wrappers and fields that happen to have the same name as type wrappers. Do not use Extended JSON formats in contexts where the corresponding BSON representations might include dollar ($) prefixed keys. The DBRef mechanism is an exception to this general rule.

There are also restrictions on using mongoimport and mongoexport with periods (.) in field names. Since CSV files use the period (.) to represent data hierarchies, a period (.) in a field name will be misinterpreted as a level of nesting.

Possible Data Loss With Dollar Signs ($) and Periods (.)

There is a small chance of data loss when using dollar ($) prefixed field names or field names that contain periods (.) if these field names are used in conjunction with unacknowledged writes (write concern w=0) on servers that are older than MongoDB 5.0.

When running insert, update, and findAndModify commands, drivers that are 5.0 compatible remove restrictions on using documents with field names that are dollar ($) prefixed or that contain periods (.). These field names generated a client-side error in earlier driver versions.

The restrictions are removed regardless of the server version the driver is connected to. If a 5.0 driver sends a document to an older server, the document will be rejected without sending an error.

Namespaces

Namespace Length
  • For featureCompatibilityVersion set to "4.4" or greater, MongoDB raises the limit for unsharded collections and views to 255 bytes, and to 235 bytes for sharded collections. For a collection or a view, the namespace includes the database name, the dot (.) separator, and the collection/view name (e.g. <database>.<collection>).

  • For featureCompatibilityVersion set to "4.2" or earlier, the maximum length of an unsharded collection or view namespace remains 120 bytes, and 100 bytes for a sharded collection.

Tip

See also:

Naming Restrictions

Indexes

Index Key Limit
Note

Changed in version 4.2

Starting in version 4.2, MongoDB removes the Index Key Limit for featureCompatibilityVersion (fCV) set to "4.2" or greater.

For MongoDB 2.6 through MongoDB versions with fCV set to "4.0" or earlier, the total size of an index entry, which can include structural overhead depending on the BSON type, must be less than 1024 bytes.

When the Index Key Limit applies:

  • MongoDB will not create an index on a collection if the index entry for an existing document exceeds the index key limit.

  • Reindexing operations will error if the index entry for an indexed field exceeds the index key limit. Reindexing operations occur as part of the compact command as well as the db.collection.reIndex() method.

    Because these operations drop all the indexes from a collection and then recreate them sequentially, the error from the index key limit prevents these operations from rebuilding any remaining indexes for the collection.

  • MongoDB will not insert into an indexed collection any document with an indexed field whose corresponding index entry would exceed the index key limit, and instead, will return an error. Previous versions of MongoDB would insert but not index such documents.

  • Updates to the indexed field will error if the updated value causes the index entry to exceed the index key limit.

    If an existing document contains an indexed field whose index entry exceeds the limit, any update that results in the relocation of that document on disk will error.

  • mongorestore and mongoimport will not insert documents that contain an indexed field whose corresponding index entry would exceed the index key limit.

  • In MongoDB 2.6, secondary members of replica sets will continue to replicate documents with an indexed field whose corresponding index entry exceeds the index key limit on initial sync but will print warnings in the logs.

    Secondary members also allow index build and rebuild operations on a collection that contains an indexed field whose corresponding index entry exceeds the index key limit, but with warnings in the logs.

    With mixed version replica sets where the secondaries are version 2.6 and the primary is version 2.4, secondaries will replicate documents inserted or updated on the 2.4 primary, but will print error messages in the log if the documents contain an indexed field whose corresponding index entry exceeds the index key limit.

  • For existing sharded collections, chunk migration will fail if the chunk has a document that contains an indexed field whose index entry exceeds the index key limit.

Number of Indexes per Collection

A single collection can have no more than 64 indexes.

Index Name Length
Note

Changed in version 4.2

Starting in version 4.2, MongoDB removes the Index Name Length limit for MongoDB versions with featureCompatibilityVersion (fCV) set to "4.2" or greater.

In previous versions of MongoDB or MongoDB versions with fCV set to "4.0" or earlier, fully qualified index names, which include the namespace and the dot separators (i.e. <database name>.<collection name>.$<index name>), cannot be longer than 127 bytes.

By default, <index name> is the concatenation of the field names and index type. You can explicitly specify the <index name> to the createIndex() method to ensure that the fully qualified index name does not exceed the limit.
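The pre-4.2 check can be sketched as follows. The default-name convention (field name, underscore, direction) matches the behavior described above; the helpers themselves are hypothetical:

```javascript
// Default index name for a key pattern, e.g. { item: 1, qty: -1 } -> "item_1_qty_-1".
function defaultIndexName(keys) {
  return Object.entries(keys).map(([field, dir]) => `${field}_${dir}`).join("_");
}

// Byte length of <database>.<collection>.$<index name>, limited to 127 bytes
// for fCV "4.0" or earlier.
function fullyQualifiedLength(db, coll, indexName) {
  return Buffer.byteLength(`${db}.${coll}.$${indexName}`, "utf8");
}

const name = defaultIndexName({ item: 1, qty: -1 });
console.log(name);                                            // "item_1_qty_-1"
console.log(fullyQualifiedLength("sales", "orders", name));   // 27
console.log(fullyQualifiedLength("sales", "orders", name) <= 127); // true
```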

Number of Indexed Fields in a Compound Index

There can be no more than 32 fields in a compound index.

Queries cannot use both text and Geospatial Indexes

You cannot combine the $text query, which requires a special text index, with a query operator that requires a different type of special index. For example, you cannot combine the $text query with the $near operator.

Fields with 2dsphere Indexes can only hold Geometries

Fields with 2dsphere indexes must hold geometry data in the form of coordinate pairs or GeoJSON data. If you attempt to insert a document with non-geometry data in a 2dsphere indexed field, or build a 2dsphere index on a collection where the indexed field has non-geometry data, the operation will fail.

Tip

See also:

The unique indexes limit in Sharding Operational Restrictions.

Limited Number of 2dsphere index keys

To generate keys for a 2dsphere index, mongod maps GeoJSON shapes to an internal representation. The resulting internal representation may be a large array of values.

When mongod generates index keys on a field that holds an array, mongod generates an index key for each array element. For compound indexes, mongod calculates the cartesian product of the sets of keys that are generated for each field. If both sets are large, then calculating the cartesian product could cause the operation to exceed memory limits.

indexMaxNumGeneratedKeysPerDocument limits the maximum number of keys generated for a single document to prevent out of memory errors. The default is 100000 index keys per document. It is possible to raise the limit, but if an operation requires more keys than the indexMaxNumGeneratedKeysPerDocument parameter specifies, the operation will fail.
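The cartesian-product behavior can be sketched numerically (a hypothetical helper; only the 100000-key default comes from the text above):

```javascript
// Server default for indexMaxNumGeneratedKeysPerDocument.
const INDEX_MAX_KEYS_PER_DOCUMENT = 100000;

// For a compound multikey index, the number of generated keys is the
// cartesian product of the per-field key counts.
function generatedKeyCount(perFieldKeyCounts) {
  return perFieldKeyCounts.reduce((product, n) => product * n, 1);
}

console.log(generatedKeyCount([400, 300])); // 120000
console.log(generatedKeyCount([400, 300]) > INDEX_MAX_KEYS_PER_DOCUMENT); // true: operation would fail
```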

NaN values returned from Covered Queries by the WiredTiger Storage Engine are always of type double

If the value of a field returned from a query that is covered by an index is NaN, the type of that NaN value is always double.

Multikey Index

Multikey indexes cannot cover queries over array field(s).

Geospatial Index

Geospatial indexes cannot cover a query.

Memory Usage in Index Builds

createIndexes supports building one or more indexes on a collection. createIndexes uses a combination of memory and temporary files on disk to complete index builds. The default limit on memory usage for createIndexes is 200 megabytes (for versions 4.2.3 and later) and 500 megabytes (for versions 4.2.2 and earlier), shared between all indexes built using a single createIndexes command. Once the memory limit is reached, createIndexes uses temporary disk files in a subdirectory named _tmp within the --dbpath directory to complete the build.

You can override the memory limit by setting the maxIndexBuildMemoryUsageMegabytes server parameter. Setting a higher memory limit may result in faster completion of index builds. However, setting this limit too high relative to the unused RAM on your system can result in memory exhaustion and server shutdown.

Index builds may be initiated either by a user command such as createIndexes or by an administrative process such as an initial sync. Both are subject to the limit set by maxIndexBuildMemoryUsageMegabytes.

An initial sync populates only one collection at a time and has no risk of exceeding the memory limit. However, it is possible for a user to start index builds on multiple collections in multiple databases simultaneously and potentially consume an amount of memory greater than the limit set by maxIndexBuildMemoryUsageMegabytes.

Tip

To minimize the impact of building an index on replica sets and sharded clusters with replica set shards, use a rolling index build procedure as described on Rolling Index Builds on Replica Sets.

Collation and Index Types

The following index types only support simple binary comparison and do not support collation:

Tip

To create a text, a 2d, or a geoHaystack index on a collection that has a non-simple collation, you must explicitly specify {collation: {locale: "simple"} } when creating the index.

Hidden Indexes

Sorts

Maximum Number of Sort Keys

You can sort on a maximum of 32 keys.

Data

Maximum Number of Documents in a Capped Collection

If you specify the maximum number of documents in a capped collection with create's max parameter, the value must be less than 2^31 documents.

If you do not specify a maximum number of documents when creating a capped collection, there is no limit on the number of documents.

Replica Sets

Number of Members of a Replica Set

Replica sets can have up to 50 members.

Number of Voting Members of a Replica Set

Replica sets can have up to 7 voting members. For replica sets with more than 7 total members, see Non-Voting Members.

Maximum Size of Auto-Created Oplog

If you do not explicitly specify an oplog size (i.e. with oplogSizeMB or --oplogSize) MongoDB will create an oplog that is no larger than 50 gigabytes. [1]

[1] The oplog can grow past its configured size limit to avoid deleting the majority commit point.

Sharded Clusters

Sharded clusters have the restrictions and thresholds described here.

Sharding Operational Restrictions

Operations Unavailable in Sharded Environments

$where does not permit references to the db object from the $where function. This is uncommon in un-sharded collections.

The geoSearch command is not supported in sharded environments.

In MongoDB 5.0 and earlier, you cannot specify sharded collections in the from parameter of $lookup stages.

Covered Queries in Sharded Clusters

When run on mongos, indexes can only cover queries on sharded collections if the index contains the shard key.

Sharding Existing Collection Data Size

An existing collection can only be sharded if its size does not exceed specific limits. These limits can be estimated based on the average size of all shard key values, and the configured chunk size.

Important

These limits only apply for the initial sharding operation. Sharded collections can grow to any size after successfully enabling sharding.

MongoDB distributes documents in the collection so that each chunk is half full at creation. Use the following formulas to calculate the theoretical maximum collection size.

maxSplits = 16777216 (bytes) / <average size of shard key values in bytes>
maxCollectionSize (MB) = maxSplits * (chunkSize / 2)
Note

The maximum BSON document size is 16MB or 16777216 bytes.

All conversions should use base-2 scale, e.g. 1024 kilobytes = 1 megabyte.

If maxCollectionSize is less than or nearly equal to the target collection, increase the chunk size to ensure successful initial sharding. If there is doubt as to whether the result of the calculation is too 'close' to the target collection size, it is likely better to increase the chunk size.

After successful initial sharding, you can reduce the chunk size as needed. If you later reduce the chunk size, it may take time for all chunks to split to the new size. See Modify Range Size in a Sharded Cluster for instructions on modifying chunk size.

This table illustrates the approximate maximum collection sizes using the formulas described above:

Average Size of Shard Key Values        | 512 bytes | 256 bytes | 128 bytes | 64 bytes
Maximum Number of Splits                | 32,768    | 65,536    | 131,072   | 262,144
Max Collection Size (64 MB Chunk Size)  | 1 TB      | 2 TB      | 4 TB      | 8 TB
Max Collection Size (128 MB Chunk Size) | 2 TB      | 4 TB      | 8 TB      | 16 TB
Max Collection Size (256 MB Chunk Size) | 4 TB      | 8 TB      | 16 TB     | 32 TB
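The formulas above can be reproduced with a short script (plain JavaScript arithmetic, not a MongoDB API), which matches the table: for example, 512-byte shard key values with a 64 MB chunk size give a 1 TB maximum initial collection size.

```javascript
// Maximum BSON document size in bytes.
const MAX_BSON_SIZE = 16777216;

// maxSplits = 16777216 / <average shard key value size in bytes>
// maxCollectionSize (MB) = maxSplits * (chunkSize / 2), since each chunk
// is half full at creation.
function maxCollectionSizeMB(avgShardKeyValueBytes, chunkSizeMB) {
  const maxSplits = MAX_BSON_SIZE / avgShardKeyValueBytes;
  return maxSplits * (chunkSizeMB / 2);
}

console.log(maxCollectionSizeMB(512, 64)); // 1048576 MB = 1 TB (base-2 conversions)
console.log(maxCollectionSizeMB(64, 256)); // 33554432 MB = 32 TB
```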
Single Document Modification Operations in Sharded Collections

All update and remove() operations for a sharded collection that specify the justOne or multi: false option must include the shard key or the _id field in the query specification.

update and remove() operations specifying justOne or multi: false in a sharded collection which do not contain either the shard key or the _id field return an error.

Unique Indexes in Sharded Collections

MongoDB does not support unique indexes across shards, except when the unique index contains the full shard key as a prefix of the index. In these situations MongoDB will enforce uniqueness across the full key, not a single field.

Tip

See:

Unique Constraints on Arbitrary Fields for an alternate approach.

Maximum Number of Documents Per Range to Migrate

By default, MongoDB cannot move a range if the number of documents in the range is greater than 2 times the result of dividing the configured range size by the average document size. If MongoDB can move a sub-range of a chunk and reduce the size to less than that, the balancer does so by migrating a range. db.collection.stats() includes the avgObjSize field, which represents the average document size in the collection.
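The default migration threshold above can be sketched as follows (a hypothetical helper; the 128 MB figure used in the example is the assumed configured range size, not something computed by this rule):

```javascript
// A range can be migrated only if its document count does not exceed
// 2 * (rangeSizeBytes / avgObjSizeBytes).
function canMigrateRange(documentCount, rangeSizeBytes, avgObjSizeBytes) {
  return documentCount <= 2 * (rangeSizeBytes / avgObjSizeBytes);
}

const RANGE_SIZE = 128 * 1024 * 1024; // assumed configured range size: 128 MB

// With 512-byte average documents, the threshold is 2 * (134217728 / 512) = 524288.
console.log(canMigrateRange(200000, RANGE_SIZE, 512)); // true
console.log(canMigrateRange(600000, RANGE_SIZE, 512)); // false: too many documents
```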

For chunks that are too large to migrate:

  • The balancer setting attemptToBalanceJumboChunks allows the balancer to migrate chunks too large to move as long as the chunks are not labeled jumbo. See Balance Ranges that Exceed Size Limit for details.

    When issuing moveRange and moveChunk commands, it's possible to specify the forceJumbo option to allow for the migration of ranges that are too large to move. The ranges may or may not be labeled jumbo.

Shard Key Limitations

Shard Key Size

Starting in version 4.4, MongoDB removes the limit on the shard key size.

For MongoDB 4.2 and earlier, a shard key cannot exceed 512 bytes.

Shard Key Index Type

A shard key index can be an ascending index on the shard key, a compound index that starts with the shard key and specifies ascending order for the shard key, or a hashed index.

A shard key index cannot be a multikey index, a text index, or a geospatial index on the shard key fields.

Shard Key Selection is Immutable in MongoDB 4.2 and Earlier

Your options for changing a shard key depend on the version of MongoDB that you are running:

  • Starting in MongoDB 5.0, you can reshard a collection by changing a document's shard key.

  • Starting in MongoDB 4.4, you can refine a shard key by adding a suffix field or fields to the existing shard key.

  • In MongoDB 4.2 and earlier, the choice of shard key cannot be changed after sharding.

In MongoDB 4.2 and earlier, to change a shard key:

  • Dump all data from MongoDB into an external format.

  • Drop the original sharded collection.

  • Configure sharding using the new shard key.

  • Pre-split the shard key range to ensure initial even distribution.

  • Restore the dumped data into MongoDB.

Monotonically Increasing Shard Keys Can Limit Insert Throughput

For clusters with high insert volumes, a shard key with monotonically increasing and decreasing keys can affect insert throughput. If your shard key is the _id field, be aware that the default values of the _id fields are ObjectIds which have generally increasing values.

When inserting documents with monotonically increasing shard keys, all inserts belong to the same chunk on a single shard. The system eventually divides the chunk range that receives all write operations and migrates its contents to distribute data more evenly. However, at any moment the cluster directs insert operations only to a single shard, which creates an insert throughput bottleneck.

If the operations on the cluster are predominately read operations and updates, this limitation may not affect the cluster.

To avoid this constraint, use a hashed shard key or select a field that does not increase or decrease monotonically.

Hashed shard keys and hashed indexes store hashes of keys with ascending values.

Operations

Sort Operations

If MongoDB cannot use an index or indexes to obtain the sort order, MongoDB must perform a blocking sort operation on the data. The name refers to the requirement that the SORT stage reads all input documents before returning any output documents, blocking the flow of data for that specific query.

If MongoDB requires using more than 100 megabytes of system memory for the blocking sort operation, MongoDB returns an error unless the query specifies cursor.allowDiskUse() (New in MongoDB 4.4). allowDiskUse() allows MongoDB to use temporary files on disk to store data exceeding the 100 megabyte system memory limit while processing a blocking sort operation.

Changed in version 4.4: For MongoDB 4.2 and prior, blocking sort operations could not exceed 32 megabytes of system memory.

For more information on sorts and index use, see Sort and Index Use.

Aggregation Pipeline Operation

Starting in MongoDB 6.0, the allowDiskUseByDefault parameter controls whether pipeline stages that require more than 100 megabytes of memory to execute write temporary files to disk by default.

  • If allowDiskUseByDefault is set to true, pipeline stages that require more than 100 megabytes of memory to execute write temporary files to disk by default. You can disable writing temporary files to disk for specific find or aggregate commands using the { allowDiskUse: false } option.

  • If allowDiskUseByDefault is set to false, pipeline stages that require more than 100 megabytes of memory to execute raise an error by default. You can enable writing temporary files to disk for specific find or aggregate using the { allowDiskUse: true } option.

The $search aggregation stage is not restricted to 100 megabytes of RAM because it runs in a separate process.

Examples of stages that can write temporary files to disk when allowDiskUse is true are:

Note

Pipeline stages operate on streams of documents, with each pipeline stage taking in documents, processing them, and then outputting the resulting documents.

Some stages can't output any documents until they have processed all incoming documents. These pipeline stages must keep their stage output in RAM until all incoming documents are processed. As a result, these pipeline stages may require more space than the 100 MB limit.

If the results of one of your $sort pipeline stages exceed the limit, consider adding a $limit stage.

Starting in MongoDB 4.2, the profiler log messages and diagnostic log messages include a usedDisk indicator if any aggregation stage wrote data to temporary files due to memory restrictions.

Aggregation and Read Concern
2d Geospatial queries cannot use the $or operator
Geospatial Queries

For spherical queries, use the 2dsphere index.

Using a 2d index for spherical queries may lead to incorrect results, for example with spherical queries that wrap around the poles.

Geospatial Coordinates
  • Valid longitude values are between -180 and 180, both inclusive.

  • Valid latitude values are between -90 and 90, both inclusive.

Area of GeoJSON Polygons

For $geoIntersects or $geoWithin, if you specify a single-ringed polygon that has an area greater than a single hemisphere, include the custom MongoDB coordinate reference system in the $geometry expression; otherwise, $geoIntersects or $geoWithin queries for the complementary geometry. For all other GeoJSON polygons with areas greater than a hemisphere, $geoIntersects or $geoWithin queries for the complementary geometry.

Multi-document Transactions

For multi-document transactions:

  • You can specify read/write (CRUD) operations on existing collections. For a list of CRUD operations, see CRUD Operations.

  • Starting in MongoDB 4.4, you can create collections and indexes in transactions. For details, see Create Collections and Indexes In a Transaction

  • The collections used in a transaction can be in different databases.

    Note

    You cannot create new collections in cross-shard write transactions. For example, if you write to an existing collection in one shard and implicitly create a collection in a different shard, MongoDB cannot perform both operations in the same transaction.

  • You cannot write to capped collections. (Starting in MongoDB 4.2)

  • You cannot use read concern "snapshot" when reading from a capped collection. (Starting in MongoDB 5.0)

  • You cannot read/write to collections in the config, admin, or local databases.

  • You cannot write to system.* collections.

  • You cannot return the supported operation's query plan (i.e. explain).

  • For cursors created outside of a transaction, you cannot call getMore inside the transaction.

  • For cursors created in a transaction, you cannot call getMore outside the transaction.

Changed in version 4.4.

The following operations are not allowed in transactions:

Transactions have a lifetime limit as specified by transactionLifetimeLimitSeconds. The default is 60 seconds.

Write Command Batch Limit Size

100,000 writes are allowed in a single batch operation, defined by a single request to the server.

Changed in version 3.6: This limit was raised from 1,000 to 100,000 writes. This limit also applies to legacy OP_INSERT messages.

The Bulk() operations in mongosh and comparable methods in the drivers do not have this limit.
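Drivers split oversized batches for you, but the batching rule can be sketched with a hypothetical helper:

```javascript
// Server limit on writes per batch request.
const MAX_WRITE_BATCH_SIZE = 100000;

// Split an array of write operations into server-acceptable batches.
function splitIntoBatches(ops, batchSize = MAX_WRITE_BATCH_SIZE) {
  const batches = [];
  for (let i = 0; i < ops.length; i += batchSize) {
    batches.push(ops.slice(i, i + batchSize));
  }
  return batches;
}

const batches = splitIntoBatches(new Array(250000).fill({ insert: 1 }));
console.log(batches.length);    // 3
console.log(batches[2].length); // 50000
```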

Views

A view definition pipeline cannot include the $out or the $merge stage. This restriction also applies to embedded pipelines, such as pipelines used in $lookup or $facet stages.

Views have the following operation restrictions:

Projection Restrictions

New in version 4.4:

$-Prefixed Field Path Restriction
Starting in MongoDB 4.4, the find() and findAndModify() projection cannot project a field that starts with $, with the exception of the DBRef fields. For example, starting in MongoDB 4.4, the following operation is invalid:
db.inventory.find( {}, { "$instock.warehouse": 0, "$item": 0, "detail.$price": 1 } ) // Invalid starting in 4.4
In earlier versions, MongoDB ignores the $-prefixed field projections.
$ Positional Operator Placement Restriction
Starting in MongoDB 4.4, the $ projection operator can only appear at the end of the field path, for example "field.$" or "fieldA.fieldB.$". For example, starting in MongoDB 4.4, the following operation is invalid:
db.inventory.find( { }, { "instock.$.qty": 1 } ) // Invalid starting in 4.4
To resolve, remove the component of the field path that follows the $ projection operator. In previous versions, MongoDB ignores the part of the path that follows the $; i.e. the projection is treated as "instock.$".
Empty Field Name Projection Restriction
Starting in MongoDB 4.4, find() and findAndModify() projection cannot include a projection of an empty field name. For example, starting in MongoDB 4.4, the following operation is invalid:
db.inventory.find( { }, { "": 0 } ) // Invalid starting in 4.4
In previous versions, MongoDB treats the inclusion/exclusion of the empty field as it would the projection of non-existing fields.
Path Collision: An Embedded Document and Its Fields
Starting in MongoDB 4.4, it is illegal to project an embedded document together with any of that embedded document's fields. For example, consider a collection inventory with documents that contain a size field:
{ ..., size: { h: 10, w: 15.25, uom: "cm" }, ... }
Starting in MongoDB 4.4, the following operation fails with a Path collision error because it attempts to project both the size document and the size.uom field:
db.inventory.find( {}, { size: 1, "size.uom": 1 } )  // Invalid starting in 4.4
In previous versions, the lattermost projection between the embedded document and its fields determines the projection:
  • If the projection of the embedded document comes after any and all projections of its fields, MongoDB projects the embedded document. For example, the projection document { "size.uom": 1, size: 1 } produces the same result as the projection document { size: 1 }.

  • If the projection of the embedded document comes before the projection of any of its fields, MongoDB projects the specified field or fields. For example, the projection document { "size.uom": 1, size: 1, "size.h": 1 } produces the same result as the projection document { "size.uom": 1, "size.h": 1 }.
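The two bullets above amount to a small decision rule, sketched below in plain JavaScript. The helper is hypothetical and models only a single embedded document at the first nesting level, using the insertion order of the projection keys:

```javascript
// Hypothetical sketch of the pre-4.4 "lattermost wins" rule for a projection
// that collides on one embedded document (`parent`). `keys` is the projection's
// field paths in document order.
function resolveCollision(keys, parent) {
  // index of the last projected field inside the embedded document
  const lastField = Math.max(...keys.map((k, i) => k.startsWith(parent + ".") ? i : -1));
  const docIdx = keys.indexOf(parent);
  if (docIdx > lastField) return [ parent ];            // document projected last: it wins
  return keys.filter(k => k.startsWith(parent + "."));  // a field comes later: the fields win
}

resolveCollision([ "size.uom", "size" ], "size");             // → [ "size" ]
resolveCollision([ "size.uom", "size", "size.h" ], "size");   // → [ "size.uom", "size.h" ]
```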

Path Collision: $slice of an Array and Embedded Fields
Starting in MongoDB 4.4, find() and findAndModify() projection cannot contain both a $slice of an array and a field embedded in the array. For example, consider a collection inventory that contains an array field instock:
{ ..., instock: [ { warehouse: "A", qty: 35 }, { warehouse: "B", qty: 15 }, { warehouse: "C", qty: 35 } ], ... }
Starting in MongoDB 4.4, the following operation fails with a Path collision error:
db.inventory.find( {}, { "instock": { $slice: 1 }, "instock.warehouse": 0 } ) // Invalid starting in 4.4
In previous versions, MongoDB applies both projections: it returns the first element ($slice: 1) of the instock array but suppresses the warehouse field in the projected element. Starting in MongoDB 4.4, to achieve the same result, use the db.collection.aggregate() method with two separate $project stages.
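The two-$project workaround can be sketched by simulating each stage in plain JavaScript on a hypothetical document (the field names and values are illustrative, taken from the instock example above):

```javascript
// Hypothetical document shaped like the instock example above.
const doc = { _id: 1, instock: [
  { warehouse: "A", qty: 35 },
  { warehouse: "B", qty: 15 },
  { warehouse: "C", qty: 35 }
] };

// Stage 1 equivalent: { $project: { instock: { $slice: [ "$instock", 1 ] } } }
// keeps only the first array element.
const afterSlice = { ...doc, instock: doc.instock.slice(0, 1) };

// Stage 2 equivalent: { $project: { "instock.warehouse": 0 } }
// suppresses the warehouse field in each remaining element.
const result = { ...afterSlice,
  instock: afterSlice.instock.map(({ warehouse, ...rest }) => rest) };

console.log(result); // → { _id: 1, instock: [ { qty: 35 } ] }
```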
$ Positional Operator and $slice Restriction
Starting in MongoDB 4.4, find() and findAndModify() projection cannot include a $slice projection expression as part of a $ projection expression. For example, starting in MongoDB 4.4, the following operation is invalid:
db.inventory.find( { "instock.qty": { $gt: 25 } }, { "instock.$": { $slice: 1 } } ) // Invalid starting in 4.4
In previous versions, MongoDB returns the first element (instock.$) in the instock array that matches the query condition; i.e., the positional projection "instock.$" takes precedence and the $slice: 1 is a no-op. The "instock.$": { $slice: 1 } expression does not exclude any other document field.

Sessions

Sessions and $external Username Limit

To use Client Sessions and Causal Consistency Guarantees with $external authentication users (Kerberos, LDAP, or x.509 users), usernames cannot be greater than 10k bytes.

Session Idle Timeout

Sessions that receive no read or write operations for 30 minutes or that are not refreshed using refreshSessions within this threshold are marked as expired and can be closed by the MongoDB server at any time. Closing a session kills any in-progress operations and open cursors associated with the session. This includes cursors configured with noCursorTimeout() or a maxTimeMS() greater than 30 minutes.

Consider an application that issues a db.collection.find(). The server returns a cursor along with a batch of documents defined by the cursor.batchSize() of the find(). The session refreshes each time the application requests a new batch of documents from the server. However, if the application takes longer than 30 minutes to process the current batch of documents, the session is marked as expired and closed. When the application requests the next batch of documents, the server returns an error as the cursor was killed when the session was closed.

For operations that return a cursor, if the cursor may be idle for longer than 30 minutes, issue the operation within an explicit session using Mongo.startSession() and periodically refresh the session using the refreshSessions command. For example:

var session = db.getMongo().startSession()
var sessionId = session.getSessionId().id
sessionId // show the sessionId

var cursor = session.getDatabase("examples").getCollection("data").find().noCursorTimeout()
var refreshTimestamp = new Date() // take note of time at operation start

while (cursor.hasNext()) {

  // Check if more than 5 minutes have passed since the last refresh
  if ( (new Date() - refreshTimestamp) / 1000 > 300 ) {
    print("refreshing session")
    db.adminCommand({"refreshSessions" : [sessionId]})
    refreshTimestamp = new Date()
  }

  // process cursor normally

}

In the example operation, the db.collection.find() method is associated with an explicit session. The cursor is configured with noCursorTimeout() to prevent the server from closing the cursor if idle. The while loop includes a block that uses refreshSessions to refresh the session every 5 minutes. Since the session will never exceed the 30 minute idle timeout, the cursor can remain open indefinitely.

For MongoDB drivers, defer to the driver documentation for instructions and syntax for creating sessions.