This document provides a collection of hard and soft limitations of the MongoDB system.
The maximum BSON document size is 16 megabytes.
The maximum document size helps ensure that a single document cannot use an excessive amount of RAM or, during transmission, an excessive amount of bandwidth. To store documents larger than the maximum size, MongoDB provides the GridFS API. See mongofiles and the documentation for your driver for more information about GridFS.
MongoDB supports no more than 100 levels of nesting for BSON documents.
Database names are case-sensitive in MongoDB. They also have an additional restriction: case cannot be the only difference between database names. If the database salesDB already exists, MongoDB will return an error if you attempt to create a database named salesdb.
mixedCase = db.getSiblingDB('salesDB')
lowerCase = db.getSiblingDB('salesdb')
mixedCase.retail.insertOne({ "widgets": 1, "price": 50 })
The operation succeeds and insertOne() implicitly creates the salesDB database.
lowerCase.retail.insertOne({ "widgets": 1, "price": 50 })
The operation fails. insertOne() tries to create a salesdb database and is blocked by the naming restriction. Database names must differ on more than just case.
lowerCase.retail.find()
This operation does not return any results because the database names are case sensitive. There is no error because find() doesn't implicitly create a new database.
For MongoDB deployments running on Windows, database names cannot contain any of the following characters:
/\. "$*<>:|?
Also, database names cannot contain the null character.
For MongoDB deployments running on Unix and Linux systems, database names cannot contain any of the following characters:
/\. "$
Also, database names cannot contain the null character.

Database names cannot be empty and must have fewer than 64 characters.
Collection names should begin with an underscore or a letter character, and cannot:
- contain the $.
- be an empty string (e.g. "").
- contain the null character.
- begin with the system. prefix. (Reserved for internal use.)

If your collection name includes special characters, such as the underscore character, or begins with numbers, then to access the collection use the db.getCollection() method in mongosh or a similar method for your driver.
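For example, a minimal mongosh sketch (the collection name 1-stats is a hypothetical illustration):

// Dot notation (db.1-stats) would not parse, so use db.getCollection()
const stats = db.getCollection("1-stats")
stats.findOne()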
Namespace Length:
- For featureCompatibilityVersion set to "4.4" or greater, MongoDB raises the limit for unsharded collections and views to 255 bytes, and to 235 bytes for sharded collections. The namespace includes the database name, the dot (.) separator, and the collection/view name (e.g. <database>.<collection>).
- For featureCompatibilityVersion set to "4.2" or earlier, the maximum length of the unsharded collections and views namespace remains 120 bytes, and 100 bytes for sharded collections.

Field names cannot contain the null character. The server permits storage of field names that contain dots (.) and dollar signs ($). MongoDB 5.0 adds improved support for the use of ($) and (.) in field names.

Restrictions on _id:

The field name _id is reserved for use as a primary key; its value must be unique in the collection, is immutable, and may be of any type other than an array. If the _id contains subfields, the subfield names cannot begin with a ($) symbol.
Use caution: the issues discussed in this section could lead to data loss or corruption.
The MongoDB Query Language is undefined over documents with duplicate field names. BSON builders may support creating a BSON document with duplicate field names. While the BSON builder may not throw an error, inserting these documents into MongoDB is not supported even if the insert succeeds. For example, inserting a BSON document with duplicate field names through a MongoDB driver may result in the driver silently dropping the duplicate values prior to insertion.
Import and Export Concerns With Dollar Signs ($) and Periods (.):

Starting in MongoDB 5.0, document field names can be dollar ($) prefixed and can contain periods (.). However, mongoimport and mongoexport may not work as expected in some situations with field names that make use of these characters.
MongoDB Extended JSON v2 cannot differentiate between type wrappers and fields that happen to have the same name as type wrappers. Do not use Extended JSON formats in contexts where the corresponding BSON representations might include dollar ($) prefixed keys. The DBRef mechanism is an exception to this general rule.
There are also restrictions on using mongoimport and mongoexport with periods (.) in field names. Since CSV files use the period (.) to represent data hierarchies, a period (.) in a field name will be misinterpreted as a level of nesting.
Possible Data Loss With Dollar Signs ($) and Periods (.):

There is a small chance of data loss when using dollar ($) prefixed field names or field names that contain periods (.) if these field names are used in conjunction with unacknowledged writes (write concern w=0) on servers that are older than MongoDB 5.0.
When running insert, update, and findAndModify commands, drivers that are 5.0 compatible remove restrictions on using documents with field names that are dollar ($) prefixed or that contain periods (.). These field names generated a client-side error in earlier driver versions.
The restrictions are removed regardless of the server version the driver is connected to. If a 5.0 driver sends a document to an older server, the document will be rejected without sending an error.
"4.4"
or greater, MongoDB raises the limit for unsharded collections and views to 255 bytes, and to 235 bytes for sharded collections. featureCompatibilityVersion
设置为"4.4"
或更高版本,MongoDB将非共享集合和视图的限制提高到255字节,将分片集合的限制提高至235字节。.
) separator, and the collection/view name (e.g. <database>.<collection>
),.)
分隔符和集合/视图名称(例如,<database>.<collection>
),"4.2"
or earlier, the maximum length of unsharded collections and views namespace remains 120 bytes and 100 bytes for sharded collection.featureCompatibilityVersion
设置为"4.2"
或更早版本,未共享集合和视图命名空间的最大长度保持为120字节,而分片集合的最大长度为100字节。Starting in version 4.2, MongoDB removes the Index Key Limit for featureCompatibilityVersion (fCV) set to 从版本4.2开始,MongoDB删除了"4.2"
or greater.featureCompatibilityVersion
(fCV)设置为"4.2"
或更高的索引键限制。
For MongoDB 2.6 through MongoDB versions with fCV set to "4.0" or earlier, the total size of an index entry, which can include structural overhead depending on the BSON type, must be less than 1024 bytes.
When the Index Key Limit applies:
- Reindexing operations will error if the index entry for an indexed field exceeds the index key limit. Reindexing operations occur as part of the compact command as well as the db.collection.reIndex() method.
- mongorestore and mongoimport will not insert documents that contain an indexed field whose corresponding index entry would exceed the index key limit.
A single collection can have no more than 64 indexes.
Starting in version 4.2, MongoDB removes the Index Name Length limit for MongoDB versions with featureCompatibilityVersion (fCV) set to "4.2" or greater.
In previous versions of MongoDB, or MongoDB versions with fCV set to "4.0" or earlier, fully qualified index names, which include the namespace and the dot separators (i.e. <database name>.<collection name>.$<index name>), cannot be longer than 127 bytes.
By default, <index name> is the concatenation of the field names and index type. You can explicitly specify the <index name> to the createIndex() method to ensure that the fully qualified index name does not exceed the limit.
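For example, a short mongosh sketch of naming an index explicitly (the collection and field names are hypothetical):

// Without the "name" option, the default name would be the longer
// concatenation "manufacturer_1_category_1_subcategory_1"
db.products.createIndex(
  { manufacturer: 1, category: 1, subcategory: 1 },
  { name: "mfg_cat_subcat" }
)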
There can be no more than 32 fields in a compound index.
You cannot combine the $text query, which requires a special text index, with a query operator that requires a different type of special index. For example, you cannot combine the $text query with the $near operator.
Fields with 2dsphere indexes must hold geometry data in the form of coordinate pairs or GeoJSON data. If you attempt to insert a document with non-geometry data in a 2dsphere indexed field, or build a 2dsphere index on a collection where the indexed field has non-geometry data, the operation will fail.
See also the unique indexes limit in Sharding Operational Restrictions.
To generate keys for a 2dsphere index, mongod maps GeoJSON shapes to an internal representation. The resulting internal representation may be a large array of values.
When mongod generates index keys on a field that holds an array, mongod generates an index key for each array element. For compound indexes, mongod calculates the cartesian product of the sets of keys that are generated for each field. If both sets are large, then calculating the cartesian product could cause the operation to exceed memory limits.
The indexMaxNumGeneratedKeysPerDocument parameter limits the maximum number of keys generated for a single document to prevent out of memory errors. The default is 100000 index keys per document. It is possible to raise the limit, but if an operation requires more keys than the indexMaxNumGeneratedKeysPerDocument parameter specifies, the operation will fail.
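For example, a sketch of raising the limit when starting mongod via --setParameter (the value 200000 is illustrative only):

mongod --setParameter indexMaxNumGeneratedKeysPerDocument=200000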
NaN values are always of type double:

If the value of a field returned from a query that is covered by an index is NaN, the type of that NaN value is always double.
Multikey indexes cannot cover queries over array field(s).

Geospatial indexes cannot cover a query.
createIndexes supports building one or more indexes on a collection. createIndexes uses a combination of memory and temporary files on disk to complete index builds. The default limit on memory usage for createIndexes is 200 megabytes (for versions 4.2.3 and later) and 500 megabytes (for versions 4.2.2 and earlier), shared between all indexes built using a single createIndexes command. Once the memory limit is reached, createIndexes uses temporary disk files in a subdirectory named _tmp within the --dbpath directory to complete the build.
You can override the memory limit by setting the maxIndexBuildMemoryUsageMegabytes server parameter. Setting a higher memory limit may result in faster completion of index builds. However, setting this limit too high relative to the unused RAM on your system can result in memory exhaustion and server shutdown.
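For example, a sketch of raising the limit at runtime (the value 1000 is illustrative; size it against the RAM that is actually unused on your system):

db.adminCommand( { setParameter: 1, maxIndexBuildMemoryUsageMegabytes: 1000 } )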
Changed in version 4.2.
"4.2"
, the index build memory limit applies to all index builds."4.0"
, the index build memory limit only applies to foreground index builds.Index builds may be initiated either by a user command such as Create Index or by an administrative process such as an initial sync. 索引构建可以由用户命令(如创建索引)或管理过程(如初始同步)启动。Both are subject to the limit set by 两者都受maxIndexBuildMemoryUsageMegabytes
.maxIndexBuildMemoryUsageMegabytes
设置的限制。
An initial sync operation populates only one collection at a time and has no risk of exceeding the memory limit. However, it is possible for a user to start index builds on multiple collections in multiple databases simultaneously and potentially consume an amount of memory greater than the limit set in maxIndexBuildMemoryUsageMegabytes.
To minimize the impact of building an index on replica sets and sharded clusters with replica set shards, use a rolling index build procedure as described in Rolling Index Builds on Replica Sets.
The following index types only support simple binary comparison and do not support collation:
- text
- 2d
- geoHaystack
To create a text, a 2d, or a geoHaystack index on a collection that has a non-simple collation, you must explicitly specify {collation: {locale: "simple"} } when creating the index.
If you specify the maximum number of documents in a capped collection with create's max parameter, the value must be less than 2^31 documents.
If you do not specify a maximum number of documents when creating a capped collection, there is no limit on the number of documents.
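For example, a sketch of creating a capped collection with a document cap (the collection name and sizes are illustrative; a size in bytes is always required for capped collections):

db.createCollection( "log", { capped: true, size: 104857600, max: 5000 } )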
Replica sets can have up to 7 voting members. For replica sets with more than 7 total members, see Non-Voting Members.
If you do not explicitly specify an oplog size (i.e. with oplogSizeMB or --oplogSize), MongoDB will create an oplog that is no larger than 50 gigabytes. [1]

[1] Starting in MongoDB 4.0, the oplog can grow past its configured size limit to avoid deleting the majority commit point.
Sharded clusters have the restrictions and thresholds described here.
$where does not permit references to the db object from the $where function. This is uncommon in un-sharded collections.
The geoSearch command is not supported in sharded environments.
Starting in MongoDB 3.0, an index cannot cover a query on a sharded collection when run against a mongos if the index does not contain the shard key, with the following exception for the _id index: if a query on a sharded collection only specifies a condition on the _id field and returns only the _id field, the _id index can cover the query when run against a mongos even if the _id field is not the shard key.
In previous versions, an index cannot cover a query on a sharded collection when run against a mongos.
An existing collection can only be sharded if its size does not exceed specific limits. These limits can be estimated based on the average size of all shard key values, and the configured chunk size.

These limits only apply for the initial sharding operation. Sharded collections can grow to any size after successfully enabling sharding.
Use the following formulas to calculate the theoretical maximum collection size:
maxSplits = 16777216 (bytes) / <average size of shard key values in bytes>
maxCollectionSize (MB) = maxSplits * (chunkSize / 2)
If maxCollectionSize is less than or nearly equal to the target collection, increase the chunk size to ensure successful initial sharding. If there is doubt as to whether the result of the calculation is too 'close' to the target collection size, it is likely better to increase the chunk size.
After successful initial sharding, you can reduce the chunk size as needed. If you later reduce the chunk size, it may take time for all chunks to split to the new size. See Modify Chunk Size in a Sharded Cluster for instructions on modifying chunk size.
This table illustrates the approximate maximum collection sizes using the formulas described above:
| Average Size of Shard Key Values | 512 bytes | 256 bytes | 128 bytes | 64 bytes |
|---|---|---|---|---|
| Maximum Number of Splits | 32,768 | 65,536 | 131,072 | 262,144 |
| Max Collection Size (64 MB Chunk Size) | 1 TB | 2 TB | 4 TB | 8 TB |
| Max Collection Size (128 MB Chunk Size) | 2 TB | 4 TB | 8 TB | 16 TB |
| Max Collection Size (256 MB Chunk Size) | 4 TB | 8 TB | 16 TB | 32 TB |
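As a worked example of the formulas, assume an average shard key value size of 128 bytes and a 64 megabyte chunk size (the third data column of the table):

maxSplits = 16777216 / 128 = 131,072
maxCollectionSize = 131,072 * (64 / 2) MB = 4,194,304 MB ≈ 4 TB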
All update and remove() operations for a sharded collection that specify the justOne or multi: false option must include the shard key or the _id field in the query specification.
update and remove() operations specifying justOne or multi: false in a sharded collection which do not contain either the shard key or the _id field return an error.
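For example, a sketch of a single-document delete that satisfies this requirement (the collection name and _id value are hypothetical):

// Valid: the query includes the _id field even though _id is not the shard key
db.orders.remove( { _id: ObjectId("507f191e810c19729de860ea") }, { justOne: true } )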
MongoDB does not support unique indexes across shards, except when the unique index contains the full shard key as a prefix of the index. In these situations MongoDB will enforce uniqueness across the full key, not a single field.
See Unique Constraints on Arbitrary Fields for an alternate approach.
By default, MongoDB cannot move a chunk if the number of documents in the chunk is greater than 1.3 times the result of dividing the configured chunk size by the average document size. db.collection.stats() includes the avgObjSize field, which represents the average document size in the collection.
For chunks that are too large to migrate, starting in MongoDB 4.4:
- The balancer setting attemptToBalanceJumboChunks allows the balancer to migrate chunks too large to move as long as the chunks are not labeled jumbo.
- The moveChunk command can specify a new option forceJumbo to allow for the migration of chunks that are too large to move. The chunks may or may not be labeled jumbo.

Starting in version 4.4, MongoDB removes the limit on the shard key size.
For MongoDB 4.2 and earlier, a shard key cannot exceed 512 bytes.
A shard key index can be an ascending index on the shard key, a compound index that starts with the shard key and specifies ascending order for the shard key, or a hashed index.
A shard key index cannot be an index that specifies a multikey index, a text index or a geospatial index on the shard key fields.
Your options for changing a shard key depend on the version of MongoDB that you are running. In MongoDB 4.2 and earlier, to change a shard key you must dump the collection data, drop the original sharded collection, configure sharding with the new shard key, and restore the data; later versions add in-place options such as refining the shard key.
For clusters with high insert volumes, a shard key with monotonically increasing and decreasing keys can affect insert throughput. If your shard key is the _id field, be aware that the default values of the _id fields are ObjectIds which have generally increasing values.
When inserting documents with monotonically increasing shard keys, all inserts belong to the same chunk on a single shard. The system eventually divides the chunk range that receives all write operations and migrates its contents to distribute data more evenly. However, at any moment the cluster directs insert operations only to a single shard, which creates an insert throughput bottleneck.

If the operations on the cluster are predominately read operations and updates, this limitation may not affect the cluster.

To avoid this constraint, use a hashed shard key or select a field that does not increase or decrease monotonically.
Hashed shard keys and hashed indexes store hashes of keys with ascending values.
If MongoDB cannot use an index or indexes to obtain the sort order, MongoDB must perform a blocking sort operation on the data. The name refers to the requirement that the SORT stage reads all input documents before returning any output documents, blocking the flow of data for that specific query.
If MongoDB requires using more than 100 megabytes of system memory for the blocking sort operation, MongoDB returns an error unless the query specifies cursor.allowDiskUse() (new in MongoDB 4.4). allowDiskUse() allows MongoDB to use temporary files on disk to store data exceeding the 100 megabyte system memory limit while processing a blocking sort operation.
Changed in version 4.4.

For more information on sorts and index use, see Sort and Index Use.
Each individual pipeline stage has a limit of 100 megabytes of RAM. By default, if a stage exceeds this limit, MongoDB produces an error. For some pipeline stages you can allow pipeline processing to take up more space by using the allowDiskUse option to enable aggregation pipeline stages to write data to temporary files.
The $search aggregation stage is not restricted to 100 megabytes of RAM because it runs in a separate process.
Examples of stages that can spill to disk when allowDiskUse is true are:
$bucket
$bucketAuto
$group
$sort
$sortByCount
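For example, a sketch of a $group stage that is permitted to spill to disk (the collection and field names are hypothetical):

db.sales.aggregate(
  [ { $group: { _id: "$region", total: { $sum: "$amount" } } } ],
  { allowDiskUse: true }
)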
Pipeline stages operate on streams of documents, with each pipeline stage taking in documents, processing them, and then outputting the resulting documents.
Some stages can't output any documents until they have processed all incoming documents. These pipeline stages must keep their stage output in RAM until all incoming documents are processed. As a result, these pipeline stages may require more space than the 100 MB limit.
If the results of one of your $sort pipeline stages exceed the limit, consider adding a $limit stage.
Starting in MongoDB 4.2, the profiler log messages and diagnostic log messages include a usedDisk indicator if any aggregation stage wrote data to temporary files due to memory restrictions.
The $out stage cannot be used in conjunction with read concern "linearizable". If you specify "linearizable" read concern for db.collection.aggregate(), you cannot include the $out stage in the pipeline.

The $merge stage cannot be used in conjunction with read concern "linearizable". If you specify "linearizable" read concern for db.collection.aggregate(), you cannot include the $merge stage in the pipeline.

For spherical queries, use the 2dsphere index result.
The use of a 2d index for spherical queries may lead to incorrect results, such as the use of the 2d index for spherical queries that wrap around the poles.
- Valid longitude values are between -180 and 180, both inclusive.
- Valid latitude values are between -90 and 90, both inclusive.

For $geoIntersects or $geoWithin, if you specify a single-ringed polygon that has an area greater than a single hemisphere, include the custom MongoDB coordinate reference system in the $geometry expression; otherwise, $geoIntersects or $geoWithin queries for the complementary geometry. For all other GeoJSON polygons with areas greater than a hemisphere, $geoIntersects or $geoWithin queries for the complementary geometry.
For multi-document transactions:

The collections used in a transaction can be in different databases.
You cannot create new collections in cross-shard write transactions. For example, if you write to an existing collection in one shard and implicitly create a collection in a different shard, MongoDB cannot perform both operations in the same transaction.
"snapshot"
when reading from a capped collection. "snapshot"
。config
, admin
, or local
databases.config
、admin
或local
数据库中的集合。system.*
collections.system.*
集合。explain
).explain
)。getMore
inside the transaction.getMore
。getMore
outside the transaction.getMore
。killCursors
as the first operation in a transaction.killCursors
指定为事务中的第一个操作。Changed in version 4.4.在版本4.4中更改。
The following operations are not allowed in transactions:
- Explicit creation of collections, e.g. the db.createCollection() method, and indexes, e.g. the db.collection.createIndexes() and db.collection.createIndex() methods, when using a read concern level other than "local".
- The listCollections and listIndexes commands and their helper methods.
- Other non-CRUD and non-informational operations, such as createUser, getParameter, count, etc. and their helpers.
Transactions have a lifetime limit as specified by transactionLifetimeLimitSeconds. The default is 60 seconds.
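For example, a sketch of raising the lifetime limit at runtime (the value 90 is illustrative; on sharded clusters the parameter must be set on every shard replica set member):

db.adminCommand( { setParameter: 1, transactionLifetimeLimitSeconds: 90 } )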
100,000 writes are allowed in a single batch operation, defined by a single request to the server.
Changed in version 3.6.
The limit raises from 1,000 to 100,000 writes. This limit also applies to legacy OP_INSERT messages.
The Bulk() operations in mongosh and comparable methods in the drivers do not have this limit.
The view definition pipeline cannot include the $out or the $merge stage. If the view definition includes a nested pipeline (e.g. the view definition includes a $lookup or $facet stage), this restriction applies to the nested pipelines as well.
Views have the following operation restrictions:
- find() operations on views do not support the following projection operators: $, $elemMatch, $slice, $meta.
- Views do not support geoNear operations (i.e. the $geoNear pipeline stage).

New in version 4.4.
$-Prefixed Field Path Restriction:

Starting in MongoDB 4.4, the find() and findAndModify() projection cannot project a field that starts with $, with the exception of the DBRef fields. For example, starting in MongoDB 4.4, the following operation is invalid:

db.inventory.find( {}, { "$instock.warehouse": 0, "$item": 0, "detail.$price": 1 } ) // Invalid starting in 4.4

In earlier versions, MongoDB ignores the $-prefixed field projections.

$ Positional Operator Placement Restriction:

Starting in MongoDB 4.4, the $ projection operator can only appear at the end of the field path; e.g. "field.$" or "fieldA.fieldB.$". For example, starting in MongoDB 4.4, the following operation is invalid:

db.inventory.find( { }, { "instock.$.qty": 1 } ) // Invalid starting in 4.4

To resolve, remove the component of the field path that follows the $ projection operator. In previous versions, MongoDB ignores the part of the path that follows the $; i.e. the projection is treated as "instock.$".

Empty Field Name Projection Restriction:

Starting in MongoDB 4.4, find() and findAndModify() projection cannot include a projection of an empty field name. For example, starting in MongoDB 4.4, the following operation is invalid:

db.inventory.find( { }, { "": 0 } ) // Invalid starting in 4.4

Path Collision: Embedded Documents and Its Fields:

Starting in MongoDB 4.4, it is illegal to project an embedded document with any of the embedded document's fields. For example, consider a collection inventory with documents that contain a size field:

{ ..., size: { h: 10, w: 15.25, uom: "cm" }, ... }

Starting in MongoDB 4.4, the following operation fails with a Path collision error because it attempts to project both the size document and the size.uom field:

db.inventory.find( {}, { size: 1, "size.uom": 1 } ) // Invalid starting in 4.4
{ "size.uom": 1, size: 1 }
produces the same result as the projection document { size: 1 }
.{ "size.uom": 1, size: 1 }
产生的结果与投影文档{ size: 1 }
相同。{ "size.uom": 1, size: 1, "size.h": 1 }
produces the same result as the projection document { "size.uom": 1, "size.h": 1 }
.{ "size.uom": 1, size: 1, "size.h": 1 }
产生的结果与投影文档{ "size.uom": 1, "size.h": 1 }
相同。$slice
of an Array and Embedded Fields$slice
数组和嵌入字段find()
and findAndModify()
projection cannot contain both a $slice
of an array and a field embedded in the array.find()
和findAndModify()
投影不能同时包含数组的$slice
和数组中嵌入的字段。inventory
that contains an array field instock
:instock
的集合inventory
:{ ..., instock: [ { warehouse: "A", qty: 35 }, { warehouse: "B", qty: 15 }, { warehouse: "C", qty: 35 } ], ... }
Path collision
error:db.inventory.find( {}, { "instock": { $slice: 1 }, "instock.warehouse": 0 } ) // Invalid starting in 4.4
$slice: 1
) in the instock
array but suppresses the warehouse
field in the projected element. instock
数组中的第一个元素($slice:1
),但不显示投影元素中的仓库字段。db.collection.aggregate()
method with two separate $project
stages.db.collection.aggregate()
方法与两个单独的$project
阶段一起使用。$
Positional Operator and $slice
Restriction$
位置运算符和$slice
限制find()
and findAndModify()
projection cannot include $slice
projection expression as part of a $
projection expression.find()
和findAndModify()
投影不能将$slice
投影表达式作为$
投影表达式的一部分。db.inventory.find( { "instock.qty": { $gt: 25 } }, { "instock.$": { $slice: 1 } } ) // Invalid starting in 4.4
instock.$
) in the instock
array that matches the query condition; i.e. the positional projection "instock.$"
takes precedence and the $slice:1
is a no-op. instock
数组中与查询条件匹配的第一个元素(instock.$
);即,位置投影"instock.$"
优先,而$slice:1
是no-op。"instock.$": {$slice: 1 }
does not exclude any other document field."instock.$": {$slice: 1 }
不排除任何其他文档字段。$external
用户名限制
To use Client Sessions and Causal Consistency Guarantees with 要对$external
authentication users (Kerberos, LDAP, or x.509 users), usernames cannot be greater than 10k bytes.$external
身份验证用户(Kerberos、LDAP或x.509用户)使用客户端会话和因果一致性保证,用户名不能大于10k字节。
Sessions that receive no read or write operations for 30 minutes, or that are not refreshed using refreshSessions within this threshold, are marked as expired and can be closed by the MongoDB server at any time. Closing a session kills any in-progress operations and open cursors associated with the session. This includes cursors configured with noCursorTimeout() or a maxTimeMS() greater than 30 minutes.
Consider an application that issues a db.collection.find(). The server returns a cursor along with a batch of documents defined by the cursor.batchSize() of the find(). The session refreshes each time the application requests a new batch of documents from the server. However, if the application takes longer than 30 minutes to process the current batch of documents, the session is marked as expired and closed. When the application requests the next batch of documents, the server returns an error as the cursor was killed when the session was closed.
For operations that return a cursor, if the cursor may be idle for longer than 30 minutes, issue the operation within an explicit session using Mongo.startSession() and periodically refresh the session using the refreshSessions command. For example:
var session = db.getMongo().startSession()
var sessionId = session.getSessionId().id
var cursor = session.getDatabase("examples").getCollection("data").find().noCursorTimeout()
var refreshTimestamp = new Date() // take note of time at operation start

while (cursor.hasNext()) {
  // Check if more than 5 minutes have passed since the last refresh
  if ( (new Date() - refreshTimestamp) / 1000 > 300 ) {
    print("refreshing session")
    db.adminCommand({ "refreshSessions": [sessionId] })
    refreshTimestamp = new Date()
  }
  // process cursor normally
}
In the example operation, the db.collection.find() method is associated with an explicit session. The cursor is configured with noCursorTimeout() to prevent the server from closing the cursor if idle. The while loop includes a block that uses refreshSessions to refresh the session every 5 minutes. Since the session will never exceed the 30 minute idle timeout, the cursor can remain open indefinitely.
For MongoDB drivers, defer to the driver documentation for instructions and syntax for creating sessions.