Definition
aggregate
Performs an aggregation operation using the aggregation pipeline. The pipeline allows users to process data from a collection or other source with a sequence of stage-based manipulations.

Tip
In mongosh, this command can also be run through the db.aggregate() and db.collection.aggregate() helper methods or with the watch() helper method.
Helper methods are convenient for mongosh users, but they may not return the same level of information as database commands. In cases where the convenience is not needed or the additional return fields are required, use the database command.
Compatibility
This command is available in deployments hosted in the following environments:
- MongoDB Atlas: The fully managed service for MongoDB deployments in the cloud

Important
This command has limited support in M0 and Flex clusters. For more information, see Unsupported Commands.

- MongoDB Enterprise: The subscription-based, self-managed version of MongoDB
- MongoDB Community: The source-available, free-to-use, and self-managed version of MongoDB
Syntax
Changed in version 5.0.
The command has the following syntax:
db.runCommand(
{
aggregate: "<collection>" || 1,
pipeline: [ <stage>, <...> ],
explain: <boolean>,
allowDiskUse: <boolean>,
cursor: <document>,
maxTimeMS: <int>,
bypassDocumentValidation: <boolean>,
readConcern: <document>,
collation: <document>,
hint: <string or document>,
comment: <any>,
writeConcern: <document>,
let: <document> // Added in MongoDB 5.0
}
)
Command Fields
The aggregate command takes the following fields as arguments:
| Field | Type | Description |
| --- | --- | --- |
| aggregate | string | The name of the collection or view that acts as the input for the aggregation pipeline. Use 1 for collection agnostic commands. |
| pipeline | array | An array of aggregation pipeline stages that process and transform the document stream as part of the aggregation pipeline. |
| explain | boolean | Optional. Specifies whether to return information on the processing of the pipeline. |
| allowDiskUse | boolean | Optional. Overrides allowDiskUseByDefault for a specific query. You can use this option to either allow writing temporary files to disk when allowDiskUseByDefault is set to false, or prohibit writing temporary files to disk when allowDiskUseByDefault is set to true. If allowDiskUseByDefault is set to true and the server requires more than 100 megabytes of memory for a pipeline execution stage, MongoDB automatically writes temporary files to disk unless the query specifies { allowDiskUse: false }. The profiler log messages and diagnostic log messages include a usedDisk indicator if any aggregation stage wrote data to temporary files due to memory restrictions. |
| cursor | document | A document that contains options controlling the creation of the cursor object. You must use the aggregate command with the cursor option unless the command includes the explain option. |
| maxTimeMS | non-negative integer | Optional. Specifies a time limit in milliseconds. If you do not specify a value for maxTimeMS, operations will not time out. A value of 0 explicitly specifies the default unbounded behavior. MongoDB terminates operations that exceed their allotted time limit using the same mechanism as db.killOp(). MongoDB only terminates an operation at one of its designated interrupt points. |
| bypassDocumentValidation | boolean | Optional. Applicable only if you specify the $out or $merge aggregation stages. Enables aggregate to bypass schema validation during the operation. This lets you insert documents that do not meet the validation requirements. |
| readConcern | document | Optional. Specifies the read concern. The readConcern option has the following syntax: readConcern: { level: <value> }. The $out and $merge stages cannot be used in conjunction with read concern "linearizable". If you specify "linearizable" read concern for db.collection.aggregate(), you cannot include either stage in the pipeline. |
| collation | document | Optional. Specifies the collation to use for the operation. Collation allows users to specify language-specific rules for string comparison, such as rules for lettercase and accent marks. The collation option has the following syntax: collation: { locale: <string>, ... }. When specifying collation, the locale field is mandatory; all other collation fields are optional. If the collation is unspecified but the collection has a default collation, the operation uses the collation specified for the collection. If no collation is specified for the collection or for the operations, MongoDB uses the simple binary comparison used in prior versions for string comparisons. You cannot specify multiple collations for an operation. For example, you cannot specify different collations per field, or if performing a find with a sort, you cannot use one collation for the find and another for the sort. |
| hint | string or document | Optional. The index to use for the aggregation. The index is on the initial collection/view against which the aggregation is run. Specify the index either by the index name or by the index specification document. The hint does not apply to $lookup or $graphLookup stages. |
| comment | any | Optional. A user-provided comment to attach to this command. Once set, this comment appears alongside records of this command in the following locations: mongod log messages, database profiler output, and currentOp output. A comment can be any valid BSON type (string, integer, object, array, etc). Any comment set on an aggregate command is inherited by any subsequent getMore commands run on the aggregate cursor. |
| writeConcern | document | Optional. A document that expresses the write concern to use with the $out or $merge stage. Omit to use the default write concern with the $out or $merge stage. |
| let | document | Optional. Specifies a document with a list of variables. This allows you to improve command readability by separating the variables from the query text. The document syntax is: { <variable_name_1>: <expression_1>, ..., <variable_name_n>: <expression_n> }. The variable is set to the value returned by the expression, and cannot be changed afterwards. To access the value of a variable in the command, use the double dollar sign prefix ($$) together with your variable name in the form $$<variable_name>. To use a variable to filter results in a pipeline $match stage, you must access the variable within the $expr operator. For a complete example using let and variables, see Use Variables in let. New in version 5.0. |
You must use the aggregate command with the cursor option unless the command includes the explain option.

- To indicate a cursor with the default batch size, specify cursor: {}.
- To indicate a cursor with a non-default batch size, use cursor: { batchSize: <num> }.
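For example, a sketch of an aggregate command that combines several of the fields described above (the sales collection, field names, and values are illustrative, not taken from this page):

db.runCommand( {
   aggregate: "sales",
   pipeline: [
      { $match: { status: "active" } },
      { $group: { _id: "$region", total: { $sum: "$amount" } } }
   ],
   cursor: { },                        // default batch size
   maxTimeMS: 60000,                   // abort if processing exceeds 60 seconds
   allowDiskUse: true,                 // permit temporary files for memory-intensive stages
   comment: "regional totals report"   // surfaces in logs, profiler output, and currentOp
} )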
For more information about the aggregation pipeline, see the Aggregation Pipeline documentation.
Sessions
For cursors created inside a session, you cannot call getMore
outside the session.
Similarly, for cursors created outside of a session, you cannot call getMore
inside a session.
Session Idle Timeout
MongoDB drivers and mongosh
associate all operations with a server session, with the exception of unacknowledged write operations. For operations not explicitly associated with a session (i.e. using Mongo.startSession()
), MongoDB drivers and mongosh
create an implicit session and associate it with the operation.
If a session is idle for longer than 30 minutes, the MongoDB server marks that session as expired and may close it at any time. When the MongoDB server closes the session, it also kills any in-progress operations and open cursors associated with the session. This includes cursors configured with noCursorTimeout()
or a maxTimeMS()
greater than 30 minutes.
For operations that return a cursor, if the cursor may be idle for longer than 30 minutes, issue the operation within an explicit session using Mongo.startSession()
and periodically refresh the session using the refreshSessions
command. See Session Idle Timeout for more information.
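For example, a minimal sketch (the test database and orders collection are illustrative, and it assumes session.getSessionId() returns the session id document that refreshSessions expects):

// Start an explicit session and run the aggregation within it
var session = db.getMongo().startSession()
var sessionDb = session.getDatabase("test")
var cursor = sessionDb.orders.aggregate( [
   { $match: { status: "A" } }
] )

// Periodically refresh the session so the server does not expire it
// after 30 minutes of inactivity while the cursor is still open.
db.adminCommand( { refreshSessions: [ session.getSessionId() ] } )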
Transactions
aggregate
can be used inside distributed transactions.
However, the following stages are not allowed within transactions:
$collStats
$currentOp
$indexStats
$listLocalSessions
$listSessions
$merge
$out
$planCacheStats
$unionWith
You also cannot specify the explain
option.
- For cursors created outside of a transaction, you cannot call getMore inside the transaction.
- For cursors created in a transaction, you cannot call getMore outside the transaction.
Important
In most cases, a distributed transaction incurs a greater performance cost over single document writes, and the availability of distributed transactions should not be a replacement for effective schema design. For many scenarios, the denormalized data model (embedded documents and arrays) will continue to be optimal for your data and use cases. That is, for many scenarios, modeling your data appropriately will minimize the need for distributed transactions.
For additional transactions usage considerations (such as runtime limit and oplog size limit), see also Production Considerations.
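As an illustration (a sketch that assumes a test.orders namespace), an aggregation that avoids the restricted stages can run inside a transaction as follows:

// Run an aggregation inside a transaction using an explicit session
const session = db.getMongo().startSession()
const orders = session.getDatabase("test").orders
session.startTransaction()
try {
   // Exhaust the cursor inside the transaction; getMore cannot be
   // called outside the transaction for cursors created within it.
   const results = orders.aggregate( [
      { $match: { status: "A" } },
      { $group: { _id: "$cust_id", total: { $sum: "$amount" } } }
   ] ).toArray()
   session.commitTransaction()
} catch (error) {
   session.abortTransaction()
   throw error
} finally {
   session.endSession()
}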
Client Disconnection
For aggregate operations that do not include the $out or $merge stages:

If the client that issued aggregate disconnects before the operation completes, MongoDB marks aggregate for termination using killOp.
Query Settings
New in version 8.0.
You can use query settings to set index hints, set operation rejection filters, and other fields. The settings apply to the query shape on the entire cluster. The cluster retains the settings after shutdown.
The query optimizer uses the query settings as an additional input during query planning, which affects the plan selected to run the query. You can also use query settings to block a query shape.
To add query settings and explore examples, see setQuerySettings
.
You can add query settings for find
, distinct
, and aggregate
commands.
Query settings have more functionality and are preferred over deprecated index filters.
To remove query settings, use removeQuerySettings
. To obtain the query settings, use a $querySettings
stage in an aggregation pipeline.
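As a rough sketch of the retrieval side (syntax assumed from the $querySettings reference; consult that page for the authoritative form), the stage runs as a collectionless aggregation:

// List the query settings currently set on the deployment
db.aggregate( [
   { $querySettings: { } }
] )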
Stable API
When using Stable API V1:
- You cannot use the following stages in an aggregate command:
- Don't include the explain field in an aggregate command. If you do, the server returns an APIStrictError error.
- When using the $collStats stage, you can only use the count field. No other $collStats fields are available.
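For example, a sketch of a V1-compatible $collStats usage (it assumes mongosh was started with --apiVersion 1, and the foodColl collection name is illustrative):

db.runCommand( {
   aggregate: "foodColl",
   pipeline: [ { $collStats: { count: { } } } ],   // only the count field is permitted under V1
   cursor: { }
} )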
Example
You must use the aggregate command with the cursor option unless the command includes the explain option.

- To indicate a cursor with the default batch size, specify cursor: {}.
- To indicate a cursor with a non-default batch size, use cursor: { batchSize: <num> }.
Rather than run the aggregate
command directly, most users should use the db.collection.aggregate()
helper provided in mongosh
or the equivalent helper in their driver. In 2.6 and later, the db.collection.aggregate()
helper always returns a cursor.
Except for the first two examples which demonstrate the command syntax, the examples in this page use the db.collection.aggregate()
helper.
Aggregate Data with Multi-Stage Pipeline
A collection articles
contains documents such as the following:
{
_id: ObjectId("52769ea0f3dc6ead47c9a1b2"),
author: "abc123",
title: "zzz",
tags: [ "programming", "database", "mongodb" ]
}
The following example performs an aggregate
operation on the articles
collection to calculate the count of each distinct element in the tags
array that appears in the collection.
db.runCommand( {
aggregate: "articles",
pipeline: [
{ $project: { tags: 1 } },
{ $unwind: "$tags" },
{ $group: { _id: "$tags", count: { $sum : 1 } } }
],
cursor: { }
} )
In mongosh
, this operation can use the db.collection.aggregate()
helper as in the following:
db.articles.aggregate( [
{ $project: { tags: 1 } },
{ $unwind: "$tags" },
{ $group: { _id: "$tags", count: { $sum : 1 } } }
] )
Use $currentOp on an Admin Database
The following example runs a pipeline with two stages on the admin database. The first stage runs the $currentOp
operation and the second stage filters the results of that operation.
db.adminCommand( {
aggregate : 1,
pipeline : [ {
$currentOp : { allUsers : true, idleConnections : true } }, {
$match : { shard : "shard01" }
}
],
cursor : { }
} )
Note
The aggregate
command does not specify a collection and instead takes the form {aggregate: 1}
. This is because the initial $currentOp
stage does not draw input from a collection. It produces its own data that the rest of the pipeline uses.
The db.aggregate() helper has been added to assist in running collectionless aggregations such as this. The above aggregation could also be run through that helper.
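For instance, a sketch of the helper form, using db.getSiblingDB("admin") so it can be issued from any current database:

db.getSiblingDB("admin").aggregate( [
   { $currentOp : { allUsers : true, idleConnections : true } },
   { $match : { shard : "shard01" } }
] )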
Return Information on the Aggregation Operation
The following aggregation operation sets the optional field explain
to true
to return information about the aggregation operation.
db.orders.aggregate([
{ $match: { status: "A" } },
{ $group: { _id: "$cust_id", total: { $sum: "$amount" } } },
{ $sort: { total: -1 } }
],
{ explain: true }
)
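Expressed as a database command, the same request can omit the cursor option because explain is present; a sketch:

db.runCommand( {
   aggregate: "orders",
   pipeline: [
      { $match: { status: "A" } },
      { $group: { _id: "$cust_id", total: { $sum: "$amount" } } },
      { $sort: { total: -1 } }
   ],
   explain: true   // cursor may be omitted when explain is included
} )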
Note
The explain output is subject to change between releases.
Tip
See also: the db.collection.aggregate() method.

Interaction with allowDiskUseByDefault
Starting in MongoDB 6.0, pipeline stages that require more than 100 megabytes of memory to execute write temporary files to disk by default. These temporary files last for the duration of the pipeline execution and can influence storage space on your instance. In earlier versions of MongoDB, you must pass { allowDiskUse: true } to individual find and aggregate commands to enable this behavior.
Individual find
and aggregate
commands can override the allowDiskUseByDefault
parameter by either:
- Using { allowDiskUse: true } to allow writing temporary files out to disk when allowDiskUseByDefault is set to false
- Using { allowDiskUse: false } to prohibit writing temporary files out to disk when allowDiskUseByDefault is set to true
The profiler log messages and diagnostic log messages include a usedDisk indicator if any aggregation stage wrote data to temporary files due to memory restrictions.
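For example, a sketch that disallows temporary files for a single aggregation on the orders collection, regardless of the server-wide default:

db.orders.aggregate(
   [
      { $group: { _id: "$cust_id", total: { $sum: "$amount" } } },
      { $sort: { total: -1 } }
   ],
   { allowDiskUse: false }   // fail rather than spill to disk if the 100 megabyte limit is exceeded
)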
Aggregate Data Specifying Batch Size
To specify an initial batch size, specify the batchSize
in the cursor
field, as in the following example:
db.orders.aggregate( [
{ $match: { status: "A" } },
{ $group: { _id: "$cust_id", total: { $sum: "$amount" } } },
{ $sort: { total: -1 } },
{ $limit: 2 }
],
{ cursor: { batchSize: 0 } }
)
The { cursor: { batchSize: 0 } }
document, which specifies the size of the initial batch size, indicates an empty first batch. This batch size is useful for quickly returning a cursor or failure message without doing significant server-side work.
To specify batch size for subsequent getMore
operations (after the initial batch), use the batchSize
field when running the getMore
command.
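For instance, a sketch that requests the next batch of 100 documents, where <cursorId> is the cursor id returned by the initial aggregate command:

db.runCommand( {
   getMore: <cursorId>,      // 64-bit cursor id from the aggregate response
   collection: "orders",
   batchSize: 100
} )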
Specify a Collation
Collation allows users to specify language-specific rules for string comparison, such as rules for lettercase and accent marks.
A collection myColl has the following documents:
{ _id: 1, category: "café", status: "A" }
{ _id: 2, category: "cafe", status: "a" }
{ _id: 3, category: "cafE", status: "a" }
The following aggregation operation includes the Collation option:
db.myColl.aggregate(
[ { $match: { status: "A" } }, { $group: { _id: "$category", count: { $sum: 1 } } } ],
{ collation: { locale: "fr", strength: 1 } }
);
For descriptions on the collation fields, see Collation Document.
Hint an Index
Create a collection foodColl
with the following documents:
db.foodColl.insertMany( [
{ _id: 1, category: "cake", type: "chocolate", qty: 10 },
{ _id: 2, category: "cake", type: "ice cream", qty: 25 },
{ _id: 3, category: "pie", type: "boston cream", qty: 20 },
{ _id: 4, category: "pie", type: "blueberry", qty: 15 }
] )
Create the following indexes:
db.foodColl.createIndex( { qty: 1, type: 1 } );
db.foodColl.createIndex( { qty: 1, category: 1 } );
The following aggregation operation includes the hint
option to force the usage of the specified index:
db.foodColl.aggregate(
[ { $sort: { qty: 1 }}, { $match: { category: "cake", qty: 10 } }, { $sort: { type: -1 } } ],
{ hint: { qty: 1, category: 1 } }
)
Override Default Read Concern
To override the default read concern level, use the readConcern
option. The getMore
command uses the readConcern
level specified in the originating aggregate
command.
You cannot use the $out
or the $merge
stage in conjunction with read concern "linearizable"
. That is, if you specify "linearizable"
read concern for db.collection.aggregate()
, you cannot include either stage in the pipeline.
The following operation on a replica set specifies a read concern of "majority"
to read the most recent copy of the data confirmed as having been written to a majority of the nodes.
Important
- You can specify read concern level "majority" for an aggregation that includes an $out stage.
- Regardless of the read concern level, the most recent data on a node may not reflect the most recent version of the data in the system.
db.restaurants.aggregate(
[ { $match: { rating: { $lt: 5 } } } ],
{ readConcern: { level: "majority" } }
)
To ensure that a single thread can read its own writes, use "majority"
read concern and "majority"
write concern against the primary of the replica set.
Use Variables in let
New in version 5.0.
To define variables that you can access elsewhere in the command, use the let
option.
Note
To filter results with a variable in a pipeline $match stage, you must access the variable within the $expr operator, as the following example does.
Create a collection cakeSales
containing sales for cake flavors:
db.cakeSales.insertMany( [
{ _id: 1, flavor: "chocolate", salesTotal: 1580 },
{ _id: 2, flavor: "strawberry", salesTotal: 4350 },
{ _id: 3, flavor: "cherry", salesTotal: 2150 }
] )
The following example:
- retrieves the cake that has a salesTotal greater than 3000, which is the cake with an _id of 2
- defines a targetTotal variable in let, which is referenced in $gt as $$targetTotal
db.runCommand( {
aggregate: db.cakeSales.getName(),
pipeline: [
{ $match: {
$expr: { $gt: [ "$salesTotal", "$$targetTotal" ] }
} },
],
cursor: {},
let: { targetTotal: 3000 }
} )
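In mongosh, the same query can also be expressed with the db.collection.aggregate() helper, which accepts let in its options document; a sketch:

db.cakeSales.aggregate(
   [
      { $match: { $expr: { $gt: [ "$salesTotal", "$$targetTotal" ] } } }
   ],
   { let: { targetTotal: 3000 } }
)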