
db.collection.aggregate()

Definition

db.collection.aggregate(pipeline, options)
Important

mongosh Method

This page documents a mongosh method. This is not the documentation for database commands or language-specific drivers, such as Node.js.

For the database command, see the aggregate command.

For MongoDB API drivers, refer to the language-specific MongoDB driver documentation.

For the legacy mongo shell documentation, refer to the documentation for the corresponding MongoDB Server release:

mongo shell v4.4

Calculates aggregate values for the data in a collection or a view.

Parameter: pipeline
Type: array
Description: A sequence of data aggregation operations or stages. See the aggregation pipeline operators for details.
The method can still accept the pipeline stages as separate arguments instead of as elements in an array; however, if you do not specify the pipeline as an array, you cannot specify the options parameter. (See the sketch following this table.)

Parameter: options
Type: document
Description: Optional. Additional options that aggregate() passes to the aggregate command. Available only if you specify the pipeline as an array.
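For illustration, a minimal sketch of both calling forms, using the orders collection from the Examples section below:

// Pipeline as an array; the options parameter is allowed:
db.orders.aggregate(
   [ { $match: { status: "A" } } ],
   { allowDiskUse: true }
)

// Legacy form: stages passed as separate arguments; no options parameter can be given:
db.orders.aggregate(
   { $match: { status: "A" } },
   { $sort: { amount: -1 } }
)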

The options document can contain the following fields and values:

Changed in version 5.0.

Field: explain
Type: boolean
Description: Optional. Specifies to return the information on the processing of the pipeline. See Return Information on Aggregation Pipeline Operation for an example.
Not available in multi-document transactions.

Field: allowDiskUse
Type: boolean
Description: Optional. Enables writing to temporary files. When set to true, aggregation operations can write data to the _tmp subdirectory in the dbPath directory. See Interaction with allowDiskUseByDefault for an example.
Starting in MongoDB 4.2, the profiler log messages and diagnostic log messages include a usedDisk indicator if any aggregation stage wrote data to temporary files due to memory restrictions.

Field: cursor
Type: document
Description: Optional. Specifies the initial batch size for the cursor. The value of the cursor field is a document with the field batchSize. See Specify an Initial Batch Size for syntax and example.

Field: maxTimeMS
Type: non-negative integer
Description: Optional. Specifies a time limit in milliseconds for processing operations on a cursor. If you do not specify a value for maxTimeMS, operations will not time out. A value of 0 explicitly specifies the default unbounded behavior. (See the sketch following this table.)
MongoDB terminates operations that exceed their allotted time limit using the same mechanism as db.killOp(). MongoDB only terminates an operation at one of its designated interrupt points.

Field: bypassDocumentValidation
Type: boolean
Description: Optional. Applicable only if you specify the $out or $merge aggregation stages.
Enables db.collection.aggregate() to bypass document validation during the operation. This lets you insert documents that do not meet the validation requirements.
Field: readConcern
Type: document
Description: Optional. Specifies the read concern.
Starting in MongoDB 3.6, the readConcern option has the following syntax: readConcern: { level: <value> }
Possible read concern levels are:
  • "local". This is the default read concern level for read operations against the primary and secondaries.
  • "available". Available for read operations against the primary and secondaries. "available" behaves the same as "local" against the primary and non-sharded secondaries. The query returns the instance's most recent data.
  • "majority". Available for replica sets that use the WiredTiger storage engine.
  • "linearizable". Available for read operations on the primary only.
For more information on the read concern levels, see Read Concern Levels.
Starting in MongoDB 4.2, the $out stage cannot be used in conjunction with read concern "linearizable". That is, if you specify "linearizable" read concern for db.collection.aggregate(), you cannot include the $out stage in the pipeline.
The $merge stage cannot be used in conjunction with read concern "linearizable". That is, if you specify "linearizable" read concern for db.collection.aggregate(), you cannot include the $merge stage in the pipeline.
Field: collation
Type: document
Description: Optional.
Specifies the collation to use for the operation.
collation allows users to specify language-specific rules for string comparison, such as rules for lettercase and accent marks.
The collation option has the following syntax:
collation: {
   locale: <string>,
   caseLevel: <boolean>,
   caseFirst: <string>,
   strength: <int>,
   numericOrdering: <boolean>,
   alternate: <string>,
   maxVariable: <string>,
   backwards: <boolean>
}
When specifying collation, the locale field is mandatory; all other collation fields are optional. For descriptions of the fields, see Collation Document.
If the collation is unspecified but the collection has a default collation (see db.createCollection()), the operation uses the collation specified for the collection.
If no collation is specified for the collection or for the operations, MongoDB uses the simple binary comparison used in prior versions for string comparisons.
You cannot specify multiple collations for an operation. For example, you cannot specify different collations per field, or if performing a find with a sort, you cannot use one collation for the find and another for the sort.
Field: hint
Type: string or document
Description: Optional. The index to use for the aggregation. The index is on the initial collection/view against which the aggregation is run.
Specify the index either by the index name or by the index specification document.
Note
The hint does not apply to $lookup and $graphLookup stages.

Field: comment
Type: string
Description: Optional. Users can specify an arbitrary string to help trace the operation through the database profiler, currentOp, and logs.

Field: writeConcern
Type: document
Description: Optional. A document that expresses the write concern to use with the $out or $merge stage.
Omit to use the default write concern with the $out or $merge stage.
Field: let
Type: document
Description: Optional.
Specifies a document with a list of variables. This allows you to improve command readability by separating the variables from the query text.
The document syntax is:
{ <variable_name_1>: <expression_1>,
  ...,
  <variable_name_n>: <expression_n> }
The variable is set to the value returned by the expression, and cannot be changed afterwards.
To access the value of a variable in the command, use the double dollar sign prefix ($$) together with your variable name in the form $$<variable_name>. For example: $$targetTotal.
Note
To use a variable to filter results in a pipeline $match stage, you must access the variable within the $expr operator.
For a complete example using let and variables, see Use Variables in let.
New in version 5.0.
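The Examples section below demonstrates explain, cursor, collation, hint, readConcern, comment, and let. For the remaining options, a hedged sketch (the ordersA output collection name is made up for illustration):

// Limit server-side processing of the aggregation to 60 seconds:
db.orders.aggregate(
   [ { $match: { status: "A" } } ],
   { maxTimeMS: 60000 }
)

// Write the results out with $out, using a majority write concern and
// bypassing any document validation rules on the output collection:
db.orders.aggregate(
   [ { $match: { status: "A" } }, { $out: "ordersA" } ],
   { writeConcern: { w: "majority" }, bypassDocumentValidation: true }
)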
Returns: This method returns:
  • A cursor for the documents produced by the final stage of the aggregation pipeline.
  • If the pipeline includes the explain option, the query returns a document that provides details on the processing of the aggregation operation.
  • If the pipeline includes the $out or $merge operators, the query returns an empty cursor.

Behavior

Error Handling

If an error occurs, the aggregate() helper throws an exception.
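For example, a minimal sketch of catching that exception in mongosh (the unrecognized stage name is deliberate):

try {
   db.orders.aggregate( [ { $invalidStage: {} } ] )   // unknown stage: the helper throws
} catch (e) {
   print( "Aggregation failed: " + e.message )
}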

Cursor Behavior

In mongosh, if the cursor returned from db.collection.aggregate() is not assigned to a variable using the var keyword, then mongosh automatically iterates the cursor up to 20 times. See Iterate a Cursor in mongosh for handling cursors in mongosh.

Cursors returned from aggregation only support cursor methods that operate on evaluated cursors (i.e. cursors whose first batch has been retrieved).
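For example, a sketch of assigning the cursor to a variable and iterating it manually:

var cursor = db.orders.aggregate( [ { $match: { status: "A" } } ] )   // not iterated automatically
while ( cursor.hasNext() ) {
   printjson( cursor.next() )
}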

Tip

See also:

For more information, see Aggregation Pipeline, Aggregation Reference, Aggregation Pipeline Limits, and aggregate.

Sessions

For cursors created inside a session, you cannot call getMore outside the session.

Similarly, for cursors created outside of a session, you cannot call getMore inside a session.

Session Idle Timeout

MongoDB drivers and mongosh associate all operations with a server session, with the exception of unacknowledged write operations. For operations not explicitly associated with a session (i.e. using Mongo.startSession()), MongoDB drivers and mongosh create an implicit session and associate it with the operation.

If a session is idle for longer than 30 minutes, the MongoDB server marks that session as expired and may close it at any time. When the MongoDB server closes the session, it also kills any in-progress operations and open cursors associated with the session. This includes cursors configured with noCursorTimeout() or a maxTimeMS() greater than 30 minutes.

For operations that return a cursor, if the cursor may be idle for longer than 30 minutes, issue the operation within an explicit session using Mongo.startSession() and periodically refresh the session using the refreshSessions command. See Session Idle Timeout for more information.
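A minimal sketch of that pattern, assuming an orders collection in a test database and that session.getSessionId() is available as in the legacy shell:

// Run the aggregation inside an explicit session so the session can be kept alive
// while the cursor is idle.
var session = db.getMongo().startSession()
var cursor = session.getDatabase( "test" ).orders.aggregate( [ { $match: { status: "A" } } ] )

// Periodically (before the 30-minute idle timeout) refresh the session:
db.adminCommand( { refreshSessions: [ session.getSessionId() ] } )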

Transactions

db.collection.aggregate() can be used inside multi-document transactions (see the sketch after the list below).

However, certain stages, such as $out and $merge, are not allowed within transactions.

You also cannot specify the explain option.

  • For cursors created outside of a transaction, you cannot call getMore inside the transaction.
  • For cursors created in a transaction, you cannot call getMore outside the transaction.
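For illustration, a hedged sketch of running the aggregation inside an explicit transaction (the test database name is assumed):

var session = db.getMongo().startSession()
var coll = session.getDatabase( "test" ).orders

session.startTransaction()
try {
   // Only transaction-safe stages; no $out, $merge, or explain.
   var totals = coll.aggregate( [
      { $match: { status: "A" } },
      { $group: { _id: "$cust_id", total: { $sum: "$amount" } } }
   ] ).toArray()
   session.commitTransaction()
} catch (e) {
   session.abortTransaction()
   throw e
}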
Important

In most cases, a multi-document transaction incurs a greater performance cost than single-document writes, and the availability of multi-document transactions should not be a replacement for effective schema design. For many scenarios, the denormalized data model (embedded documents and arrays) will continue to be optimal for your data and use cases. That is, for many scenarios, modeling your data appropriately will minimize the need for multi-document transactions.

For additional transactions usage considerations (such as runtime limit and oplog size limit), see also Production Considerations.

Client Disconnection

For db.collection.aggregate() operations that do not include the $out or $merge stages:

Starting in MongoDB 4.2, if the client that issued db.collection.aggregate() disconnects before the operation completes, MongoDB marks db.collection.aggregate() for termination using killOp.

Examples

The following examples use the collection orders that contains the following documents:

{ _id: 1, cust_id: "abc1", ord_date: ISODate("2012-11-02T17:04:11.102Z"), status: "A", amount: 50 }
{ _id: 2, cust_id: "xyz1", ord_date: ISODate("2013-10-01T17:04:11.102Z"), status: "A", amount: 100 }
{ _id: 3, cust_id: "xyz1", ord_date: ISODate("2013-10-12T17:04:11.102Z"), status: "D", amount: 25 }
{ _id: 4, cust_id: "xyz1", ord_date: ISODate("2013-10-11T17:04:11.102Z"), status: "D", amount: 125 }
{ _id: 5, cust_id: "abc1", ord_date: ISODate("2013-11-12T17:04:11.102Z"), status: "A", amount: 25 }

Group by and Calculate a Sum

The following aggregation operation selects documents with status equal to "A", groups the matching documents by the cust_id field and calculates the total for each cust_id field from the sum of the amount field, and sorts the results by the total field in descending order:

db.orders.aggregate([
   { $match: { status: "A" } },
   { $group: { _id: "$cust_id", total: { $sum: "$amount" } } },
   { $sort: { total: -1 } }
])

The operation returns a cursor with the following documents:

{ "_id" : "xyz1", "total" : 100 }
{ "_id" : "abc1", "total" : 75 }

mongosh iterates the returned cursor automatically to print the results. See Iterate a Cursor in mongosh for handling cursors manually in mongosh.

Return Information on Aggregation Pipeline Operation

The following example uses db.collection.explain() to view detailed information regarding the execution plan of the aggregation pipeline.

db.orders.explain().aggregate([
   { $match: { status: "A" } },
   { $group: { _id: "$cust_id", total: { $sum: "$amount" } } },
   { $sort: { total: -1 } }
])

The operation returns a document that details the processing of the aggregation pipeline. For example, the document may show, among other details, which index, if any, the operation used. [1] If the orders collection is a sharded collection, the document would also show the division of labor between the shards and the merge operation, and for targeted queries, the targeted shards.

Note

The intended readers of the explain output document are humans, and not machines, and the output format is subject to change between releases.

You can view more verbose explain output by passing the executionStats or allPlansExecution explain modes to the db.collection.explain() method.
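For example, a sketch of requesting the executionStats verbosity for the pipeline above:

db.orders.explain( "executionStats" ).aggregate([
   { $match: { status: "A" } },
   { $group: { _id: "$cust_id", total: { $sum: "$amount" } } },
   { $sort: { total: -1 } }
])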

[1] Index Filters can affect the choice of index used. See Index Filters for details.

Interaction with allowDiskUseByDefault

Starting in MongoDB 6.0, pipeline stages that require more than 100 megabytes of memory to execute write temporary files to disk by default. In earlier versions of MongoDB, you must pass { allowDiskUse: true } to individual find and aggregate commands to enable this behavior.

Individual find and aggregate commands may override the allowDiskUseByDefault parameter by either:

  • Using { allowDiskUse: true } to allow writing temporary files out to disk when allowDiskUseByDefault is set to false
  • Using { allowDiskUse: false } to prohibit writing temporary files out to disk when allowDiskUseByDefault is set to true

Starting in MongoDB 4.2, the profiler log messages and diagnostic log messages include a usedDisk indicator if any aggregation stage wrote data to temporary files due to memory restrictions.
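For example, a sketch of both overrides on the orders collection:

// Opt in to temporary files for this aggregation when allowDiskUseByDefault is false:
db.orders.aggregate(
   [ { $sort: { amount: -1 } } ],
   { allowDiskUse: true }
)

// Forbid temporary files for this aggregation when allowDiskUseByDefault is true:
db.orders.aggregate(
   [ { $sort: { amount: -1 } } ],
   { allowDiskUse: false }
)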

Specify an Initial Batch Size

To specify an initial batch size for the cursor, use the following syntax for the cursor option:

cursor: { batchSize: <int> }

For example, the following aggregation operation specifies the initial batch size of 0 for the cursor:

db.orders.aggregate(
   [
      { $match: { status: "A" } },
      { $group: { _id: "$cust_id", total: { $sum: "$amount" } } },
      { $sort: { total: -1 } },
      { $limit: 2 }
   ],
   {
      cursor: { batchSize: 0 }
   }
)

The { cursor: { batchSize: 0 } } document, which specifies the size of the initial batch, indicates an empty first batch. This batch size is useful for quickly returning a cursor or failure message without doing significant server-side work.

To specify batch size for subsequent getMore operations (after the initial batch), use the batchSize field when running the getMore command.
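A hedged sketch, using the aggregate database command directly so that the raw cursor id is available for getMore:

// Initial batch of 0 documents; the response contains the cursor id.
var res = db.runCommand( {
   aggregate: "orders",
   pipeline: [ { $match: { status: "A" } } ],
   cursor: { batchSize: 0 }
} )

// Fetch the next batch, 100 documents at a time.
db.runCommand( { getMore: res.cursor.id, collection: "orders", batchSize: 100 } )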

mongosh iterates the returned cursor automatically to print the results. See Iterate a Cursor in mongosh for handling cursors manually in mongosh.

Specify a Collation

collation allows users to specify language-specific rules for string comparison, such as rules for lettercase and accent marks.

A collection myColl has the following documents:

{ _id: 1, category: "café", status: "A" }
{ _id: 2, category: "cafe", status: "a" }
{ _id: 3, category: "cafE", status: "a" }

The following aggregation operation includes the collation option:

db.myColl.aggregate(
   [ { $match: { status: "A" } }, { $group: { _id: "$category", count: { $sum: 1 } } } ],
   { collation: { locale: "fr", strength: 1 } }
);
Note

If performing an aggregation that involves multiple views, such as with $lookup or $graphLookup, the views must have the same collation.

For descriptions on the collation fields, see Collation Document.

Hint an Index

Create a collection foodColl with the following documents:

db.foodColl.insertMany( [
   { _id: 1, category: "cake", type: "chocolate", qty: 10 },
   { _id: 2, category: "cake", type: "ice cream", qty: 25 },
   { _id: 3, category: "pie", type: "boston cream", qty: 20 },
   { _id: 4, category: "pie", type: "blueberry", qty: 15 }
] )

Create the following indexes:

db.foodColl.createIndex( { qty: 1, type: 1 } );
db.foodColl.createIndex( { qty: 1, category: 1 } );

The following aggregation operation includes the hint option to force the usage of the specified index:

db.foodColl.aggregate(
   [ { $sort: { qty: 1 } }, { $match: { category: "cake", qty: 10 } }, { $sort: { type: -1 } } ],
   { hint: { qty: 1, category: 1 } }
)

Override readConcern

Use the readConcern option to specify the read concern for the operation.

You cannot use the $out or the $merge stage in conjunction with read concern "linearizable". That is, if you specify "linearizable" read concern for db.collection.aggregate(), you cannot include either stage in the pipeline.

The following operation on a replica set specifies a Read Concern of "majority" to read the most recent copy of the data confirmed as having been written to a majority of the nodes.

Note
  • To ensure that a single thread can read its own writes, use "majority" read concern and "majority" write concern against the primary of the replica set.
  • Starting in MongoDB 4.2, you can specify read concern level "majority" for an aggregation that includes an $out stage.
  • Regardless of the read concern level, the most recent data on a node may not reflect the most recent version of the data in the system.
db.restaurants.aggregate(
   [ { $match: { rating: { $lt: 5 } } } ],
   { readConcern: { level: "majority" } }
)

Specify a Comment

A collection named movies contains documents formatted as such:

{
   "_id" : ObjectId("599b3b54b8ffff5d1cd323d8"),
   "title" : "Jaws",
   "year" : 1975,
   "imdb" : "tt0073195"
}

The following aggregation operation finds movies created in 1995 and includes the comment option to provide tracking information in the logs, the db.system.profile collection, and db.currentOp.

db.movies.aggregate( [ { $match: { year : 1995 } } ], { comment : "match_all_movies_from_1995" } ).pretty()

On a system with profiling enabled, you can then query the system.profile collection to see all recent similar aggregations, as shown below:

db.system.profile.find( { "command.aggregate": "movies", "command.comment" : "match_all_movies_from_1995" } ).sort( { ts : -1 } ).pretty()

This will return a set of profiler results in the following format:

{
   "op" : "command",
   "ns" : "video.movies",
   "command" : {
      "aggregate" : "movies",
      "pipeline" : [
         {
            "$match" : {
               "year" : 1995
            }
         }
      ],
      "comment" : "match_all_movies_from_1995",
      "cursor" : {

      },
      "$db" : "video"
   },
   ...
}

An application can encode any arbitrary information in the comment in order to more easily trace or identify specific operations through the system. For instance, an application might attach a string comment incorporating its process ID, thread ID, client hostname, and the user who issued the command.

Use Variables in let

New in version 5.0.

To define variables that you can access elsewhere in the command, use the let option.

Note

To filter results using a variable in a pipeline $match stage, you must access the variable within the $expr operator.

Create a collection cakeSales containing sales for cake flavors:

db.cakeSales.insertMany( [
   { _id: 1, flavor: "chocolate", salesTotal: 1580 },
   { _id: 2, flavor: "strawberry", salesTotal: 4350 },
   { _id: 3, flavor: "cherry", salesTotal: 2150 }
] )

The following example:

  • retrieves the cake that has a salesTotal greater than 3000, which is the cake with an _id of 2
  • defines a targetTotal variable in let, which is referenced in $gt as $$targetTotal
db.cakeSales.aggregate(
   [
      { $match: {
         $expr: { $gt: [ "$salesTotal", "$$targetTotal" ] }
      } }
   ],
   { let: { targetTotal: 3000 } }
)