
serverStatus (database command)

Definition

serverStatus
The serverStatus command returns a document that provides an overview of the database's state. Monitoring applications can run this command at a regular interval to collect statistics about the instance.

Compatibility

This command is available in deployments hosted in the following environments:

  • MongoDB Atlas: The fully managed service for MongoDB deployments in the cloud

Note

This command is supported in all MongoDB Atlas clusters. For information on Atlas support for all commands, see Unsupported Commands.

  • MongoDB Enterprise: The subscription-based, self-managed version of MongoDB
  • MongoDB Community: The source-available, free-to-use, and self-managed version of MongoDB

Syntax

The command has the following syntax:

db.runCommand(
   {
     serverStatus: 1
   }
)

The value (i.e. 1 above) does not affect the operation of the command. The db.serverStatus() command returns a large amount of data. To return a specific object or field from the output, append the object or field name to the command.

For example:

db.runCommand({ serverStatus: 1}).metrics
db.runCommand({ serverStatus: 1}).metrics.commands
db.runCommand({ serverStatus: 1}).metrics.commands.update
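When a full serverStatus document has already been captured (for example, by a monitoring pipeline), the same dotted-path drill-down can be done client-side. A minimal sketch, not from the manual, using a hypothetical and heavily truncated document shape:

```javascript
// Sketch: resolve a dotted path such as "metrics.commands.update"
// against an already-captured serverStatus document.
function getPath(doc, path) {
  return path.split(".").reduce(
    (node, key) => (node == null ? undefined : node[key]),
    doc
  );
}

// Hypothetical, heavily truncated serverStatus output:
const status = {
  metrics: { commands: { update: { failed: 0, total: 25 } } },
};

console.log(getPath(status, "metrics.commands.update.total")); // 25
console.log(getPath(status, "metrics.commands.drop")); // undefined
```

Missing intermediate fields resolve to undefined rather than throwing, which is convenient because the output fields vary by version and node type.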

mongosh provides the db.serverStatus() wrapper for the serverStatus command.

Tip

Much of the output of serverStatus is also displayed dynamically by mongostat. See the mongostat command for more information.

Behavior

By default, serverStatus excludes some fields from its output.

To include fields that are excluded by default, specify the top-level field and set it to 1 in the command. To exclude fields that are included by default, specify the field and set it to 0. You can specify either top-level or embedded fields.

For example, the following operation excludes the repl, metrics, and locks information in the output.

db.runCommand( { serverStatus: 1, repl: 0, metrics: 0, locks: 0 } )

For example, the following operation excludes the embedded histograms field in the output.

db.runCommand( { serverStatus: 1, metrics: { query: { multiPlanner: { histograms: false } } } } )

The following example includes all repl information in the output:

db.runCommand( { serverStatus: 1, repl: 1 } )

Initialization

The statistics reported by serverStatus are reset when the mongod server is restarted.

This command will always return a value, even on a fresh database. The related command db.serverStatus() does not always return a value unless a counter has started to increment for a particular metric.

After you run an update query, db.serverStatus() and db.runCommand({ serverStatus: 1 }) both return the same values.

{
   arrayFilters : Long("0"),
   failed : Long("0"),
   pipeline : Long("0"),
   total : Long("1")
}

Include mirroredReads

By default, the mirroredReads information is not included in the output. To return mirroredReads information, you must explicitly specify the inclusion:

db.runCommand( { serverStatus: 1, mirroredReads: 1 } )

Output

Note

The output fields vary depending on the version of MongoDB, the underlying operating system platform, the storage engine, and the kind of node, including mongos, mongod, or replica set member.

For the serverStatus output specific to your version of MongoDB, refer to the appropriate version of the MongoDB Manual.

asserts

asserts: {
   regular: <num>,
   warning: <num>,
   msg: <num>,
   user: <num>,
   rollovers: <num>
},
asserts
A document that reports on the number of assertions raised since the MongoDB process started. Assertions are internal checks for errors that occur while the database is operating and can help diagnose issues with the MongoDB server. Non-zero asserts values indicate assertion errors, which are uncommon and not an immediate cause for concern. Errors that generate asserts can be recorded in the log file or returned directly to a client application for more information.
asserts.regular
The number of regular assertions raised since the MongoDB process started. Examine the MongoDB log for more information.
asserts.warning
This field always returns 0.
asserts.msg
The number of message assertions raised since the MongoDB process started. Examine the log file for more information about these messages.
asserts.user
The number of "user asserts" that have occurred since the last time the MongoDB process started. These are errors that a user may generate, such as out of disk space or duplicate key. You can prevent these assertions by fixing a problem with your application or deployment. Server logs may have limited information about "user asserts." To learn more about the source of "user asserts," check the application logs for application errors.
asserts.rollovers
The number of times that the assert counters have rolled over since the last time the MongoDB process started. The counters roll over to zero after 2^30 assertions. Use this value to provide context to the other values in the asserts data structure.
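Because the counters wrap at 2^30, a monitoring script comparing two samples should account for rollover when computing deltas. A sketch, not from the manual, over two hypothetical asserts samples:

```javascript
// Sketch: compare two serverStatus samples of the asserts document,
// accounting for counter rollover at 2^30.
const ROLLOVER = 2 ** 30;

function assertDelta(prev, curr) {
  // If the counter rolled over between samples, curr is smaller than prev.
  return curr >= prev ? curr - prev : curr + ROLLOVER - prev;
}

// Hypothetical samples taken one minute apart:
const sample1 = { regular: 0, warning: 0, msg: 0, user: 41, rollovers: 0 };
const sample2 = { regular: 0, warning: 0, msg: 0, user: 44, rollovers: 0 };

console.log(assertDelta(sample1.user, sample2.user)); // 3
```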

bucketCatalog

bucketCatalog: {
   numBuckets: <num>,
   numOpenBuckets: <num>,
   numIdleBuckets: <num>,
   memoryUsage: <num>,
   numBucketInserts: <num>,
   numBucketUpdates: <num>,
   numBucketsOpenedDueToMetadata: <num>,
   numBucketsClosedDueToCount: <num>,
   numBucketsClosedDueToSchemaChange: <num>,
   numBucketsClosedDueToSize: <num>,
   numBucketsClosedDueToTimeForward: <num>,
   numBucketsClosedDueToTimeBackward: <num>,
   numBucketsClosedDueToMemoryThreshold: <num>,
   numCommits: <num>,
   numMeasurementsGroupCommitted: <num>,
   numWaits: <num>,
   numMeasurementsCommitted: <num>,
   avgNumMeasurementsPerCommit: <num>,
   numBucketsClosedDueToReopening: <num>,
   numBucketsArchivedDueToMemoryThreshold: <num>,
   numBucketsArchivedDueToTimeBackward: <num>,
   numBucketsReopened: <num>,
   numBucketsKeptOpenDueToLargeMeasurements: <num>,
   numBucketsClosedDueToCachePressure: <num>,
   numBucketsFrozen: <num>,
   numCompressedBucketsConvertedToUnsorted: <num>,
   numBucketsFetched: <num>,
   numBucketsQueried: <num>,
   numBucketFetchesFailed: <num>,
   numBucketQueriesFailed: <num>,
   numBucketReopeningsFailed: <num>,
   numDuplicateBucketsReopened: <num>,
   stateManagement: {
      bucketsManaged: <num>,
      currentEra: <num>,
      erasWithRemainingBuckets: <num>,
      trackedClearOperations: <num>
   }
}

New in version 5.0.

A document that reports metrics related to the internal storage of time series collections.

The bucketCatalog returns the following metrics:

numBuckets
The total number of tracked buckets. Expected to be equal to the sum of numOpenBuckets and numArchivedBuckets.
numOpenBuckets
The number of tracked buckets with a full representation stored in-cache, ready to receive new documents.
numIdleBuckets
The number of buckets that are open and currently without an uncommitted document insertion pending. A subset of numOpenBuckets.
numArchivedBuckets
The number of tracked buckets with a minimal representation stored in-cache that can be efficiently reopened to receive new documents.
memoryUsage
The number of bytes used by internal bucketing data structures.
numBucketInserts
The number of new buckets created.
numBucketUpdates
The number of times an existing bucket was updated to include additional documents.
numBucketsOpenedDueToMetadata
The number of buckets opened because a document arrived with a metaField value that didn't match any currently open buckets.
numBucketsClosedDueToCount
The number of buckets closed due to reaching their document count limit.
numBucketsClosedDueToSchemaChange
The number of buckets closed because the schema of an incoming document was incompatible with that of the documents in the open bucket.
numBucketsClosedDueToSize
The number of buckets closed because an incoming document would make the bucket exceed its size limit.
numBucketsClosedDueToTimeForward
The number of buckets closed because a document arrived with a timeField value after the maximum time of all currently open buckets for that metaField value.
numBucketsClosedDueToTimeBackward
The number of buckets closed because a document arrived with a timeField value before the minimum time of all currently open buckets for that metaField value.
numBucketsClosedDueToMemoryThreshold
The number of buckets closed because the set of active buckets didn't fit within the allowed bucket catalog cache size.
numCommits
The number of bucket-level commits to the time series collection.
numMeasurementsGroupCommitted
The number of commits that included measurements from concurrent insert commands.
numWaits
The number of times an operation waited on another thread to either reopen a bucket or finish a group commit.
numMeasurementsCommitted
The number of documents committed to the time series collection.
avgNumMeasurementsPerCommit
The average number of documents per commit.
numBucketsClosedDueToReopening
The number of buckets closed because a suitable bucket was re-opened instead.
numBucketsArchivedDueToMemoryThreshold
The number of buckets archived because the set of active buckets didn't fit within the allowed bucket catalog cache size.
numBucketsArchivedDueToTimeBackward
The number of buckets archived because a document arrived with a timeField value before the minimum time of all currently open buckets for that metaField value.
numBucketsReopened
The number of buckets re-opened because a document arrived that didn't match any open buckets, but did match an existing non-full bucket.
numBucketsKeptOpenDueToLargeMeasurements
The number of buckets that would have been closed due to size, but were kept open because they did not contain the minimum number of documents required to achieve reasonable compression.
numBucketsClosedDueToCachePressure
The number of buckets closed because their size exceeds the bucket catalog's dynamic bucket size limit, derived from the available storage engine cache size and numBuckets. This limit is distinct from the maximum bucket size limit.
numBucketsFrozen
The number of frozen buckets. Buckets are frozen if attempting to compress the bucket would corrupt its contents.
numCompressedBucketsConvertedToUnsorted
The number of compressed buckets that contain documents not sorted by their respective timeField values.
numBucketsFetched
The number of archived buckets fetched to check whether they were suitable for re-opening.
numBucketsQueried
The total number of buckets queried to see if they could hold an incoming document.
numBucketFetchesFailed
The number of archived buckets fetched that were not suitable for re-opening.
numBucketQueriesFailed
The number of queries for a suitable open bucket that failed due to lack of candidate availability.
numBucketReopeningsFailed
The number of attempted bucket reopenings that failed for reasons including conflicts with concurrent operations, malformed buckets, and more.
numDuplicateBucketsReopened
The number of re-opened buckets that are duplicates of currently open buckets.
stateManagement
A document that tracks bucket catalog state information.
stateManagement.bucketsManaged
The total number of buckets that are being tracked for conflict management. This includes open buckets in the bucket catalog as well as any buckets that are being directly written to, including by update and delete commands.
stateManagement.currentEra
The current era of the bucket catalog. The bucket catalog starts at era 0 and increments when a bucket is cleared. Attempting to insert into a bucket either causes it to be removed if it was cleared, or updates it to the current era.
stateManagement.erasWithRemainingBuckets
The number of eras with tracked buckets.
stateManagement.trackedClearOperations
The number of times a set of buckets has been cleared but the removal of those buckets was deferred. This can happen due to events such as dropping a collection, moving a chunk in a sharded collection, or an election.

You can also use the $collStats aggregation pipeline stage to find time series metrics. To learn more, see storageStats Output on Time Series Collections.
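The description above notes that numBuckets is expected to equal the sum of numOpenBuckets and numArchivedBuckets, and that numIdleBuckets is a subset of numOpenBuckets. A quick client-side sanity check over a sample (the field values are hypothetical):

```javascript
// Sketch: verify the documented bucketCatalog count invariants
// on a sample document with hypothetical values.
const bucketCatalog = {
  numBuckets: 12,
  numOpenBuckets: 9,
  numArchivedBuckets: 3,
  numIdleBuckets: 4,
};

function bucketCountsConsistent(bc) {
  // numBuckets should equal numOpenBuckets + numArchivedBuckets,
  // and numIdleBuckets cannot exceed numOpenBuckets.
  return (
    bc.numBuckets === bc.numOpenBuckets + bc.numArchivedBuckets &&
    bc.numIdleBuckets <= bc.numOpenBuckets
  );
}

console.log(bucketCountsConsistent(bucketCatalog)); // true
```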

catalogStats

New in version 5.1.

catalogStats: {
   collections: <num>,
   capped: <num>,
   views: <num>,
   timeseries: <num>,
   internalCollections: <num>,
   internalViews: <num>,
   systemProfile: <num>
}
catalogStats
A document that reports statistics on collection usage via collection counts.
catalogStats.collections
The total number of user collections (not including system collections).
catalogStats.capped
The total number of capped user collections.
catalogStats.views
The total number of user views.
catalogStats.timeseries
The total number of time series collections.
catalogStats.internalCollections
The total number of system collections (collections on the config, admin, or local databases).
catalogStats.internalViews
The total number of views of system collections (collections on the config, admin, or local databases).
catalogStats.systemProfile
The total number of profile collections on all databases.

changeStreamPreImages

New in version 5.0.

changeStreamPreImages : {
   purgingJob : {
      totalPass : <num>,
      docsDeleted : <num>,
      bytesDeleted : <num>,
      scannedCollections : <num>,
      scannedInternalCollections : <num>,
      maxTimestampEligibleForTruncate : <timestamp>,
      maxStartWallTimeMillis : <num>,
      timeElapsedMillis : <num>
   },
   expireAfterSeconds : <num>
}

A document that reports metrics related to change stream pre-images.

changeStreamPreImages.purgingJob

New in version 7.1.

A document that reports metrics related to the purging jobs for change stream pre-images. Purging jobs are background processes that the system uses to remove pre-images asynchronously.

The changeStreamPreImages.purgingJob field returns the following metrics:

totalPass
Total number of deletion passes completed by the purging job.
docsDeleted
Cumulative number of pre-image documents deleted by the purging job.
bytesDeleted
Cumulative size in bytes of all documents deleted from all pre-image collections by the purging job.
scannedCollections
Cumulative number of pre-image collections scanned by the purging job. In single-tenant environments, this number is the same as totalPass since each tenant has one pre-image collection.
scannedInternalCollections
Cumulative number of internal pre-image collections scanned by the purging job. Internal collections are the collections within the pre-image collections stored in config.system.preimages.
maxTimestampEligibleForTruncate
Most recent timestamp up to which old pre-images can be truncated to reduce storage space. Pre-images older than maxTimestampEligibleForTruncate can be truncated. New in version 8.1.
maxStartWallTimeMillis
Maximum wall time in milliseconds from the first document of each pre-image collection.
timeElapsedMillis
Cumulative time in milliseconds of all deletion passes by the purging job.
changeStreamPreImages.expireAfterSeconds

New in version 7.1.

Amount of time in seconds that MongoDB retains pre-images. If expireAfterSeconds is not defined, this metric does not appear in the serverStatus output.

connections

connections : {
   current : <num>,
   available : <num>,
   totalCreated : <num>,
   rejected : <num>, // Added in MongoDB 6.3
   active : <num>,
   threaded : <num>,
   exhaustIsMaster : <num>,
   exhaustHello : <num>,
   awaitingTopologyChanges : <num>,
   loadBalanced : <num>,
   queuedForEstablishment : <num>, // Added in MongoDB 8.2 (also available in 8.1.1, 8.0.12, and 7.0.23)
   establishmentRateLimit : { // Added in MongoDB 8.2 (also available in 8.1.1, 8.0.12, and 7.0.23)
      rejected : <num>,
      exempted : <num>,
      interruptedDueToClientDisconnect : <num>
   }
}
connections
A document that reports on the status of the connections. Use these values to assess the current load and capacity requirements of the server.
connections.current

The number of incoming connections from clients to the database server. This number includes the current shell session. Consider the value of connections.available to add more context to this datum.

The value includes all incoming connections, including any shell connections or connections from other servers, such as replica set members or mongos instances.

connections.available
The number of unused incoming connections available. Consider this value in combination with the value of connections.current to understand the connection load on the database. See the UNIX ulimit Settings for Self-Managed Deployments document for more information about system thresholds on available connections.
connections.totalCreated
Count of all incoming connections created to the server. This number includes connections that have since closed.
connections.rejected

New in version 6.3.

The number of incoming connections the server rejected because the server doesn't have the capacity to accept additional connections or the net.maxIncomingConnections setting is reached.

connections.queuedForEstablishment

New in version 8.2. (also available in 8.1.1, 8.0.12, and 7.0.23)

The number of incoming connections currently queued and waiting for establishment. This metric is relevant when connection establishment rate limiting is enabled using the ingressConnectionEstablishmentRateLimiterEnabled parameter.

connections.establishmentRateLimit

New in version 8.2. (also available in 8.1.1, 8.0.12, and 7.0.23)

A document that contains metrics related to the ingress connection establishment rate limiter. These metrics provide insights into how the rate limiter handles connection requests when ingressConnectionEstablishmentRateLimiterEnabled is set to true. For more information on rate limiting, see Configure the Ingress Connection Establishment Rate Limiter.

connections.establishmentRateLimit.rejected

New in version 8.2. (also available in 8.1.1, 8.0.12, and 7.0.23)

The number of incoming connections the server rejects due to connection establishment rate limiting. This metric shows how many connection attempts the server rejected because they exceeded the rate limits set by the ingress connection establishment rate limiter parameters.

connections.establishmentRateLimit.exempted

New in version 8.2. (also available in 8.1.1, 8.0.12, and 7.0.23)

The number of incoming connections that bypassed the rate limiter because they originated from IP addresses or CIDR ranges specified in the ingressConnectionEstablishmentRateLimiterBypass parameter. The server does not rate limit these connections and establishes them immediately, regardless of current queue size or rate limits.

connections.establishmentRateLimit.interruptedDueToClientDisconnect

New in version 8.2. (also available in 8.1.1, 8.0.12, and 7.0.23)

The number of incoming connections that were interrupted while waiting in the establishment queue because the client disconnected before establishment could complete. A high value for this metric sometimes indicates that the client's connectTimeoutMS setting is too short relative to the queue wait time, which is affected by ingressConnectionEstablishmentMaxQueueDepth and ingressConnectionEstablishmentRatePerSec. If this value is high, consider adjusting these parameters using the following formula: maxQueueDepth < (establishmentRatePerSec / 1000) * (connectTimeoutMs - avgEstablishmentTimeMs).

connections.active
The number of active client connections to the server. Active client connections refers to client connections that currently have operations in progress.
connections.threaded

The number of incoming connections from clients that are assigned to threads that service client requests.

New in version 5.0.

connections.exhaustIsMaster

The number of connections whose last request was an isMaster request with exhaustAllowed.

Note

If you are running MongoDB 5.0 or later, do not use the isMaster command. Instead, use hello.

connections.exhaustHello

The number of connections whose last request was a hello request with exhaustAllowed.

New in version 5.0.

connections.awaitingTopologyChanges

The number of clients currently waiting in a hello or isMaster request for a topology change.

Note

If you are running MongoDB 5.0 or later, do not use the isMaster command. Instead, use hello.

connections.loadBalanced

New in version 5.3.

The current number of incoming connections received through the load balancer.
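As the manual suggests, connections.current and connections.available are most useful together. A sketch of a derived utilization figure, over a sample with hypothetical values:

```javascript
// Sketch: estimate connection utilization from a connections sample.
// current + available approximates the configured connection capacity.
const connections = { current: 120, available: 880, totalCreated: 5000, active: 15 };

function connectionUtilization(c) {
  return c.current / (c.current + c.available);
}

console.log(connectionUtilization(connections)); // 0.12
```

A monitoring system might alert when this ratio approaches 1, since the capacity is bounded by the ulimit settings and net.maxIncomingConnections discussed above.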

defaultRWConcern

The defaultRWConcern section provides information on the local copy of the global default read or write concern settings. The data may be stale or out of date. See getDefaultRWConcern for more information.

defaultRWConcern : {
   defaultReadConcern : {
      level : <string>
   },
   defaultWriteConcern : {
      w : <string> | <int>,
      wtimeout : <int>,
      j : <bool>
   },
   defaultWriteConcernSource : <string>,
   defaultReadConcernSource : <string>,
   updateOpTime : Timestamp,
   updateWallClockTime : Date,
   localUpdateWallClockTime : Date
}
defaultRWConcern
The last known global default read or write concern settings.
defaultRWConcern.defaultReadConcern

The last known global default read concern setting.

If serverStatus does not return this field, the global default read concern has either not been set or has not yet propagated to the instance.

defaultRWConcern.defaultReadConcern.level

The last known global default read concern level setting.

If serverStatus does not return this field, the global default for this setting has either not been set or has not yet propagated to the instance.

defaultRWConcern.defaultWriteConcern

The last known global default write concern setting.

If serverStatus does not return this field, the global default write concern has either not been set or has not yet propagated to the instance.

defaultRWConcern.defaultWriteConcern.w

The last known global default w setting.

If serverStatus does not return this field, the global default for this setting has either not been set or has not yet propagated to the instance.

defaultRWConcern.defaultWriteConcern.wtimeout

The last known global default wtimeout setting.

If serverStatus does not return this field, the global default for this setting has either not been set or has not yet propagated to the instance.

defaultRWConcern.defaultWriteConcernSource

The source of the default write concern. By default, the value is "implicit". Once you set the default write concern with setDefaultRWConcern, the value becomes "global".

New in version 5.0.

defaultRWConcern.defaultReadConcernSource

The source of the default read concern. By default, the value is "implicit". Once you set the default read concern with setDefaultRWConcern, the value becomes "global".

New in version 5.0.

defaultRWConcern.updateOpTime
The timestamp when the instance last updated its copy of any global read or write concern settings. If the defaultRWConcern.defaultReadConcern and defaultRWConcern.defaultWriteConcern fields are absent, this field indicates the timestamp when the defaults were last unset.
defaultRWConcern.updateWallClockTime
The wall clock time when the instance last updated its copy of any global read or write concern settings. If the defaultRWConcern.defaultReadConcern and defaultRWConcern.defaultWriteConcern fields are absent, this field indicates the time when the defaults were last unset.
defaultRWConcern.localUpdateWallClockTime
The local system wall clock time when the instance last updated its copy of any global read or write concern setting. If this field is the only field under defaultRWConcern, the instance has never had knowledge of a global default read or write concern setting.

electionMetrics

The electionMetrics section provides information on elections called by this mongod instance in a bid to become the primary:

electionMetrics : {
   stepUpCmd : {
      called : Long("<num>"),
      successful : Long("<num>")
   },
   priorityTakeover : {
      called : Long("<num>"),
      successful : Long("<num>")
   },
   catchUpTakeover : {
      called : Long("<num>"),
      successful : Long("<num>")
   },
   electionTimeout : {
      called : Long("<num>"),
      successful : Long("<num>")
   },
   freezeTimeout : {
      called : Long("<num>"),
      successful : Long("<num>")
   },
   numStepDownsCausedByHigherTerm : Long("<num>"),
   numCatchUps : Long("<num>"),
   numCatchUpsSucceeded : Long("<num>"),
   numCatchUpsAlreadyCaughtUp : Long("<num>"),
   numCatchUpsSkipped : Long("<num>"),
   numCatchUpsTimedOut : Long("<num>"),
   numCatchUpsFailedWithError : Long("<num>"),
   numCatchUpsFailedWithNewTerm : Long("<num>"),
   numCatchUpsFailedWithReplSetAbortPrimaryCatchUpCmd : Long("<num>"),
   averageCatchUpOps : <double>
}
electionMetrics.stepUpCmd

Metrics on elections that were called by the mongod instance as part of an election handoff when the primary stepped down.

The stepUpCmd includes both the number of elections called and the number of elections that succeeded.

electionMetrics.priorityTakeover

Metrics on elections that were called by the mongod instance because its priority is higher than the primary's.

The electionMetrics.priorityTakeover includes both the number of elections called and the number of elections that succeeded.

electionMetrics.catchUpTakeover

Metrics on elections called by the mongod instance because it is more current than the primary.

The catchUpTakeover includes both the number of elections called and the number of elections that succeeded.

electionMetrics.electionTimeout

Metrics on elections called by the mongod instance because it has not been able to reach the primary within settings.electionTimeoutMillis.

The electionTimeout includes both the number of elections called and the number of elections that succeeded.

electionMetrics.freezeTimeout

Metrics on elections called by the mongod instance after its freeze period (during which the member cannot seek an election) has expired.

The electionMetrics.freezeTimeout includes both the number of elections called and the number of elections that succeeded.

electionMetrics.numStepDownsCausedByHigherTerm
Number of times the mongod instance stepped down because it saw a higher term (specifically, other member(s) participated in additional elections).
electionMetrics.numCatchUps
Number of elections where the mongod instance as the newly-elected primary had to catch up to the highest known oplog entry.
electionMetrics.numCatchUpsSucceeded
Number of times the mongod instance as the newly-elected primary successfully caught up to the highest known oplog entry.
electionMetrics.numCatchUpsAlreadyCaughtUp
Number of times the mongod instance as the newly-elected primary concluded its catchup process because it was already caught up when elected.
electionMetrics.numCatchUpsSkipped
Number of times the mongod instance as the newly-elected primary skipped the catchup process.
electionMetrics.numCatchUpsTimedOut
Number of times the mongod instance as the newly-elected primary concluded its catchup process because of the settings.catchUpTimeoutMillis limit.
electionMetrics.numCatchUpsFailedWithError
Number of times the newly-elected primary's catchup process failed with an error.
electionMetrics.numCatchUpsFailedWithNewTerm
Number of times the newly-elected primary's catchup process concluded because another member(s) had a higher term (specifically, other member(s) participated in additional elections).
electionMetrics.numCatchUpsFailedWithReplSetAbortPrimaryCatchUpCmd
Number of times the newly-elected primary's catchup process concluded because the mongod received the replSetAbortPrimaryCatchUp command.
electionMetrics.averageCatchUpOps
Average number of operations applied during the newly-elected primary's catchup processes.
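The catch-up counters above lend themselves to a simple derived ratio. A sketch, with hypothetical counter values, of summarizing how often newly-elected primaries completed catch-up successfully:

```javascript
// Sketch: summarize catch-up outcomes from an electionMetrics sample.
// The counter values below are hypothetical.
const electionMetrics = {
  numCatchUps: 4,
  numCatchUpsSucceeded: 3,
  numCatchUpsTimedOut: 1,
  averageCatchUpOps: 12.5,
};

function catchUpSuccessRate(em) {
  // Guard against division by zero on a fresh instance.
  return em.numCatchUps === 0
    ? null
    : em.numCatchUpsSucceeded / em.numCatchUps;
}

console.log(catchUpSuccessRate(electionMetrics)); // 0.75
```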

extra_info

extra_info : {
   note : 'fields vary by platform',
   page_faults : <num>
},
extra_info
A document that provides additional information about the underlying system.
extra_info.note
A string with the text 'fields vary by platform'.
extra_info.page_faults

The total number of page faults. The extra_info.page_faults counter may increase dramatically during moments of poor performance and may correlate with limited memory environments and larger data sets. Limited and sporadic page faults do not necessarily indicate an issue.

Windows differentiates "hard" page faults involving disk I/O from "soft" page faults that only require moving pages in memory. MongoDB counts both hard and soft page faults in this statistic.
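Since extra_info.page_faults is a cumulative counter, a rate over an interval is usually more informative than the raw total. A sketch, with hypothetical sample values:

```javascript
// Sketch: derive a page-fault rate (faults per second) from two
// serverStatus samples taken secondsApart seconds apart.
function pageFaultRate(prevFaults, currFaults, secondsApart) {
  return (currFaults - prevFaults) / secondsApart;
}

// Hypothetical samples one minute apart:
console.log(pageFaultRate(1000, 1600, 60)); // 10
```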

flowControl

flowControl : {
enabled : <boolean>,
targetRateLimit : <int>,
timeAcquiringMicros : Long("<num>"),
locksPerKiloOp : <double>,
sustainerRate : <int>,
isLagged : <boolean>,
isLaggedCount : <int>,
isLaggedTimeMicros : Long("<num>")
},
flowControl
A document that returns statistics on the Flow Control. With flow control enabled, as the majority commit point lag grows close to the flowControlTargetLagSeconds, writes on the primary must obtain tickets before taking locks. As such, the metrics returned are meaningful when run on the primary.
flowControl.enabled

A boolean that indicates whether Flow Control is enabled (true) or disabled (false).

See also enableFlowControl.

flowControl.targetRateLimit

When run on the primary, the maximum number of tickets that can be acquired per second.

When run on a secondary, the returned number is a placeholder.

flowControl.timeAcquiringMicros

When run on the primary, the total time write operations have waited to acquire a ticket.

When run on a secondary, the returned number is a placeholder.

flowControl.locksPerKiloOp

When run on the primary, an approximation of the number of locks taken per 1000 operations.

When run on a secondary, the returned number is a placeholder.

flowControl.sustainerRate

When run on the primary, an approximation of operations applied per second by the secondary that is sustaining the commit point.

When run on a secondary, the returned number is a placeholder.

flowControl.isLagged

When run on the primary, a boolean that indicates whether flow control has engaged. Flow control engages when the majority committed lag is greater than some percentage of the configured flowControlTargetLagSeconds.

Replication lag can occur without engaging flow control. An unresponsive secondary might lag without the replica set receiving sufficient load to engage flow control, leaving the flowControl.isLagged value at false.

For additional information, see Flow Control.

flowControl.isLaggedCount

When run on a primary, the number of times flow control has engaged since the last restart. Flow control engages when the majority committed lag is greater than some percentage of the flowControlTargetLagSeconds.

When run on a secondary, the returned number is a placeholder.

flowControl.isLaggedTimeMicros

When run on the primary, the amount of time flow control has spent being engaged since the last restart. Flow control engages when the majority committed lag is greater than some percentage of the flowControlTargetLagSeconds.

When run on a secondary, the returned number is a placeholder.
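Run on a primary, the flowControl counters can be combined into a quick health summary, for example average engaged duration per engagement and the fraction of uptime spent engaged. A minimal sketch (field names follow the output shape above; sample values are illustrative, and drivers may return these counters as Long/BigInt values that need conversion):

```javascript
// Summarize flowControl metrics from a serverStatus document taken on a primary.
function summarizeFlowControl(status) {
  const fc = status.flowControl;
  return {
    engaged: fc.isLagged,
    // Average time spent engaged per engagement, in microseconds.
    avgLaggedMicros: fc.isLaggedCount > 0
      ? fc.isLaggedTimeMicros / fc.isLaggedCount
      : 0,
    // Fraction of server uptime spent with flow control engaged.
    laggedFractionOfUptime: fc.isLaggedTimeMicros / (status.uptimeMillis * 1000)
  };
}

const status = {
  uptimeMillis: 10000000, // roughly 2.8 hours
  flowControl: { isLagged: false, isLaggedCount: 4, isLaggedTimeMicros: 2000000000 }
};
console.log(summarizeFlowControl(status));
// { engaged: false, avgLaggedMicros: 500000000, laggedFractionOfUptime: 0.2 }
```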

globalLock

globalLock : {
totalTime : Long("<num>"),
currentQueue : {
total : <num>,
readers : <num>,
writers : <num>
},
activeClients : {
total : <num>,
readers : <num>,
writers : <num>
}
},
globalLock

A document that reports on the database's lock state.

Generally, the locks document provides more detailed data on lock uses.

globalLock.totalTime
The time, in microseconds, since the database last started and created the globalLock. This is approximately equivalent to the total server uptime.
globalLock.currentQueue
A document that provides information concerning the number of operations queued because of a lock.
globalLock.currentQueue.total

The total number of operations queued waiting for the lock (i.e., the sum of globalLock.currentQueue.readers and globalLock.currentQueue.writers).

A consistently small queue, particularly of shorter operations, should cause no concern. The globalLock.activeClients readers and writers information provides context for this data.

globalLock.currentQueue.readers
The number of operations that are currently queued and waiting for the read lock. A consistently small read queue, particularly of shorter operations, should cause no concern.
globalLock.currentQueue.writers
The number of operations that are currently queued and waiting for the write lock. A consistently small write queue, particularly of shorter operations, is no cause for concern.
globalLock.activeClients

A document that provides information about the number of connected clients and the read and write operations performed by these clients.

Use this data to provide context for the globalLock.currentQueue data.

globalLock.activeClients.total
The total number of internal client connections to the database including system threads as well as queued readers and writers. This metric will be higher than the total of activeClients.readers and activeClients.writers due to the inclusion of system threads.
globalLock.activeClients.readers
The number of the active client connections performing read operations.
globalLock.activeClients.writers
The number of active client connections performing write operations.
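As suggested above, currentQueue is most meaningful alongside activeClients. A minimal sketch of that comparison (field names follow the globalLock output shape; sample values are illustrative):

```javascript
// Relate queued operations to active readers/writers from globalLock.
function globalLockPressure(status) {
  const q = status.globalLock.currentQueue;
  const a = status.globalLock.activeClients;
  const active = a.readers + a.writers;
  return {
    queued: q.total,
    active,
    // Queued operations per active reader/writer; a persistently high
    // ratio suggests lock contention worth investigating via `locks`.
    queuedPerActive: active > 0 ? q.total / active : q.total
  };
}

const sample = {
  globalLock: {
    currentQueue:  { total: 6, readers: 2, writers: 4 },
    activeClients: { total: 51, readers: 2, writers: 1 }
  }
};
console.log(globalLockPressure(sample)); // { queued: 6, active: 3, queuedPerActive: 2 }
```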

indexBuilds

indexBuilds : {
total : <num>,
killedDueToInsufficientDiskSpace : <num>,
failedDueToDataCorruption : <num>
},
indexBuilds
Provides metrics on index builds after the server last started.
indexBuilds.total
Total number of index builds.
indexBuilds.killedDueToInsufficientDiskSpace

Total number of index builds that were ended because of insufficient disk space. Starting in MongoDB 7.1, you can set the minimum amount of disk space required for building indexes using the indexBuildMinAvailableDiskSpaceMB parameter.

New in version 7.1.

indexBuilds.failedDueToDataCorruption

Total number of index builds that failed because of data corruption.

New in version 7.1.

indexBulkBuilder

indexBulkBuilder: {
count: <long>,
resumed: <long>,
filesOpenedForExternalSort: <long>,
filesClosedForExternalSort: <long>,
spilledRanges: <long>,
bytesSpilledUncompressed: <long>,
bytesSpilled: <long>,
numSorted: <long>,
bytesSorted: <long>,
memUsage: <long>
}
indexBulkBuilder
Provides metrics for index bulk builder operations. Use these metrics to diagnose index build issues with createIndexes, collection cloning during initial sync, index builds that resume after startup, and statistics on disk usage by the external sorter.
indexBulkBuilder.bytesSpilled

New in version 6.0.4.

The number of bytes written to disk by the external sorter.

indexBulkBuilder.bytesSpilledUncompressed

New in version 6.0.4.

The number of bytes to be written to disk by the external sorter before compression.

indexBulkBuilder.count
The number of instances of the bulk builder created.
indexBulkBuilder.filesClosedForExternalSort
The number of times the external sorter closed a file handle to spill data to disk. Combine this value with filesOpenedForExternalSort to determine the number of open file handles in use by the external sorter.
indexBulkBuilder.filesOpenedForExternalSort
The number of times the external sorter opened a file handle to spill data to disk. Combine this value with filesClosedForExternalSort to determine the number of open file handles in use by the external sorter.
indexBulkBuilder.resumed
The number of times the bulk builder was created for a resumable index build.
indexBulkBuilder.spilledRanges

New in version 6.0.4.

The number of times the external sorter spilled to disk.

indexBulkBuilder.numSorted

New in version 6.3.

The total number of sorted documents.

indexBulkBuilder.bytesSorted

New in version 6.3.

The total number of bytes for sorted documents. For example, if a total of 10 documents were sorted and each document is 20 bytes, the total number of bytes sorted is 200.

indexBulkBuilder.memUsage

New in version 6.3.

The current bytes of memory allocated for building indexes.
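The filesOpenedForExternalSort/filesClosedForExternalSort pairing described above, and the two spill-byte counters, can be combined into a small diagnostic helper. A minimal sketch (field names follow the indexBulkBuilder output shape; sample values are illustrative, and Long values are coerced to plain numbers for simplicity):

```javascript
// Derive open file handles and spill compression ratio from indexBulkBuilder.
function externalSorterStats(status) {
  const b = status.indexBulkBuilder;
  return {
    // File handles currently held open by the external sorter.
    openFiles: Number(b.filesOpenedForExternalSort) - Number(b.filesClosedForExternalSort),
    // How much the spilled data shrank before hitting disk.
    compressionRatio: Number(b.bytesSpilled) > 0
      ? Number(b.bytesSpilledUncompressed) / Number(b.bytesSpilled)
      : 1
  };
}

const sample = {
  indexBulkBuilder: {
    filesOpenedForExternalSort: 8,
    filesClosedForExternalSort: 5,
    bytesSpilled: 1000000,
    bytesSpilledUncompressed: 4000000
  }
};
console.log(externalSorterStats(sample)); // { openFiles: 3, compressionRatio: 4 }
```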

indexStats

indexStats: {
count: Long("<num>"),
features: {
'2d': { count: Long("<num>"), accesses: Long("<num>") },
'2dsphere': { count: Long("<num>"), accesses: Long("<num>") },
'2dsphere_bucket': { count: Long("<num>"), accesses: Long("<num>") },
collation: { count: Long("<num>"), accesses: Long("<num>") },
compound: { count: Long("<num>"), accesses: Long("<num>") },
hashed: { count: Long("<num>"), accesses: Long("<num>") },
id: { count: Long("<num>"), accesses: Long("<num>") },
normal: { count: Long("<num>"), accesses: Long("<num>") },
partial: { count: Long("<num>"), accesses: Long("<num>") },
prepareUnique: { count: Long("<num>"), accesses: Long("<num>") }, // Added in 8.1 (and 8.0.4 and 7.0.14)
single: { count: Long("<num>"), accesses: Long("<num>") },
sparse: { count: Long("<num>"), accesses: Long("<num>") },
text: { count: Long("<num>"), accesses: Long("<num>") },
ttl: { count: Long("<num>"), accesses: Long("<num>") },
unique: { count: Long("<num>"), accesses: Long("<num>") },
wildcard: { count: Long("<num>"), accesses: Long("<num>") }
}
}
indexStats

A document that reports statistics on all indexes on databases and collections in non-system namespaces only. indexStats does not report statistics on indexes in the admin, local, and config databases.

New in version 6.0.

indexStats.count

The total number of indexes.

New in version 6.0.

indexStats.features

A document that provides counters for each index type and the number of accesses on each index. Each index type under indexStats.features has a count field that counts the total number of indexes for that type, and an accesses field that counts the number of accesses on that index.

New in version 6.0.
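Because each entry in indexStats.features carries both a count and an accesses counter, the output can be scanned for index feature types that exist but are never used. A minimal sketch (field names follow the indexStats output shape; sample values are illustrative, and Long values are coerced to plain numbers):

```javascript
// List index feature types with at least one index but zero recorded accesses.
function unusedIndexFeatures(indexStats) {
  return Object.entries(indexStats.features)
    .filter(([, v]) => Number(v.count) > 0 && Number(v.accesses) === 0)
    .map(([name]) => name);
}

const sample = {
  count: 5,
  features: {
    compound: { count: 2, accesses: 140 },
    ttl:      { count: 1, accesses: 0 },
    unique:   { count: 2, accesses: 35 }
  }
};
console.log(unusedIndexFeatures(sample)); // [ 'ttl' ]
```

Note these are server-wide feature counters, not per-index statistics; use the $indexStats aggregation stage to find individual unused indexes.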

Instance Information

host : <string>,
advisoryHostFQDNs : <array>,
version : <string>,
process : <'mongod'|'mongos'>,
service : <'router'|'shard'>,
pid : Long("<num>"),
uptime : <num>,
uptimeMillis : Long("<num>"),
uptimeEstimate : Long("<num>"),
localTime : ISODate("<Date>"),
host
The system's hostname. In Unix/Linux systems, this should be the same as the output of the hostname command.
advisoryHostFQDNs
An array of the system's fully qualified domain names (FQDNs).
version
The MongoDB version of the current MongoDB process.
process
The current MongoDB process. Possible values are: mongos or mongod.
service

The role of the current MongoDB process. Possible values are router or shard.

New in version 8.0.

pid
The process ID number.
uptime
The number of seconds that the current MongoDB process has been active.
uptimeMillis
The number of milliseconds that the current MongoDB process has been active.
uptimeEstimate
The uptime in seconds as calculated from MongoDB's internal coarse-grained time keeping system.
localTime
The ISODate representing the current time, according to the server, in UTC.

locks

locks : {
<type> : {
acquireCount : {
<mode> : Long("<num>"),
...
},
acquireWaitCount : {
<mode> : Long("<num>"),
...
},
timeAcquiringMicros : {
<mode> : Long("<num>"),
...
},
deadlockCount : {
<mode> : Long("<num>"),
...
}
},
...
},
locks

A document that reports for each lock <type>, data on lock <modes>.

The possible lock <types> are:

ParallelBatchWriterMode
Represents a lock for parallel batch writer mode. In earlier versions, PBWM information was reported as part of the Global lock information.
ReplicationStateTransition
Represents the lock taken for replica set member state transitions.
Global
Represents the global lock.
Database
Represents the database lock.
Collection
Represents the collection lock.
Mutex
Represents a mutex.
Metadata
Represents the metadata lock.
DDLDatabase
Represents a DDL database lock. New in version 7.1.
DDLCollection
Represents a DDL collection lock. New in version 7.1.
oplog
Represents the lock on the oplog.

The possible <modes> are:

R
Represents a Shared (S) lock.
W
Represents an Exclusive (X) lock.
r
Represents an Intent Shared (IS) lock.
w
Represents an Intent Exclusive (IX) lock.

All values are of the Long() type.

locks.<type>.acquireCount
Number of times the lock was acquired in the specified mode.
locks.<type>.acquireWaitCount
Number of times the locks.<type>.acquireCount lock acquisitions encountered waits because the locks were held in a conflicting mode.
locks.<type>.timeAcquiringMicros

Cumulative wait time in microseconds for the lock acquisitions.

locks.<type>.timeAcquiringMicros divided by locks.<type>.acquireWaitCount gives an approximate average wait time for the particular lock mode.

locks.<type>.deadlockCount
Number of times the lock acquisitions encountered deadlocks.
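The average-wait computation described under locks.<type>.timeAcquiringMicros can be sketched as a small helper over one lock type's document (field names follow the locks output shape; sample values are illustrative, and real output uses Long values that may need conversion):

```javascript
// Approximate average wait time per mode for one lock <type>:
// timeAcquiringMicros[mode] / acquireWaitCount[mode].
function avgWaitMicrosByMode(lockType) {
  const out = {};
  for (const [mode, waits] of Object.entries(lockType.acquireWaitCount || {})) {
    const micros = (lockType.timeAcquiringMicros || {})[mode] || 0;
    out[mode] = Number(waits) > 0 ? Number(micros) / Number(waits) : 0;
  }
  return out;
}

const collectionLock = {
  acquireCount:        { r: 1000, w: 300 },
  acquireWaitCount:    { r: 20, w: 5 },
  timeAcquiringMicros: { r: 4000, w: 2500 }
};
console.log(avgWaitMicrosByMode(collectionLock)); // { r: 200, w: 500 }
```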

logicalSessionRecordCache

logicalSessionRecordCache : {
activeSessionsCount : <num>,
sessionsCollectionJobCount : <num>,
lastSessionsCollectionJobDurationMillis : <num>,
lastSessionsCollectionJobTimestamp : <Date>,
lastSessionsCollectionJobEntriesRefreshed : <num>,
lastSessionsCollectionJobEntriesEnded : <num>,
lastSessionsCollectionJobCursorsClosed : <num>,
transactionReaperJobCount : <num>,
lastTransactionReaperJobDurationMillis : <num>,
lastTransactionReaperJobTimestamp : <Date>,
lastTransactionReaperJobEntriesCleanedUp : <num>,
sessionCatalogSize : <num>
},
logicalSessionRecordCache
Provides metrics around the caching of server sessions.
logicalSessionRecordCache.activeSessionsCount

The number of all active local sessions cached in memory by the mongod or mongos instance since the last refresh period.

logicalSessionRecordCache.sessionsCollectionJobCount

The number of times the refresh process has run on the config.system.sessions collection.

logicalSessionRecordCache.lastSessionsCollectionJobDurationMillis
The length in milliseconds of the last refresh.
logicalSessionRecordCache.lastSessionsCollectionJobTimestamp
The time at which the last refresh occurred.
logicalSessionRecordCache.lastSessionsCollectionJobEntriesRefreshed
The number of sessions that were refreshed during the last refresh.
logicalSessionRecordCache.lastSessionsCollectionJobEntriesEnded
The number of sessions that ended during the last refresh.
logicalSessionRecordCache.lastSessionsCollectionJobCursorsClosed
The number of cursors that were closed during the last config.system.sessions collection refresh.
logicalSessionRecordCache.transactionReaperJobCount
The number of times the transaction record cleanup process has run on the config.transactions collection.
logicalSessionRecordCache.lastTransactionReaperJobDurationMillis
The length (in milliseconds) of the last transaction record cleanup.
logicalSessionRecordCache.lastTransactionReaperJobTimestamp
The time of the last transaction record cleanup.
logicalSessionRecordCache.lastTransactionReaperJobEntriesCleanedUp
The number of entries in the config.transactions collection that were deleted during the last transaction record cleanup.
logicalSessionRecordCache.sessionCatalogSize

mem

mem : {
bits : <int>,
resident : <int>,
virtual : <int>,
supported : <boolean>
},
mem
A document that reports on the system architecture of the mongod and current memory use.
mem.bits
A number, either 64 or 32, that indicates whether the MongoDB instance is compiled for 64-bit or 32-bit architecture.
mem.resident
The value of mem.resident is roughly equivalent to the amount of RAM, in mebibyte (MiB), currently used by the database process. During normal use, this value tends to grow. In dedicated database servers, this number tends to approach the total amount of system memory.
mem.virtual
mem.virtual displays the quantity, in mebibyte (MiB), of virtual memory used by the mongod process.
mem.supported
A boolean that indicates whether the underlying system supports extended memory information. If this value is false and the system does not support extended memory information, then other mem values may not be accessible to the database server.
mem.note

The field mem.note appears if mem.supported is false.

The mem.note field contains the text: 'not all mem info support on this platform'.

metrics

metrics : {
abortExpiredTransactions: {
passes: <integer>,
successfulKills: <integer>,
timedOutKills: <integer>
},
apiVersions: {
<appName1>: <string>,
<appName2>: <string>,
<appName3>: <string>
},
aggStageCounters : {
<aggregation stage> : Long("<num>")
},
changeStreams: {
largeEventsFailed: Long("<num>"),
largeEventsSplit: Long("<num>"),
showExpandedEvents: Long("<num>")
},
commands: {
<command>: {
failed: Long("<num>"),
validator: {
total: Long("<num>"),
failed: Long("<num>"),
jsonSchema: Long("<num>")
},
total: Long("<num>"),
rejected: Long("<num>")
}
},
cursor : {
moreThanOneBatch : Long("<num>"),
timedOut : Long("<num>"),
totalOpened : Long("<num>"),
lifespan : {
greaterThanOrEqual10Minutes : Long("<num>"),
lessThan10Minutes : Long("<num>"),
lessThan15Seconds : Long("<num>"),
lessThan1Minute : Long("<num>"),
lessThan1Second : Long("<num>"),
lessThan30Seconds : Long("<num>"),
lessThan5Seconds : Long("<num>")
},
open : {
noTimeout : Long("<num>"),
pinned : Long("<num>"),
multiTarget : Long("<num>"),
singleTarget : Long("<num>"),
total : Long("<num>")
}
},
document : {
deleted : Long("<num>"),
inserted : Long("<num>"),
returned : Long("<num>"),
updated : Long("<num>")
},
dotsAndDollarsFields : {
inserts : Long("<num>"),
updates : Long("<num>")
},
getLastError : {
wtime : {
num : <num>,
totalMillis : <num>
},
wtimeouts : Long("<num>"),
default : {
unsatisfiable : Long("<num>"),
wtimeouts : Long("<num>")
}
},
mongos : {
cursor : {
moreThanOneBatch : Long("<num>"),
totalOpened : Long("<num>")
}
},
network : { // Added in MongoDB 6.3
totalEgressConnectionEstablishmentTimeMillis : Long("<num>"),
totalIngressTLSConnections : Long("<num>"),
totalIngressTLSHandshakeTimeMillis : Long("<num>"),
totalTimeForEgressConnectionAcquiredToWireMicros : Long("<num>"),
totalTimeToFirstNonAuthCommandMillis : Long("<num>"),
averageTimeToCompletedTLSHandshakeMicros : Long("<num>"), // Added in MongoDB 8.2
averageTimeToCompletedHelloMicros : Long("<num>"), // Added in MongoDB 8.2
averageTimeToCompletedAuthMicros : Long("<num>") // Added in MongoDB 8.2
},
operation : {
killedDueToClientDisconnect : Long("<num>"), // Added in MongoDB 7.1
killedDueToDefaultMaxTimeMSExpired : Long("<num>"),
killedDueToMaxTimeMSExpired : Long("<num>"), // Added in MongoDB 7.2
killedDueToRangeDeletion: Long("<num>"), // Added in MongoDB 8.2
numConnectionNetworkTimeouts : Long("<num>"), // Added in MongoDB 6.3
scanAndOrder : Long("<num>"),
totalTimeWaitingBeforeConnectionTimeoutMillis : Long("<num>"), // Added in MongoDB 6.3
unsendableCompletedResponses : Long("<num>"), // Added in MongoDB 7.1
writeConflicts : Long("<num>")
},
operatorCounters : {
expressions : {
<command> : Long("<num>")
},
match : {
<command> : Long("<num>")
}
},
query: {
allowDiskUseFalse: Long("<num>"),
updateOneOpStyleBroadcastWithExactIDCount: Long("<num>"),
bucketAuto: {
spilledBytes: Long("<num>"),
spilledDataStorageSize: Long("<num>"),
spilledRecords: Long("<num>"),
spills: Long("<num>")
},
lookup: {
hashLookup: Long("<num>"),
hashLookupSpillToDisk: Long("<num>"),
indexedLoopJoin: Long("<num>"),
nestedLoopJoin: Long("<num>")
},
multiPlanner: {
classicCount: Long("<num>"),
classicMicros: Long("<num>"),
classicWorks: Long("<num>"),
sbeCount: Long("<num>"),
sbeMicros: Long("<num>"),
sbeNumReads: Long("<num>"),
histograms: {
classicMicros: [
{ lowerBound: Long("0"), count: Long("<num>") },
{ < Additional histogram groups not shown. > },
{ lowerBound: Long("1073741824"), count: Long("<num>") }
],
classicNumPlans: [
{ lowerBound: Long("0"), count: Long("<num>") },
{ < Additional histogram groups not shown. > },
{ lowerBound: Long("32"), count: Long("<num>") }
],
classicWorks: [
{ lowerBound: Long("0"), count: Long("<num>") },
{ < Additional histogram groups not shown. > },
{ lowerBound: Long("32768"), count: Long("<num>") }
],
sbeMicros: [
{ lowerBound: Long("0"), count: Long("<num>") },
{ < Additional histogram groups not shown. > },
{ lowerBound: Long("1073741824"), count: Long("<num>") }
],
sbeNumPlans: [
{ lowerBound: Long("0"), count: Long("<num>") },
{ < Additional histogram groups not shown. > },
{ lowerBound: Long("32"), count: Long("<num>") }
],
sbeNumReads: [
{ lowerBound: Long("0"), count: Long("<num>") },
{ < Additional histogram groups not shown. > },
{ lowerBound: Long("32768"), count: Long("<num>") }
]
}
},
planCache: {
classic: { hits: Long("<num>"), misses: Long("<num>"), replanned: Long("<num>") },
sbe: { hits: Long("<num>"), misses: Long("<num>"), replanned: Long("<num>") }
},
queryFramework: {
aggregate: {
classicHybrid: Long("<num>"),
classicOnly: Long("<num>"),
cqf: Long("<num>"),
sbeHybrid: Long("<num>"),
sbeOnly: Long("<num>")
},
find: { classic: Long("<num>"), cqf: Long("<num>"), sbe: Long("<num>") }
}
},
queryExecutor: {
scanned : Long("<num>"),
scannedObjects : Long("<num>"),
collectionScans : {
nonTailable : Long("<num>"),
total : Long("<num>")
},
profiler : {
collectionScans : {
nonTailable : Long("<num>"),
tailable : Long("<num>"),
total : Long("<num>")
}
}
},
record : {
moves : Long("<num>")
},
repl : {
executor : {
pool : {
inProgressCount : <num>
},
queues : {
networkInProgress : <num>,
sleepers : <num>
},
unsignaledEvents : <num>,
shuttingDown : <boolean>,
networkInterface : <string>
},
apply : {
attemptsToBecomeSecondary : Long("<num>"),
batchSize: <num>,
batches : {
num : <num>,
totalMillis : <num>
},
ops : Long("<num>")
},
write : {
batchSize: <num>,
batches : {
num : <num>,
totalMillis : <num>
}
},
buffer : {
write: {
count : Long("<num>"),
maxSizeBytes : Long("<num>"),
sizeBytes : Long("<num>")
},
apply: {
count : Long("<num>"),
sizeBytes : Long("<num>"),
maxSizeBytes : Long("<num>"),
maxCount: Long("<num>")
}
},
initialSync : {
completed : Long("<num>"),
failedAttempts : Long("<num>"),
failures : Long("<num>")
},
network : {
bytes : Long("<num>"),
getmores : {
num : <num>,
totalMillis : <num>
},
notPrimaryLegacyUnacknowledgedWrites : Long("<num>"),
notPrimaryUnacknowledgedWrites : Long("<num>"),
oplogGetMoresProcessed : {
num : <num>,
totalMillis : <num>
},
ops : Long("<num>"),
readersCreated : Long("<num>"),
replSetUpdatePosition : {
num : Long("<num>")
}
},
reconfig : {
numAutoReconfigsForRemovalOfNewlyAddedFields : Long("<num>")
},
stateTransition : {
lastStateTransition : <string>,
totalOperationsKilled : Long("<num>"),
totalOperationsRunning : Long("<num>")
},
syncSource : {
numSelections : Long("<num>"),
numTimesChoseSame : Long("<num>"),
numTimesChoseDifferent : Long("<num>"),
numTimesCouldNotFind : Long("<num>")
},
waiters : {
opTime : Long("<num>"),
replication : Long("<num>"),
replCoordMutexTotalWaitTimeInOplogServerStatusMillis: Long("<num>")
}
},
storage : {
freelist : {
search : {
bucketExhausted : <num>,
requests : <num>,
scanned : <num>
}
}
},
ttl : {
deletedDocuments : Long("<num>"),
passes : Long("<num>"),
subPasses : Long("<num>")
}
}
metrics
A document that returns various statistics that reflect the current use and state of a running mongod instance.
metrics.abortExpiredTransactions
Document that returns statistics on the current state of the abortExpiredTransactions thread.
metrics.abortExpiredTransactions.passes

Indicates the number of successful passes aborting transactions older than the transactionLifetimeLimitSeconds parameter.

If the passes value stops incrementing, it indicates that the abortExpiredTransactions thread may be stuck.

metrics.abortExpiredTransactions.successfulKills

Number of expired transactions successfully ended by MongoDB.

A session is checked out from a session pool to run database operations.

AbortExpiredTransactionsSessionCheckoutTimeout sets the maximum number of milliseconds for a session to be checked out when attempting to end an expired transaction.

If the expired transaction is successfully ended, MongoDB increments metrics.abortExpiredTransactions.successfulKills. If the transaction isn't successfully ended because it timed out when attempting to check out a session, MongoDB increments metrics.abortExpiredTransactions.timedOutKills.

New in version 8.1. (Also available in 8.0.13.)

metrics.abortExpiredTransactions.timedOutKills

Number of expired transactions unsuccessfully ended by MongoDB because it timed out when attempting to check out a session.

A session is checked out from a session pool to run database operations.

AbortExpiredTransactionsSessionCheckoutTimeout sets the maximum number of milliseconds for a session to be checked out when attempting to end an expired transaction.

If the expired transaction is successfully ended, MongoDB increments metrics.abortExpiredTransactions.successfulKills. If the transaction isn't successfully ended because it timed out when attempting to check out a session, MongoDB increments metrics.abortExpiredTransactions.timedOutKills.

New in version 8.1. (Also available in 8.0.13.)

metrics.aggStageCounters

A document that reports on the use of aggregation pipeline stages. The fields in metrics.aggStageCounters are the names of aggregation pipeline stages. For each pipeline stage, serverStatus reports the number of times that stage has been executed.

Updated in version 5.2 (and 5.0.6).

metrics.apiVersions

A document that contains:

  • The name of each client application
  • The Stable API version that each application was configured with within the last 24-hour period

Consider the following when viewing metrics.apiVersions:

  • The possible returned values for each appname are:

    • default: The command was issued without a Stable API version specified.
    • 1: The command was issued with Stable API version 1.

    Note

    You may see both return values for an appname because you can specify a Stable API version at the command level. Some of your commands may have been issued with no Stable API version, while others were issued with version 1.

  • API version metrics are retained for 24 hours. If no commands are issued with a specific API version from an application in the past 24 hours, that appname and API version will be removed from the metrics. This also applies to the default API version metric.
  • Set the appname when connecting to a MongoDB instance by specifying the appname in the connection URI. For example, ?appName=ZZZ sets the appname to ZZZ.
  • Drivers accessing the Stable API can set a default appname.
  • If no appname is configured, a default value will be automatically populated based on the product. For example, for a MongoDB Compass connection with no appname in the URI, the metric returns: 'MongoDB Compass': [ 'default' ].
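For example, a connection string that tags commands from a hypothetical application named inventoryService (host and port are placeholders):

mongodb://mongodb0.example.com:27017/?appName=inventoryService

Commands issued over this connection then appear under metrics.apiVersions as 'inventoryService': [ ... ].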

New in version 5.0.

metrics.operatorCounters
A document that reports on the use of aggregation pipeline operators and expressions.
metrics.operatorCounters.expressions

A document with a number that indicates how often Expressions ran.

To get metrics for a specific operator, such as the greater-than operator ($gt), append the operator to the command:

db.runCommand( { serverStatus: 1 } ).metrics.operatorCounters.expressions.$gt

New in version 5.0.

metrics.operatorCounters.match

A document with a number that indicates how often match expressions ran.

Match expression operators also increment as part of an aggregation pipeline $match stage. If the $match stage uses the $expr operator, the counter for $expr increments, but the component counters do not increment.

Consider the following query:

db.matchCount.aggregate(
[
{ $match:
{ $expr: { $gt: [ "$_id", 0 ] } }
}
]
)

The counter for $expr increments when the query runs. The counter for $gt does not.

metrics.changeStreams.largeEventsSplit

The number of change stream events larger than 16 MB that were split into smaller fragments. Events are only split if you use the $changeStreamSplitLargeEvent pipeline stage.

New in version 7.0. (Also available in 6.0.9.)

metrics.changeStreams

A document that reports information about change stream events larger than 16 MB.

New in version 7.0.

metrics.changeStreams.largeEventsFailed

The number of change stream events that caused a BSONObjectTooLarge exception because the event was larger than 16 MB. To prevent the exception, see $changeStreamSplitLargeEvent.

New in version 7.0. (Also available in 6.0.9 and 5.0.19.)

metrics.changeStreams.showExpandedEvents

The number of change stream cursors with the showExpandedEvents option set to true.

The counter for showExpandedEvents increments when you:

  • Open a change stream cursor.
  • Run the explain command on a change stream cursor.

New in version 7.1.

metrics.commands

A document that reports on the use of database commands. The fields in metrics.commands are the names of database commands. For each command, the serverStatus reports the total number of executions and the number of failed executions.

metrics.commands includes replSetStepDownWithForce (i.e. the replSetStepDown command with force: true) as well as the overall replSetStepDown. In earlier versions, the command reported only overall replSetStepDown metrics.

metrics.commands.<command>.failed
The number of times <command> failed on this mongod.
metrics.commands.<create or collMod>.validator
For the create and collMod commands, a document that reports on non-empty validator objects passed to the command to specify validation rules or expressions for the collection.
metrics.commands.<create or collMod>.validator.total
The number of times a non-empty validator object was passed as an option to the command on this mongod.
metrics.commands.<create or collMod>.validator.failed
The number of times a call to the command on this mongod failed with a non-empty validator object due to a schema validation error.
metrics.commands.<create or collMod>.validator.jsonSchema
The number of times a validator object with a $jsonSchema was passed as an option to the command on this mongod.
metrics.commands.<command>.total
The number of times <command> executed on this mongod.
metrics.commands.<command>.rejected

The number of times <command> was rejected on this mongod because the command or operation has an associated query setting where the reject field is true.

To set the reject field, use setQuerySettings.

New in version 8.0.

metrics.commands.update.pipeline

The number of times an aggregation pipeline was used to update documents on this mongod. Subtract this value from the total number of updates to get the number of updates made with document syntax.

The pipeline counter is only available for update and findAndModify operations.

metrics.commands.findAndModify.pipeline

The number of times an aggregation pipeline was used with findAndModify() to update documents on this mongod.

The pipeline counter is only available for update and findAndModify operations.

metrics.commands.update.arrayFilters

The number of times an arrayFilter was used to update documents on this mongod.

The arrayFilters counter is only available for update and findAndModify operations.

metrics.commands.findAndModify.arrayFilters

The number of times an arrayFilter was used with findAndModify() to update documents on this mongod.

The arrayFilters counter is only available for update and findAndModify operations.
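The subtraction described under metrics.commands.update.pipeline can be sketched as a small helper (field names follow the metrics.commands output shape; sample values are illustrative, and Long values are coerced to plain numbers):

```javascript
// Split update executions into pipeline-style and document-syntax updates:
// documentSyntax = total - pipeline.
function updateStyleBreakdown(status) {
  const u = status.metrics.commands.update;
  return {
    total: Number(u.total),
    pipeline: Number(u.pipeline),
    // Updates issued with document (replacement or update-operator) syntax.
    documentSyntax: Number(u.total) - Number(u.pipeline)
  };
}

const sample = { metrics: { commands: { update: { total: 120, pipeline: 45, failed: 2 } } } };
console.log(updateStyleBreakdown(sample)); // { total: 120, pipeline: 45, documentSyntax: 75 }
```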

metrics.document
A document that reflects document access and modification patterns. Compare these values to the data in the opcounters document, which tracks the total number of operations.
metrics.document.deleted
The total number of documents deleted.
metrics.document.inserted
The total number of documents inserted.
metrics.document.returned
The total number of documents returned by queries.
metrics.document.updated
The total number of documents matched for update operations. This value is not necessarily the same as the number of documents modified by updates.
metrics.dotsAndDollarsFields

A document with a number that indicates how often insert or update operations ran using a dollar ($) prefixed name. The value does not report the exact number of operations.

When an upsert operation creates a new document, it is considered to be an insert rather than an update.

New in version 5.0.

metrics.executor
A document that reports on various statistics for the replication executor.
metrics.getLastError
A document that reports on write concern use.
metrics.getLastError.wtime
A document that reports write concern operation counts with a w argument greater than 1.
metrics.getLastError.wtime.num
The total number of operations with a specified write concern (i.e. w) that wait for one or more members of a replica set to acknowledge the write operation (i.e. a w value greater than 1.)
metrics.getLastError.wtime.totalMillis
The total amount of time in milliseconds that the mongod has spent performing write concern operations with a write concern (i.e. w) that waits for one or more members of a replica set to acknowledge the write operation (i.e. a w value greater than 1).
metrics.getLastError.wtimeouts
The number of times that write concern operations have timed out as a result of the wtimeout threshold. This number increments for both default and non-default write concern specifications.
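
Dividing metrics.getLastError.wtime.totalMillis by metrics.getLastError.wtime.num gives the average time writes spent waiting for replica set acknowledgment. A small sketch (sample values are illustrative, not real server output):

```javascript
// Sketch: average write concern wait, in milliseconds, for writes with
// w greater than 1. Field names match the wtime document above; sample
// values are illustrative.
function avgWriteConcernWaitMs(wtime) {
  return wtime.num === 0 ? 0 : wtime.totalMillis / wtime.num;
}

const wtime = { num: 4, totalMillis: 20 };
console.log(avgWriteConcernWaitMs(wtime)); // 5 ms per acknowledged write
```
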
metrics.getLastError.default

A document that reports on when a default write concern was used (meaning, a non-clientSupplied write concern). The possible origins of a default write concern are:

  • implicitDefault
  • customDefault
  • getLastErrorDefaults

Refer to the following list for information on each possible write concern origin, or provenance:

  • clientSupplied: The write concern was specified in the application.
  • customDefault: The write concern originated from a custom defined default value. See setDefaultRWConcern.
  • getLastErrorDefaults: The write concern originated from the replica set's settings.getLastErrorDefaults field.
  • implicitDefault: The write concern originated from the server in absence of all other write concern specifications.
metrics.getLastError.default.unsatisfiable
Number of times that a non-clientSupplied write concern returned the UnsatisfiableWriteConcern error code.
metrics.getLastError.default.wtimeouts
Number of times a non-clientSupplied write concern timed out.
metrics.mongos
A document that contains metrics about mongos.
metrics.mongos.cursor
A document that contains metrics for cursors used by mongos.
metrics.mongos.cursor.moreThanOneBatch

The total number of cursors that have returned more than one batch since mongos started. Additional batches are retrieved using the getMore command.

New in version 5.0.

metrics.mongos.cursor.totalOpened

The total number of cursors that have been opened since mongos started, including cursors currently open. Differs from metrics.cursor.open.total, which is the number of currently open cursors only.

New in version 5.0.

metrics.network

New in version 6.3.

A document that reports server network metrics.

metrics.network.totalEgressConnectionEstablishmentTimeMillis

New in version 6.3.

The total time in milliseconds to establish server connections.

metrics.network.totalIngressTLSConnections

New in version 6.3.

The total number of incoming connections to the server that use TLS. The number is cumulative and is the total after the server was started.

metrics.network.totalIngressTLSHandshakeTimeMillis

New in version 6.3.

The total time in milliseconds that incoming connections to the server have to wait for the TLS network handshake to complete. The number is cumulative and is the total after the server was started.

metrics.network.totalTimeForEgressConnectionAcquiredToWireMicros

New in version 6.3.

The total time in microseconds that operations wait between acquisition of a server connection and writing the bytes to send to the server over the network. The number is cumulative and is the total after the server was started.

metrics.network.totalTimeToFirstNonAuthCommandMillis

New in version 6.3.

The total time in milliseconds from accepting incoming connections to the server and receiving the first operation that isn't part of the connection authentication handshake. The number is cumulative and is the total after the server was started.

metrics.network.averageTimeToCompletedTLSHandshakeMicros

New in version 8.2 (also available in 8.1.1).

The average time in microseconds that it takes to complete a TLS handshake for incoming connections.

metrics.network.averageTimeToCompletedHelloMicros

New in version 8.2 (also available in 8.1.1).

The time in microseconds between the beginning of connection establishment and the completion of the hello command. You can use this metric to tune the ingressConnectionEstablishmentMaxQueueDepth and ingressConnectionEstablishmentRatePerSec to ensure that there is proper time allotted to complete the connection establishment after exiting the queue.

metrics.network.averageTimeToCompletedAuthMicros

New in version 8.2 (also available in 8.1.1).

The time in microseconds the SASL auth exchange takes to be completed after the beginning of connection establishment.

metrics.operation
A document that holds counters for several types of update and query operations that MongoDB handles using special operation types.
metrics.operation.killedDueToClientDisconnect

New in version 7.1.

Total number of operations cancelled before completion because the client disconnected.

metrics.operation.killedDueToDefaultMaxTimeMSExpired

New in version 8.0.

Total number of operations that timed out due to the cluster-level default timeout, defaultMaxTimeMS.

metrics.operation.killedDueToMaxTimeMSExpired

New in version 7.2.

Total number of operations that timed out due to the operation-level timeout, cursor.maxTimeMS().

metrics.operation.killedDueToRangeDeletion

New in version 8.2.

Total number of operations terminated because of orphan range cleanup. To learn more, see terminateSecondaryReadsOnOrphanCleanup.

metrics.operation.numConnectionNetworkTimeouts

New in version 6.3.

Total number of operations that failed because of server connection acquisition time out errors.

metrics.operation.scanAndOrder
The total number of queries that return sorted results that cannot perform the sort operation using an index.
metrics.operation.totalTimeWaitingBeforeConnectionTimeoutMillis

New in version 6.3.

Total time in milliseconds that operations waited before failing because of server connection acquisition time out errors.

metrics.operation.unsendableCompletedResponses

New in version 7.1.

Total number of operations that completed server-side but did not send their response to the client because the connection between the client and server failed or disconnected.

metrics.operation.writeConflicts
The total number of queries that encountered write conflicts.
metrics.query.bucketAuto.spilledBytes

The number of in-memory bytes spilled to disk by the $bucketAuto stage.

New in version 8.2.

metrics.query.bucketAuto.spilledDataStorageSize

The total disk space, in bytes, used by the spilled data from the $bucketAuto stage.

New in version 8.2.

metrics.query.bucketAuto.spilledRecords

The number of records spilled to disk by the $bucketAuto stage.

New in version 8.2.

metrics.query.bucketAuto.spills

The number of times the $bucketAuto stage spilled to disk.

New in version 8.2.
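
Together, these four counters describe how much $bucketAuto work spills to disk: spilledBytes / spills gives the average spill size, and spilledDataStorageSize / spilledBytes approximates the on-disk compression of spilled data. An illustrative sketch (sample values are hypothetical):

```javascript
// Sketch: summarize $bucketAuto spill behavior from the four counters
// above. Sample values are hypothetical.
function bucketAutoSpillStats(b) {
  return {
    // average bytes written per spill event
    avgBytesPerSpill: b.spills === 0 ? 0 : b.spilledBytes / b.spills,
    // on-disk size relative to in-memory size of the spilled data
    onDiskRatio: b.spilledBytes === 0 ? 0 : b.spilledDataStorageSize / b.spilledBytes,
  };
}

const bucketAuto = { spills: 4, spilledBytes: 4000, spilledRecords: 100, spilledDataStorageSize: 1000 };
console.log(bucketAutoSpillStats(bucketAuto)); // { avgBytesPerSpill: 1000, onDiskRatio: 0.25 }
```
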

metrics.query.lookup

A document that provides detailed data on the use of the $lookup stage with the slot-based query execution engine. To learn more, see $lookup Optimization.

These metrics are primarily intended for internal use by MongoDB.

New in version 6.1.

metrics.query.multiPlanner

Provides detailed query planning data for the slot-based query execution engine and the classic query engine. For more information on the slot-based query execution engine see: Slot-Based Query Execution Engine Pipeline Optimizations.

These metrics are primarily intended for internal use by MongoDB.

New in version 6.0.0 and 5.0.9.

metrics.query.sort

A document that holds counters related to sort stages.

New in version 6.2.

metrics.query.sort.spillToDisk

The total number of writes to disk caused by sort stages.

New in version 6.2.

metrics.query.sort.totalBytesSorted

The total amount of sorted data in bytes.

New in version 6.2.

metrics.query.sort.totalKeysSorted

The total number of keys used in sorts.

New in version 6.2.

query.multiPlanner.classicMicros
Aggregates the total number of microseconds spent in the classic multiplanner.
query.multiPlanner.classicWorks
Aggregates the total number of "works" performed in the classic multiplanner.
query.multiPlanner.classicCount
Aggregates the total number of invocations of the classic multiplanner.
query.multiPlanner.sbeMicros
Aggregates the total number of microseconds spent in the slot-based execution engine multiplanner.
query.multiPlanner.sbeNumReads
Aggregates the total number of reads done in the slot-based execution engine multiplanner.
query.multiPlanner.sbeCount
Aggregates the total number of invocations of the slot-based execution engine multiplanner.
query.multiPlanner.histograms.classicMicros
A histogram measuring the number of microseconds spent in an invocation of the classic multiplanner.
query.multiPlanner.histograms.classicWorks
A histogram measuring the number of "works" performed during an invocation of the classic multiplanner.
query.multiPlanner.histograms.classicNumPlans
A histogram measuring the number of plans in the candidate set during an invocation of the classic multiplanner.
query.multiPlanner.histograms.sbeMicros
A histogram measuring the number of microseconds spent in an invocation of the slot-based execution engine multiplanner.
query.multiPlanner.histograms.sbeNumReads
A histogram measuring the number of reads during an invocation of the slot-based execution engine multiplanner.
query.multiPlanner.histograms.sbeNumPlans
A histogram measuring the number of plans in the candidate set during an invocation of the slot-based execution engine multiplanner.
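
The per-engine totals and invocation counts above can be combined to estimate average planning cost per invocation. A sketch using hypothetical counter values:

```javascript
// Sketch: average multiplanner cost per invocation for each engine,
// from the cumulative counters above. Sample values are hypothetical.
function avgPlanningMicros(mp) {
  return {
    classic: mp.classicCount === 0 ? 0 : mp.classicMicros / mp.classicCount,
    sbe: mp.sbeCount === 0 ? 0 : mp.sbeMicros / mp.sbeCount,
  };
}

const multiPlanner = { classicMicros: 9000, classicCount: 30, sbeMicros: 4000, sbeCount: 20 };
console.log(avgPlanningMicros(multiPlanner)); // { classic: 300, sbe: 200 }
```
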
query.planning.fastPath.express

The number of queries that use an optimized index scan plan consisting of one of the following plan stages:

  • EXPRESS_CLUSTERED_IXSCAN
  • EXPRESS_DELETE
  • EXPRESS_IXSCAN
  • EXPRESS_UPDATE

For more information on query plans, see Explain Results.

New in version 8.1.

query.planning.fastPath.idHack

The number of queries that contain the _id field. For these queries, MongoDB uses the default index on the _id field and skips all query plan analysis.

New in version 8.1.

query.queryFramework.aggregate
A document that reports on the number of aggregation operations run on each query framework. The subfields in query.queryFramework.aggregate indicate the number of times each framework was used to perform an aggregation operation.
query.queryFramework.find
A document that reports on the number of find operations run on each query framework. The subfields in query.queryFramework.find indicate the number of times each framework was used to perform a find operation.
metrics.queryExecutor
A document that reports data from the query execution system.
metrics.queryExecutor.scanned
The total number of index items scanned during queries and query-plan evaluation. This counter is the same as totalKeysExamined in the output of explain().
metrics.queryExecutor.scannedObjects
The total number of documents scanned during queries and query-plan evaluation. This counter is the same as totalDocsExamined in the output of explain().
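
A common health check divides scannedObjects by scanned: a ratio near 1 suggests queries are well served by indexes, while a much larger ratio points toward document scanning. An illustrative sketch (sample values are hypothetical):

```javascript
// Sketch: documents examined per index key examined. A ratio near 1
// suggests selective index use; a much larger ratio suggests queries
// are scanning documents. Sample values are hypothetical.
function docsPerIndexKey(queryExecutor) {
  // scanned can be 0 when queries run as pure collection scans
  return queryExecutor.scanned === 0 ? 0 : queryExecutor.scannedObjects / queryExecutor.scanned;
}

const queryExecutor = { scanned: 1000, scannedObjects: 2500 };
console.log(docsPerIndexKey(queryExecutor)); // 2.5
```
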
metrics.queryExecutor.collectionScans
A document that reports on the number of queries that performed a collection scan.
metrics.queryExecutor.collectionScans.nonTailable
The number of queries that performed a collection scan that did not use a tailable cursor.
metrics.queryExecutor.collectionScans.total
The total number of queries that performed a collection scan. The total consists of queries that did and did not use a tailable cursor.
metrics.queryExecutor.profiler.collectionScans.nonTailable
The number of queries that performed a collection scan on a profile collection that did not use a tailable cursor.
metrics.queryExecutor.profiler.collectionScans.tailable
The number of queries that performed a collection scan on a profile collection that used a tailable cursor.
metrics.queryExecutor.profiler.collectionScans.total
The total number of queries that performed a collection scan on a profile collection. This includes queries that used both tailable and non-tailable cursors.
metrics.record
A document that reports on data related to record allocation in the on-disk memory files.
metrics.repl
A document that reports metrics related to the replication process. The metrics.repl document appears on all mongod instances, even those that aren't members of replica sets.
metrics.repl.apply
A document that reports on the application of operations from the replication oplog.
metrics.repl.apply.batchSize

The total number of oplog operations applied. The metrics.repl.apply.batchSize is incremented with the number of operations in a batch at the batch boundaries instead of being incremented by one after each operation.

For finer granularity, see metrics.repl.apply.ops.

metrics.repl.apply.batches
metrics.repl.apply.batches reports on the oplog application process on secondary members of replica sets. See Multithreaded Replication for more information on the oplog application processes.
metrics.repl.apply.batches.num
The total number of batches applied across all databases.
metrics.repl.apply.batches.totalMillis
The total amount of time in milliseconds the mongod has spent applying operations from the oplog.
metrics.repl.apply.ops

The total number of oplog operations applied. metrics.repl.apply.ops is incremented after each operation.
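
Since batchSize counts operations and batches.num counts batches, their ratio gives the average batch size, and batches.totalMillis / batches.num the average apply time per batch. A sketch with hypothetical values:

```javascript
// Sketch: average oplog batch size and apply latency from the apply
// counters above. Sample values are hypothetical.
function oplogApplyStats(apply) {
  const n = apply.batches.num;
  return {
    avgOpsPerBatch: n === 0 ? 0 : apply.batchSize / n,
    avgMillisPerBatch: n === 0 ? 0 : apply.batches.totalMillis / n,
  };
}

const apply = { batchSize: 5000, ops: 5000, batches: { num: 100, totalMillis: 2000 } };
console.log(oplogApplyStats(apply)); // { avgOpsPerBatch: 50, avgMillisPerBatch: 20 }
```
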

metrics.repl.write

Document that reports on entries written to the oplog.

New in version 8.0.

metrics.repl.write.batchSize

Total number of entries written to the oplog. This metric updates with the number of entries in each batch as the member finishes writing the batch to the oplog.

New in version 8.0.

metrics.repl.write.batches

Document that reports on the oplog writing process for secondary members.

New in version 8.0.

metrics.repl.write.batches.num

Total number of batches written across all databases.

New in version 8.0.

metrics.repl.write.batches.totalMillis

Total time in milliseconds the member has spent writing entries to the oplog.

New in version 8.0.

metrics.repl.buffer

MongoDB buffers oplog operations from the replication sync source buffer before applying oplog entries in a batch. metrics.repl.buffer provides a way to track oplog buffers. See Multithreaded Replication for more information on the oplog application process.

Changed in version 8.0.

Starting in MongoDB 8.0, secondaries now update the local oplog and apply changes to the database in parallel. For each batch of oplog entries, MongoDB uses two buffers:

  • The write buffer receives new oplog entries from the primary. The writer adds these entries to the local oplog and sends them to the applier.
  • The apply buffer receives new oplog entries from the writer. The applier uses these entries to update the local database.

This is a breaking change as it deprecates the older metrics.repl.buffer status metrics.

metrics.repl.buffer.apply

Provides information on the status of the oplog apply buffer.

New in version 8.0.

metrics.repl.buffer.apply.count

The current number of operations in the oplog apply buffer.

New in version 8.0.

metrics.repl.buffer.apply.maxCount

Maximum number of operations in the oplog apply buffer. mongod sets this value using a constant, which is not configurable.

New in version 8.0.

metrics.repl.buffer.apply.maxSizeBytes

Maximum size of the apply buffer. mongod sets this size using a constant, which is not configurable.

New in version 8.0.

metrics.repl.buffer.apply.sizeBytes

The current size of the contents of the oplog apply buffer.

New in version 8.0.
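
Comparing the current count and sizeBytes against their maximums shows how full the apply buffer is; a persistently full buffer suggests the applier cannot keep up with incoming oplog entries. A sketch with hypothetical values:

```javascript
// Sketch: how full the oplog apply buffer is, as percentages of its
// fixed limits. Field names match the apply buffer fields above;
// sample values are hypothetical.
function applyBufferUtilization(buf) {
  return {
    countPct: (100 * buf.count) / buf.maxCount,
    sizePct: (100 * buf.sizeBytes) / buf.maxSizeBytes,
  };
}

const applyBuffer = { count: 50, maxCount: 200, sizeBytes: 1048576, maxSizeBytes: 4194304 };
console.log(applyBufferUtilization(applyBuffer)); // { countPct: 25, sizePct: 25 }
```
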

metrics.repl.buffer.count

Deprecated since version 8.0.

Starting in MongoDB 8.0, secondaries use separate buffers to write and apply oplog entries. For the current number of operations in the oplog buffers, see the apply.count or write.count status metrics.

metrics.repl.buffer.maxSizeBytes

Deprecated since version 8.0.

Starting in MongoDB 8.0, secondaries use separate buffers to write and apply oplog entries. For the maximum size of the buffers, see the apply.maxSizeBytes or write.maxSizeBytes status metrics.

metrics.repl.buffer.sizeBytes

Deprecated since version 8.0.

Starting in MongoDB 8.0, secondaries use separate buffers to write and apply oplog entries. For the current size of the oplog buffers, see the apply.sizeBytes or write.sizeBytes status metrics.

metrics.repl.buffer.write

Provides information on the status of the oplog write buffer.

New in version 8.0.

metrics.repl.buffer.write.count

The current number of operations in the oplog write buffer.

New in version 8.0.

metrics.repl.buffer.write.maxSizeBytes

Maximum size of the write buffer. mongod sets this value using a constant, which is not configurable.

New in version 8.0.

metrics.repl.buffer.write.sizeBytes

The current size of the contents of the oplog write buffer.

New in version 8.0.

metrics.repl.network
metrics.repl.network reports network use by the replication process.
metrics.repl.network.bytes
metrics.repl.network.bytes reports the total amount of data read from the replication sync source.
metrics.repl.network.getmores
metrics.repl.network.getmores reports on the getmore operations, which are requests for additional results from the oplog cursor as part of the oplog replication process.
metrics.repl.network.getmores.num
metrics.repl.network.getmores.num reports the total number of getmore operations, which are operations that request an additional set of operations from the replication sync source.
metrics.repl.network.getmores.totalMillis

metrics.repl.network.getmores.totalMillis reports the total amount of time required to collect data from getmore operations.

Note

This number can be quite large, as MongoDB waits for more data even if the getmore operation does not initially return data.
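
Average getmore latency is totalMillis / num. As the note above explains, high averages are normal because the oplog cursor waits for new data. An illustrative sketch (sample values are hypothetical):

```javascript
// Sketch: average time per getmore against the sync source. High
// averages are expected because the oplog cursor waits for new data.
// Sample values are hypothetical.
function avgGetmoreMillis(getmores) {
  return getmores.num === 0 ? 0 : getmores.totalMillis / getmores.num;
}

const getmores = { num: 8, totalMillis: 4000 };
console.log(avgGetmoreMillis(getmores)); // 500 ms per getmore
```
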

metrics.repl.network.getmores.numEmptyBatches

The number of empty oplog batches a secondary receives from its sync source. A secondary receives an empty batch if it is fully synced with its source and either:

  • The getmore times out waiting for more data, or
  • The sync source's majority commit point has advanced since the last batch sent to this secondary.

For a primary, if the instance was previously a secondary, the number reports on the empty batches received when it was a secondary. Otherwise, for a primary, this number is 0.

metrics.repl.network.notPrimaryLegacyUnacknowledgedWrites
The number of unacknowledged (w: 0) legacy write operations (see Opcodes) that failed because the current mongod is not in PRIMARY state.
metrics.repl.network.notPrimaryUnacknowledgedWrites
The number of unacknowledged (w: 0) write operations that failed because the current mongod is not in PRIMARY state.
metrics.repl.network.oplogGetMoresProcessed
A document that reports the number of getMore commands to fetch the oplog that a node processed as a sync source.
metrics.repl.network.oplogGetMoresProcessed.num
The number of getMore commands to fetch the oplog that a node processed as a sync source.
metrics.repl.network.oplogGetMoresProcessed.totalMillis
The time, in milliseconds, that a node spent processing the getMore commands counted in metrics.repl.network.oplogGetMoresProcessed.num.
metrics.repl.network.ops
The total number of operations read from the replication source.
metrics.repl.network.readersCreated
The total number of oplog query processes created. MongoDB creates a new oplog query any time an error occurs in the connection, such as a timeout or a network error. Furthermore, metrics.repl.network.readersCreated increments every time MongoDB selects a new source for replication.
metrics.repl.network.replSetUpdatePosition
A document that reports the number of replSetUpdatePosition commands a node sent to its sync source.
metrics.repl.network.replSetUpdatePosition.num

The number of replSetUpdatePosition commands a node sent to its sync source. replSetUpdatePosition commands are internal replication commands that communicate replication progress from nodes to their sync sources.

Note

Replica set members in the STARTUP2 state do not send the replSetUpdatePosition command to their sync source.

metrics.repl.reconfig

A document containing the number of times that member newlyAdded fields were automatically removed by the primary. When a member is first added to the replica set, the member's newlyAdded field is set to true.

New in version 5.0.

metrics.repl.reconfig.numAutoReconfigsForRemovalOfNewlyAddedFields

The number of times that newlyAdded member fields were automatically removed by the primary. When a member is first added to the replica set, the member's newlyAdded field is set to true. After the primary receives the member's heartbeat response indicating the member state is SECONDARY, RECOVERING, or ROLLBACK, the primary automatically removes the member's newlyAdded field. The newlyAdded fields are stored in the local.system.replset collection.

New in version 5.0.

metrics.repl.stateTransition

Information on user operations when the member undergoes one of the following transitions that can stop user operations:

  • The member steps up to become a primary.
  • The member steps down to become a secondary.
  • The member is actively performing a rollback.
metrics.repl.stateTransition.lastStateTransition

The transition being reported:

  • "stepUp": The member steps up to become a primary.
  • "stepDown": The member steps down to become a secondary.
  • "rollback": The member is actively performing a rollback.
  • "": The member has not undergone any state changes.
metrics.repl.stateTransition.totalOperationsKilled

The total number of operations stopped during the mongod instance's state change.

New in version 7.3: totalOperationsKilled replaces userOperationsKilled.

metrics.repl.stateTransition.totalOperationsRunning

The total number of operations that remained running during the mongod instance's state change.

New in version 7.3: totalOperationsRunning replaces userOperationsRunning.

metrics.repl.stateTransition.userOperationsKilled

Deprecated since version 7.3: totalOperationsKilled replaces userOperationsKilled.

metrics.repl.stateTransition.userOperationsRunning

Deprecated since version 7.3: totalOperationsRunning replaces userOperationsRunning.

metrics.repl.syncSource
Information on a replica set node's sync source selection process.
metrics.repl.syncSource.numSelections
Number of times a node attempted to choose a node to sync from among the available sync source options. A node attempts to choose a node to sync from if, for example, the sync source is re-evaluated or the node receives an error from its current sync source.
metrics.repl.syncSource.numTimesChoseSame
Number of times a node kept its original sync source after re-evaluating if its current sync source was optimal.
metrics.repl.syncSource.numTimesChoseDifferent
Number of times a node chose a new sync source after re-evaluating if its current sync source was optimal.
metrics.repl.syncSource.numTimesCouldNotFind
Number of times a node could not find an available sync source when attempting to choose a node to sync from.
metrics.repl.timestamps

A document that reports on the replication timestamps.

New in version 8.1.

metrics.repl.timestamps.oldestTimestamp

The timestamp for the oldest snapshot. A snapshot is a copy of the data in a mongod instance at a specific point in time.

New in version 8.1.

metrics.repl.waiters.replication

The number of threads waiting for replicated or journaled write concern acknowledgments.

New in version 7.3.

metrics.repl.waiters.opTime

The number of threads queued for local replication optime assignments.

New in version 7.3.

metrics.repl.waiters.replCoordMutexTotalWaitTimeInOplogServerStatusMillis

The average wait time in milliseconds to acquire the replication coordinator mutex. MongoDB measures this time when it generates the server status oplog section. This metric helps you identify potential replication performance issues related to mutex contention.

New in version 8.2.

metrics.storage.freelist.search.bucketExhausted
The number of times that mongod has examined the free list without finding a large record allocation.
metrics.storage.freelist.search.requests
The number of times mongod has searched for available record allocations.
metrics.storage.freelist.search.scanned
The number of available record allocations mongod has searched.
metrics.ttl
A document that reports on resource use by the TTL index process.
metrics.ttl.deletedDocuments
The total number of documents deleted from collections with a ttl index.
metrics.ttl.invalidTTLIndexSkips

Number of TTL deletes skipped due to a TTL secondary index being present, but not valid for TTL deletion.

  • 0 indicates all secondary TTL indexes are eligible for TTL deletion.
  • A non-zero value indicates there is an invalid secondary TTL index.

If there is an invalid secondary TTL index, you must manually modify the secondary index to use automatic TTL deletion.

New in version 8.1.

metrics.ttl.passes
Number of passes performed by the TTL background process to check for expired documents. A pass is complete when the TTL monitor has deleted as many candidate documents as it can find from all TTL indexes. For more information on the TTL index deletion process, see Deletion Process.
metrics.ttl.subPasses
Number of sub-passes performed by the TTL background process to check for expired documents. For more information on the TTL index deletion process, see Deletion Process.
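
Dividing deletedDocuments by passes shows roughly how many expired documents each TTL pass removes; a steadily climbing ratio can indicate the TTL monitor is falling behind. An illustrative sketch (sample values are hypothetical):

```javascript
// Sketch: average expired documents removed per TTL pass. Sample
// values are hypothetical.
function ttlDeletesPerPass(ttl) {
  return ttl.passes === 0 ? 0 : ttl.deletedDocuments / ttl.passes;
}

const ttl = { deletedDocuments: 1200, passes: 60, subPasses: 90 };
console.log(ttlDeletesPerPass(ttl)); // 20 documents per pass
```
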
metrics.cursor
A document that contains data regarding cursor state and use.
metrics.cursor.moreThanOneBatch

The total number of cursors that have returned more than one batch since the server process started. Additional batches are retrieved using the getMore command.

New in version 5.0.

metrics.cursor.timedOut
The total number of cursors that have timed out since the server process started. If this number is large or growing at a regular rate, this may indicate an application error.
metrics.cursor.totalOpened

The total number of cursors that have been opened since the server process started, including cursors currently open. Differs from metrics.cursor.open.total, which is the number of currently open cursors only.

New in version 5.0.

metrics.cursor.lifespan

A document that reports the number of cursors that have lifespans within specified time periods. The cursor lifespan is the time period from when the cursor is created to when the cursor is killed using the killCursors command or the cursor has no remaining objects in the batch.

The lifespan time periods are:

  • < 1 second
  • >= 1 second to < 5 seconds
  • >= 5 seconds to < 15 seconds
  • >= 15 seconds to < 30 seconds
  • >= 30 seconds to < 1 minute
  • >= 1 minute to < 10 minutes
  • >= 10 minutes

New in version 5.0.

metrics.cursor.lifespan.greaterThanOrEqual10Minutes

The number of cursors with a lifespan >= 10 minutes.

New in version 5.0.

metrics.cursor.lifespan.lessThan10Minutes

The number of cursors with a lifespan >= 1 minute to < 10 minutes.

New in version 5.0.

metrics.cursor.lifespan.lessThan15Seconds

The number of cursors with a lifespan >= 5 seconds to < 15 seconds.

New in version 5.0.

metrics.cursor.lifespan.lessThan1Minute

The number of cursors with a lifespan >= 30 seconds to < 1 minute.

New in version 5.0.

metrics.cursor.lifespan.lessThan1Second

The number of cursors with a lifespan < 1 second.

New in version 5.0.

metrics.cursor.lifespan.lessThan30Seconds

The number of cursors with a lifespan >= 15 seconds to < 30 seconds.

New in version 5.0.

metrics.cursor.lifespan.lessThan5Seconds

The number of cursors with a lifespan >= 1 second to < 5 seconds.

New in version 5.0.
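
Summing the lifespan buckets and dividing the >= 10 minutes bucket by the total gives the fraction of long-lived cursors, which is often worth watching. A sketch over hypothetical bucket counts:

```javascript
// Sketch: fraction of cursors that lived 10 minutes or longer, from
// the lifespan buckets above. Sample bucket counts are hypothetical.
function longLivedCursorFraction(ls) {
  const total =
    ls.lessThan1Second + ls.lessThan5Seconds + ls.lessThan15Seconds +
    ls.lessThan30Seconds + ls.lessThan1Minute + ls.lessThan10Minutes +
    ls.greaterThanOrEqual10Minutes;
  return total === 0 ? 0 : ls.greaterThanOrEqual10Minutes / total;
}

const lifespan = {
  lessThan1Second: 50, lessThan5Seconds: 20, lessThan15Seconds: 10,
  lessThan30Seconds: 8, lessThan1Minute: 6, lessThan10Minutes: 4,
  greaterThanOrEqual10Minutes: 2,
};
console.log(longLivedCursorFraction(lifespan)); // 0.02
```
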

metrics.cursor.open
A document that contains data regarding open cursors.
metrics.cursor.open.noTimeout
The number of open cursors with the option DBQuery.Option.noTimeout set to prevent timeout after a period of inactivity.
metrics.cursor.open.pinned
The number of "pinned" open cursors.
metrics.cursor.open.total
The number of cursors that MongoDB is maintaining for clients. Because MongoDB exhausts unused cursors, this value is typically small or zero. However, if there is a queue, stale tailable cursors, or a large number of operations, this value may increase.
metrics.cursor.open.singleTarget
The total number of cursors that only target a single shard. Only mongos instances report metrics.cursor.open.singleTarget values.
metrics.cursor.open.multiTarget
The total number of cursors that target more than one shard. Only mongos instances report metrics.cursor.open.multiTarget values.

mirroredReads

Available on mongod only.

"mirroredReads" : {
"seen" : <num>,
"sent" : <num>
},
mirroredReads

Available on mongod only.

A document that reports on mirrored reads. To return mirroredReads information, you must explicitly specify the inclusion:

db.runCommand( { serverStatus: 1, mirroredReads: 1 } )
mirroredReads.processedAsSecondary

New in version 6.2.

The number of mirrored reads processed by this member while a secondary.

Tip

mirrorReads Parameter

mirroredReads.seen

The number of operations that support mirroring received by this member.

Tip

mirrorReads Parameter

mirroredReads.sent

The number of mirrored reads sent by this member when primary. For example, if a read is mirrored and sent to two secondaries, the number of mirrored reads is 2.

Tip

mirrorReads Parameter

network

network : {
egress : {
bytesIn : Long("<num>"),
bytesOut : Long("<num>"),
physicalBytesIn : Long("<num>"),
physicalBytesOut : Long("<num>"),
numRequests : Long("<num>"),
},
bytesIn : Long("<num>"),
bytesOut : Long("<num>"),
physicalBytesIn : Long("<num>"),
physicalBytesOut : Long("<num>"),
numSlowDNSOperations : Long("<num>"),
numSlowSSLOperations : Long("<num>"),
numRequests : Long("<num>"),
tcpFastOpen : {
kernelSetting : Long("<num>"),
serverSupported : <bool>,
clientSupported : <bool>,
accepted : Long("<num>")
},
compression : {
snappy : {
compressor : { bytesIn : Long("<num>"), bytesOut : Long("<num>") },
decompressor : { bytesIn : Long("<num>"), bytesOut : Long("<num>") }
},
zstd : {
compressor : { bytesIn : Long("<num>"), bytesOut : Long("<num>") },
decompressor : { bytesIn : Long("<num>"), bytesOut : Long("<num>") }
},
zlib : {
compressor : { bytesIn : Long("<num>"), bytesOut : Long("<num>") },
decompressor : { bytesIn : Long("<num>"), bytesOut : Long("<num>") }
}
},
serviceExecutors : {
passthrough : {
threadsRunning : <num>,
clientsInTotal : <num>,
clientsRunning : <num>,
clientsWaitingForData : <num>
},
fixed : {
threadsRunning : <num>,
clientsInTotal : <num>,
clientsRunning : <num>,
clientsWaitingForData : <num>
}
},
listenerProcessingTime : { durationMicros : <num> } // Added in MongoDB 6.3
}
network
A document that reports data on MongoDB's network use. These statistics measure ingress and egress connections, specifically the traffic seen by the mongod or mongos over network connections initiated by clients or other mongod or mongos instances.
network.egress
Reports data on the traffic from egress connections initiated by this mongod or mongos instance. In most cases, the egress connections are mongos communicating with mongod for sharding, or mongod communicating with mongod for replication. It is also possible that mongod or mongos are communicating with external services, such as mongot.
network.egress.bytesIn
The total number of logical bytes that mongod or mongos instances have received over network connections that they have initiated to other nodes/services. Logical bytes are the exact number of bytes that a given file contains.
network.egress.bytesOut
The total number of logical bytes that mongod or mongos instances have sent over network connections that they have initiated to other nodes/services. Logical bytes correspond to the number of bytes that a given file contains.
network.egress.physicalBytesIn
The total number of physical bytes that mongod or mongos instances have received over network connections that they have initiated to other nodes/services. Physical bytes are the number of bytes that actually reside on disk.
network.egress.physicalBytesOut
The total number of physical bytes that mongod or mongos instances have sent over network connections that they have initiated to other nodes/services. Physical bytes are the number of bytes that actually reside on disk.
network.egress.numRequests
The total number of distinct requests that mongod or mongos have sent and received responses to. Use this value to provide context for the network.egress.bytesIn and network.egress.bytesOut values to ensure that MongoDB's network utilization is consistent with expectations and application use.
network.bytesIn
The total number of logical bytes that the server has received over network connections initiated by clients or other mongod or mongos instances. Logical bytes are the exact number of bytes that a given file contains.
network.bytesOut
The total number of logical bytes that the server has sent over network connections initiated by clients or other mongod or mongos instances. Logical bytes correspond to the number of bytes that a given file contains.
network.physicalBytesIn
The total number of physical bytes that the server has received over network connections initiated by clients or other mongod or mongos instances. Physical bytes are the number of bytes that actually reside on disk.
network.physicalBytesOut
The total number of physical bytes that the server has sent over network connections initiated by clients or other mongod or mongos instances. Physical bytes are the number of bytes that actually reside on disk.
network.numSlowDNSOperations
The total number of DNS resolution operations which took longer than 1 second.
network.numSlowSSLOperations
The total number of SSL handshake operations which took longer than 1 second.
network.numRequests
The total number of distinct requests that the server has received. Use this value to provide context for the network.bytesIn and network.bytesOut values to ensure that MongoDB's network utilization is consistent with expectations and application use.
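One way to put numRequests to use: divide the logical byte counters by it to estimate the average payload per request. A minimal sketch, runnable in Node.js or mongosh, using hypothetical sample values in place of live serverStatus output:

```javascript
// Derive average logical bytes per request from the network section.
// The sample values below are hypothetical; in mongosh you would read
// them from db.runCommand({ serverStatus: 1 }).network instead.
const network = { bytesIn: 52428800, bytesOut: 104857600, numRequests: 4096 };

const avgBytesInPerRequest = network.bytesIn / network.numRequests;
const avgBytesOutPerRequest = network.bytesOut / network.numRequests;

console.log(avgBytesInPerRequest);  // 12800
console.log(avgBytesOutPerRequest); // 25600
```

Sudden changes in these averages can indicate a workload shift, such as larger documents or wider query results.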
network.tcpFastOpen
A document that reports data on MongoDB's support and use of TCP Fast Open (TFO) connections.
network.tcpFastOpen.kernelSetting

Linux only

Returns the value of /proc/sys/net/ipv4/tcp_fastopen:

  • 0 - TCP Fast Open is disabled on the system.
  • 1 - TCP Fast Open is enabled for outgoing connections.
  • 2 - TCP Fast Open is enabled for incoming connections.
  • 3 - TCP Fast Open is enabled for incoming and outgoing connections.
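Because values 1 and 2 combine into 3, the setting behaves as a two-bit mask (bit 1 = outgoing, bit 2 = incoming). A small decode helper, assuming only the four documented values:

```javascript
// Decode the /proc/sys/net/ipv4/tcp_fastopen value reported in
// network.tcpFastOpen.kernelSetting (bit 1 = outgoing, bit 2 = incoming).
function decodeTfoSetting(kernelSetting) {
  return {
    outgoingEnabled: (kernelSetting & 1) !== 0,
    incomingEnabled: (kernelSetting & 2) !== 0
  };
}

console.log(decodeTfoSetting(3)); // { outgoingEnabled: true, incomingEnabled: true }
```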
network.tcpFastOpen.serverSupported
  • Returns true if the host operating system supports inbound TCP Fast Open (TFO) connections.
  • Returns false if the host operating system does not support inbound TCP Fast Open (TFO) connections.
network.tcpFastOpen.clientSupported
  • Returns true if the host operating system supports outbound TCP Fast Open (TFO) connections.
  • Returns false if the host operating system does not support outbound TCP Fast Open (TFO) connections.
network.tcpFastOpen.accepted
The total number of accepted incoming TCP Fast Open (TFO) connections to the mongod or mongos since the mongod or mongos last started.
network.compression
A document that reports on the amount of data compressed and decompressed by each network compressor library.
network.compression.snappy
A document that returns statistics on the number of bytes that have been compressed and decompressed with the snappy library.
network.compression.zstd
A document that returns statistics on the number of bytes that have been compressed and decompressed with the zstd library.
network.compression.zlib
A document that returns statistics on the number of bytes that have been compressed and decompressed with the zlib library.
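From each compressor sub-document you can derive an effective compression ratio (uncompressed bytesIn divided by compressed bytesOut). A sketch with hypothetical sample counters, following the field layout shown above:

```javascript
// Compute an effective compression ratio for each negotiated library.
// Sample values are hypothetical; compressor.bytesIn is the data fed to
// the compressor and compressor.bytesOut is the compressed result.
const compression = {
  snappy: { compressor: { bytesIn: 1000000, bytesOut: 400000 } },
  zstd:   { compressor: { bytesIn: 1000000, bytesOut: 250000 } },
  zlib:   { compressor: { bytesIn: 1000000, bytesOut: 200000 } }
};

for (const [lib, stats] of Object.entries(compression)) {
  const { bytesIn, bytesOut } = stats.compressor;
  const ratio = bytesOut === 0 ? 0 : bytesIn / bytesOut;
  console.log(`${lib}: ${ratio.toFixed(1)}x`);
}
```

A library whose counters stay at zero was simply never negotiated by any connection.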
network.serviceExecutors

New in version 5.0.

A document that reports data on the service executors, which run operations for client requests.

network.serviceExecutors.passthrough

New in version 5.0.

A document that reports data about the threads and clients for the passthrough service executor. The passthrough service executor creates a new thread for each client and destroys the thread after the client ends.

network.serviceExecutors.passthrough.threadsRunning

New in version 5.0.

Number of threads running in the passthrough service executor.

network.serviceExecutors.passthrough.clientsInTotal

New in version 5.0.

Total number of clients allocated to the passthrough service executor. A client can be allocated to the passthrough service executor without currently running any requests.

network.serviceExecutors.passthrough.clientsRunning

New in version 5.0.

Number of clients currently using the passthrough service executor to run requests.

network.serviceExecutors.passthrough.clientsWaitingForData

New in version 5.0.

Number of clients using the passthrough service executor that are waiting for incoming data from the network.

network.serviceExecutors.fixed

New in version 5.0.

A document that reports data about the threads and clients for the fixed service executor. The fixed service executor has a fixed number of threads. A thread is temporarily assigned to a client and the thread is preserved after the client ends.

network.serviceExecutors.fixed.threadsRunning

New in version 5.0.

Number of threads running in the fixed service executor.

network.serviceExecutors.fixed.clientsInTotal

New in version 5.0.

Total number of clients allocated to the fixed service executor. A client can be allocated to the fixed service executor without currently running any requests.

network.serviceExecutors.fixed.clientsRunning

New in version 5.0.

Number of clients currently using the fixed service executor to run requests.

network.serviceExecutors.fixed.clientsWaitingForData

New in version 5.0.

Number of clients using the fixed service executor that are waiting for incoming data from the network.

network.listenerProcessingTime

New in version 6.3.

A document that reports the total time the database listener spends allocating incoming database connection requests to dedicated threads.

network.listenerProcessingTime.durationMicros

New in version 6.3.

Total time in microseconds the database listener spends allocating incoming database connection requests to dedicated threads that perform database operations.

opLatencies

opLatencies : {
reads : <document>,
writes : <document>,
commands : <document>,
transactions : <document>
},
opLatencies

A document containing operation latencies for the instance as a whole. See latencyStats Document for a description of this document.

Starting in MongoDB 6.2, the opLatencies metric reports for both mongod and mongos instances. Latencies reported by mongos include operation latency time and communication time between the mongod and mongos instances.

To include the histogram in the opLatencies output, run the following command:

db.runCommand( { serverStatus: 1, opLatencies: { histograms: true } } ).opLatencies
opLatencies.reads
Latency statistics for read requests.
opLatencies.writes
Latency statistics for write operations.
opLatencies.commands
Latency statistics for database commands.
opLatencies.transactions
Latency statistics for database transactions.
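Each of these fields is a latencyStats document containing a cumulative latency value in microseconds and an ops counter, so the mean latency per operation is latency / ops. A sketch using hypothetical sample values:

```javascript
// Mean latency in microseconds from a latencyStats document:
// latency is cumulative time in microseconds, ops is the operation count.
// Sample values are hypothetical.
const opLatencies = {
  reads:  { latency: 12000000, ops: 6000 },
  writes: { latency: 9000000,  ops: 3000 }
};

function meanLatencyMicros(stats) {
  // Guard against division by zero when no operations have run yet.
  return stats.ops === 0 ? 0 : stats.latency / stats.ops;
}

console.log(meanLatencyMicros(opLatencies.reads));  // 2000
console.log(meanLatencyMicros(opLatencies.writes)); // 3000
```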

opWorkingTime

opWorkingTime : {
commands : <document>,
reads : <document>,
writes : <document>,
transactions : <document>
}
opWorkingTime

Document that includes information on operation execution for the instance. See latencyStats Document for a description of this document.

The fields under opWorkingTime are measured in workingMillis, which is the amount of time that MongoDB spends working on that operation. This means that factors such as waiting for locks and flow control don't affect opWorkingTime.

To include the histogram in the opWorkingTime output, run the following command:

db.runCommand( { serverStatus: 1, opWorkingTime: { histogram: true } } ).opWorkingTime

New in version 8.0.

opWorkingTime.commands

Document that reports execution statistics for database commands.

New in version 8.0.

opWorkingTime.reads

Document that reports execution statistics for read operations.

New in version 8.0.

opWorkingTime.writes

Document that reports execution statistics for write operations.

New in version 8.0.

opWorkingTime.transactions

Document that reports execution statistics for transactions.

New in version 8.0.

opReadConcernCounters

Warning

Removed

Starting in version 5.0, opReadConcernCounters is replaced by readConcernCounters.

Only for mongod instances

opReadConcernCounters : {
available : Long("<num>"),
linearizable : Long("<num>"),
local : Long("<num>"),
majority : Long("<num>"),
snapshot : Long("<num>"),
none : Long("<num>")
}
opReadConcernCounters

Removed in version 5.0. Replaced by readConcernCounters.

A document that reports on the read concern level specified by query operations to the mongod instance since it last started.

Specified Read Concern and Description:

  • "available": Number of query operations that specified read concern level "available".
  • "linearizable": Number of query operations that specified read concern level "linearizable".
  • "local": Number of query operations that specified read concern level "local".
  • "majority": Number of query operations that specified read concern level "majority".
  • "snapshot": Number of query operations that specified read concern level "snapshot".
  • "none": Number of query operations that did not specify a read concern level and instead used the default read concern level.

The sum of the opReadConcernCounters equals opcounters.query.
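You can check this invariant directly against a serverStatus sample. A sketch with hypothetical counts standing in for live output:

```javascript
// Verify that the read concern counters sum to opcounters.query.
// Sample counts are hypothetical.
const opReadConcernCounters = {
  available: 10, linearizable: 2, local: 50,
  majority: 30, snapshot: 4, none: 104
};
const opcountersQuery = 200;

const total = Object.values(opReadConcernCounters)
  .reduce((sum, n) => sum + n, 0);

console.log(total === opcountersQuery); // true
```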

opWriteConcernCounters

Only for mongod instances

opWriteConcernCounters : {
insert : {
wmajority : Long("<num>"),
wnum : {
<num> : Long("<num>"),
...
},
wtag : {
<tag1> : Long("<num>"),
...
},
none : Long("<num>"),
noneInfo : {
CWWC : {
wmajority : Long("<num>"),
wnum : {
<num> : Long("<num>"),
...
},
wtag : {
<tag1> : Long("<num>"),
...
}
},
implicitDefault : {
wmajority : Long("<num>"),
wnum : {
<num> : Long("<num>"),
...
}
}
}
},
update : {
wmajority : Long("<num>"),
wnum : {
<num> : Long("<num>"),
...
},
wtag : {
<tag1> : Long("<num>"),
...
},
none : Long("<num>"),
noneInfo : {
CWWC : {
wmajority : Long("<num>"),
wnum : {
<num> : Long("<num>"),
...
},
wtag : {
<tag1> : Long("<num>"),
...
}
},
implicitDefault : {
wmajority : Long("<num>"),
wnum : {
<num> : Long("<num>"),
...
}
}
}
},
delete : {
wmajority : Long("<num>"),
wnum : {
<num> : Long("<num>"),
...
},
wtag : {
<tag1> : Long("<num>"),
...
},
none : Long("<num>"),
noneInfo : {
CWWC : {
wmajority : Long("<num>"),
wnum : {
<num> : Long("<num>"),
...
},
wtag : {
<tag1> : Long("<num>"),
...
}
},
implicitDefault : {
wmajority : Long("<num>"),
wnum : {
<num> : Long("<num>"),
...
}
}
}
}
}
opWriteConcernCounters

A document that reports on the write concerns specified by write operations to the mongod instance since it last started.

More specifically, the opWriteConcernCounters reports on the w: <value> specified by the write operations. The journal flag option (j) and the timeout option (wtimeout) of the write concerns do not affect the count. The count is incremented even if the operation times out.

Note

Only available when reportOpWriteConcernCountersInServerStatus parameter is set to true (false by default).

opWriteConcernCounters.insert

A document that reports on the w: <value> specified by insert operations to the mongod instance since it last started:

Note

Only available when reportOpWriteConcernCountersInServerStatus parameter is set to true (false by default).

insert : {
wmajority : Long("<num>"),
wnum : {
<num> : Long("<num>"),
...
},
wtag : {
<tag1> : Long("<num>"),
...
},
none : Long("<num>"),
noneInfo : {
CWWC : {
wmajority : Long("<num>"),
wnum : {},
wtag : {}
},
implicitDefault : {
wmajority : Long("<num>"),
wnum : {}
}
}
},
Specified w and Description:

  • "wmajority": Number of insert operations that specified w: "majority".
  • "wnum": Number of insert operations that specified w: <num>. The counts are grouped by the specific <num>.
  • "wtag": Number of insert operations that specified w: <tag>. The counts are grouped by the specific <tag>.
  • "none": Number of insert operations that did not specify a w value. These operations use the default w value of "majority".
  • "noneInfo": Number of non-transaction query operations that use default write concerns. The metrics track usage of the cluster wide write concern (the global default write concern) and the implicit-default write concern. The sum of the values in opWriteConcernCounters.noneInfo should equal the value of opWriteConcernCounters.none.

The sum of the opWriteConcernCounters.insert equals opcounters.insert.
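Because wnum and wtag group their counts by the specific w value, verifying this sum requires flattening those nested documents first. A sketch with hypothetical counts:

```javascript
// Sum all insert write concern counters, flattening the wnum and wtag
// groupings, for comparison against opcounters.insert.
// Sample counts are hypothetical.
const insertCounters = {
  wmajority: 120,
  wnum: { "1": 40, "2": 10 },
  wtag: { "east": 5 },
  none: 25
};

const sumGroup = (group) =>
  Object.values(group).reduce((sum, n) => sum + n, 0);

const total = insertCounters.wmajority +
  sumGroup(insertCounters.wnum) +
  sumGroup(insertCounters.wtag) +
  insertCounters.none;

console.log(total); // 200
```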

opWriteConcernCounters.update

A document that reports on the w: <value> specified by update operations to the mongod instance since it last started:

Note

Only available when reportOpWriteConcernCountersInServerStatus parameter is set to true (false by default).

update : {
wmajority : Long("<num>"),
wnum : {
<num> : Long("<num>"),
...
},
wtag : {
<tag1> : Long("<num>"),
...
},
none : Long("<num>"),
noneInfo : {
CWWC : {
wmajority : Long("<num>"),
wnum : {},
wtag : {}
},
implicitDefault : {
wmajority : Long("<num>"),
wnum : {}
}
}
},
Specified w and Description:

  • "wmajority": Number of update operations that specified w: "majority".
  • "wnum": Number of update operations that specified w: <num>. The counts are grouped by the specific <num>.
  • "wtag": Number of update operations that specified w: <tag>. The counts are grouped by the specific <tag>.
  • "none": Number of update operations that did not specify a w value. These operations use the default w value of "majority".
  • "noneInfo": Number of non-transaction query operations that use default write concerns. The metrics track usage of the cluster wide write concern (the global default write concern) and the implicit-default write concern. The sum of the values in opWriteConcernCounters.noneInfo should equal the value of opWriteConcernCounters.none.

The sum of the opWriteConcernCounters.update equals opcounters.update.

opWriteConcernCounters.delete

A document that reports on the w: <value> specified by delete operations to the mongod instance since it last started:

Note

Only available when reportOpWriteConcernCountersInServerStatus parameter is set to true (false by default).

delete : {
wmajority : Long("<num>"),
wnum : {
<num> : Long("<num>"),
...
},
wtag : {
<tag1> : Long("<num>"),
...
},
none : Long("<num>"),
noneInfo : {
CWWC : {
wmajority : Long("<num>"),
wnum : {},
wtag : {}
},
implicitDefault : {
wmajority : Long("<num>"),
wnum : {}
}
}
}
Specified w and Description:

  • "wmajority": Number of delete operations that specified w: "majority".
  • "wnum": Number of delete operations that specified w: <num>. The counts are grouped by the specific <num>.
  • "wtag": Number of delete operations that specified w: <tag>. The counts are grouped by the specific <tag>.
  • "none": Number of delete operations that did not specify a w value. These operations use the default w value of "majority".
  • "noneInfo": Number of non-transaction query operations that use default write concerns. The metrics track usage of the cluster wide write concern (the global default write concern) and the implicit-default write concern. The sum of the values in opWriteConcernCounters.noneInfo should equal the value of opWriteConcernCounters.none.

The sum of the opWriteConcernCounters.delete equals opcounters.delete.

opcounters

opcounters : {
insert : Long("<num>"),
query : Long("<num>"),
update : Long("<num>"),
delete : Long("<num>"),
getmore : Long("<num>"),
command : Long("<num>"),
},
opcounters

A document that reports on database operations by type since the mongod instance last started.

These numbers will grow over time until next restart. Analyze these values over time to track database utilization.

Note

The data in opcounters treats operations that affect multiple documents, such as bulk insert or multi-update operations, as a single operation. See metrics.document for more granular document-level operation tracking.

Additionally, these values reflect received operations, and increment even when operations are not successful.
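Because the counters only grow until the next restart, throughput is the delta between two samples divided by the sampling interval. A sketch with two hypothetical opcounters samples taken 60 seconds apart:

```javascript
// Compute per-second operation rates from two serverStatus opcounters
// samples taken intervalSeconds apart. Sample documents are hypothetical.
function opRates(prev, curr, intervalSeconds) {
  const rates = {};
  for (const op of Object.keys(curr)) {
    rates[op] = (curr[op] - prev[op]) / intervalSeconds;
  }
  return rates;
}

const sample1 = { insert: 1000, query: 5000, update: 300, delete: 50 };
const sample2 = { insert: 1600, query: 5900, update: 420, delete: 80 };

console.log(opRates(sample1, sample2, 60));
// { insert: 10, query: 15, update: 2, delete: 0.5 }
```

This is essentially what mongostat reports for you at one-second intervals.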

opcounters.insert
The total number of insert operations received since the mongod instance last started.
opcounters.query
The total number of queries received since the mongod instance last started. Starting in MongoDB 7.1, aggregations count as query operations and increment this value.
opcounters.update
The total number of update operations received since the mongod instance last started.
opcounters.delete
The total number of delete operations since the mongod instance last started.
opcounters.getmore
The total number of getMore operations since the mongod instance last started. This counter can be high even if the query count is low. Secondary nodes send getMore operations as part of the replication process.
opcounters.command

The total number of commands issued to the database since the mongod instance last started.

opcounters.command counts all commands except the following:

opcounters.deprecated

opQuery counts the number of requests for opcodes that are deprecated in MongoDB 5.0 but are temporarily supported. This section only appears in the db.serverStatus() output when a deprecated opcode has been used.

The counter is reset when mongod starts.

deprecated: {
opQuery: Long("<num>"),
}

opcountersRepl

The returned opcountersRepl.* values are type NumberLong.

opcountersRepl : {
insert : Long("<num>"),
query : Long("<num>"),
update : Long("<num>"),
delete : Long("<num>"),
getmore : Long("<num>"),
command : Long("<num>"),
},
opcountersRepl

A document that reports on database replication operations by type since the mongod instance last started.

These values only appear when the current host is a member of a replica set.

These values will differ from the opcounters values because of how MongoDB serializes operations during replication. See Replication for more information on replication.

These numbers will grow over time in response to database use until next restart. Analyze these values over time to track database utilization.

The returned opcountersRepl.* values are type NumberLong.

opcountersRepl.insert

The total number of replicated insert operations since the mongod instance last started.

The returned opcountersRepl.* values are type NumberLong.

opcountersRepl.query

The total number of replicated queries since the mongod instance last started.

The returned opcountersRepl.* values are type NumberLong.

opcountersRepl.update

The total number of replicated update operations since the mongod instance last started.

The returned opcountersRepl.* values are type NumberLong.

opcountersRepl.delete

The total number of replicated delete operations since the mongod instance last started.

The returned opcountersRepl.* values are type NumberLong.

opcountersRepl.getmore

The total number of getMore operations since the mongod instance last started. This counter can be high even if the query count is low. Secondary nodes send getMore operations as part of the replication process.

The returned opcountersRepl.* values are type NumberLong.

opcountersRepl.command

The total number of replicated commands issued to the database since the mongod instance last started.

The returned opcountersRepl.* values are type NumberLong.

oplogTruncation

oplogTruncation : {
totalTimeProcessingMicros : Long("<num>"),
processingMethod : <string>,
oplogMinRetentionHours : <double>,
totalTimeTruncatingMicros : Long("<num>"),
truncateCount : Long("<num>")
},
oplogTruncation

A document that reports on oplog truncations.

The field only appears when the current instance is a member of a replica set and uses either the WiredTiger Storage Engine or In-Memory Storage Engine for Self-Managed Deployments.

Available in the WiredTiger Storage Engine.

oplogTruncation.totalTimeProcessingMicros

The total time taken, in microseconds, to scan or sample the oplog to determine the oplog truncation points.

totalTimeProcessingMicros is only meaningful if the mongod instance started on existing data files (i.e. not meaningful for In-Memory Storage Engine for Self-Managed Deployments).

See oplogTruncation.processingMethod

Available in the WiredTiger Storage Engine.

oplogTruncation.processingMethod

The method used at start up to determine the oplog truncation points. The value can be either "sampling" or "scanning".

processingMethod is only meaningful if the mongod instance started on existing data files (i.e. not meaningful for In-Memory Storage Engine for Self-Managed Deployments).

Available in the WiredTiger Storage Engine.

oplogTruncation.oplogMinRetentionHours

The minimum retention period for the oplog in hours. If the oplog has exceeded the oplog size, the mongod only truncates oplog entries older than the configured retention value.

Only visible if the mongod is a member of a replica set and:

oplogTruncation.totalTimeTruncatingMicros

The cumulative time spent, in microseconds, performing oplog truncations.

Available in the WiredTiger Storage Engine.

oplogTruncation.truncateCount

The cumulative number of oplog truncations.

Available in the WiredTiger Storage Engine.
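Dividing totalTimeTruncatingMicros by truncateCount gives the average cost of a single truncation pass. A sketch with hypothetical values:

```javascript
// Average duration of a single oplog truncation pass, in microseconds.
// Sample values are hypothetical.
const oplogTruncation = {
  totalTimeTruncatingMicros: 1500000,
  truncateCount: 300
};

const avgMicros = oplogTruncation.truncateCount === 0
  ? 0
  : oplogTruncation.totalTimeTruncatingMicros / oplogTruncation.truncateCount;

console.log(avgMicros); // 5000
```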

planCache

New in version 7.0.

planCache : {
totalQueryShapes : Long("<num>"),
totalSizeEstimateBytes : Long("<num>"),
classic : {
hits : Long("<num>"),
misses : Long("<num>"),
replanned : Long("<num>"),
replanned_plan_is_cached_plan : Long("<num>"),
skipped : Long("<num>")
},
sbe : {
hits : Long("<num>"),
misses: Long("<num>"),
replanned : Long("<num>"),
replanned_plan_is_cached_plan : Long("<num>"),
skipped : Long("<num>")
}
}
planCache
A document that reports query plan cache statistics.
planCache.totalQueryShapes

Approximate number of plan cache query shapes.

Prior to version 7.2, information on the number of plan cache query shapes was stored in the query.planCacheTotalQueryShapes field.

New in version 7.2.

planCache.totalSizeEstimateBytes

Total size of the plan cache in bytes.

Prior to version 7.2, information on the plan cache size was stored in the query.planCacheTotalSizeEstimateBytes field.

New in version 7.2.

planCache.classic.hits
Number of classic execution engine query plans found in the query cache and reused to avoid the query planning phase.
planCache.classic.misses
Number of classic execution engine query plans which were not found in the query cache and went through the query planning phase.
planCache.classic.replanned

Number of classic execution engine query plans that were discarded and re-optimized.

New in version 8.0. (Also available in 7.0.22)

planCache.classic.replanned_plan_is_cached_plan

Number of times the server performed a replan operation for the classic execution engine that produced a plan identical to one already in the query cache.

New in version 8.2.

planCache.classic.skipped

Number of classic execution engine query plans that were not found in the query cache because the query is ineligible for caching.

New in version 7.3.

planCache.sbe.hits
Number of slot-based execution engine query plans found in the query cache and reused to avoid the query planning phase.
planCache.sbe.misses
Number of slot-based execution engine plans which were not found in the query cache and went through the query planning phase.
planCache.sbe.replanned

Number of slot-based execution engine query plans that were discarded and re-optimized.

New in version 8.0. (Also available in 7.0.22)

planCache.sbe.replanned_plan_is_cached_plan

Number of times the server performed a replan operation for the slot-based execution engine that produced a plan identical to one already in the query cache.

New in version 8.2.

planCache.sbe.skipped

Number of slot-based execution engine query plans that were not found in the query cache because the query is ineligible for caching.

New in version 7.3.
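A common derived metric from these counters is the per-engine cache hit rate, hits / (hits + misses); skipped lookups never consult the cache, so they are excluded. A sketch with hypothetical counts:

```javascript
// Plan cache hit rate per execution engine: hits / (hits + misses).
// Sample counts are hypothetical.
function hitRate({ hits, misses }) {
  const lookups = hits + misses;
  // Guard against division by zero when no eligible queries have run.
  return lookups === 0 ? 0 : hits / lookups;
}

const planCache = {
  classic: { hits: 750, misses: 250 },
  sbe:     { hits: 90,  misses: 10 }
};

console.log(hitRate(planCache.classic)); // 0.75
console.log(hitRate(planCache.sbe));     // 0.9
```

A persistently low hit rate alongside a high replanned count can point to query shapes whose optimal plan keeps changing.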

profiler

profiler: {
totalWrites: <integer>,
activeWriters: <integer>
}
profiler.totalWrites
Total number of writes to profile collections on all databases.
profiler.activeWriters
The instantaneous number of operations writing to a profile collection on all databases.

queryStats

New in version 7.1.

queryStats: {
numEvicted: Long("<num>"),
numRateLimitedRequests: Long("<num>"),
queryStatsStoreSizeEstimateBytes: Long("<num>"),
numQueryStatsStoreWriteErrors: Long("<num>"),
numHmacApplicationErrors: Long("<num>")
},
queryStats
A document that contains metrics for the $queryStats aggregation stage.
queryStats.numEvicted
Number of queries that the $queryStats virtual collection has evicted due to space constraints.
queryStats.numRateLimitedRequests
Number of times that query stats were not recorded for a query due to rate limiting.
queryStats.queryStatsStoreSizeEstimateBytes
Current estimated size of objects in the $queryStats virtual collection.
queryStats.numQueryStatsStoreWriteErrors
Number of times this MongoDB process failed to store a new query stats key. Generally, these failures happen when the $queryStats virtual collection runs out of space.
queryStats.numHmacApplicationErrors
Number of times this MongoDB process failed to compute a one-way tokenized query stats key when $queryStats was called with the transformIdentifiers option.

queryAnalyzers

New in version 7.0.

queryAnalyzers: {
activeCollections: <integer>,
totalCollections: <integer>,
totalSampledReadsCount: <integer>,
totalSampledWritesCount: <integer>,
totalSampledReadsBytes: <integer>,
totalSampledWritesBytes: <integer>
}
queryAnalyzers.activeCollections
Number of collections the query analyzer actively samples.
queryAnalyzers.totalCollections
Total number of sampled collections.
queryAnalyzers.totalSampledReadsCount
Total number of sampled read queries.
queryAnalyzers.totalSampledWritesCount
Total number of sampled write queries.
queryAnalyzers.totalSampledReadsBytes
Total size of sampled read queries, in bytes. This metric is only available when running serverStatus on mongod.
queryAnalyzers.totalSampledWritesBytes
Total size of sampled write queries, in bytes. This metric is only available when running serverStatus on mongod.

queues

As an operation proceeds through its stages, it may enter a queue if the number of concurrent operations at the current stage exceeds a maximum threshold. This prevents excessive resource contention and provides observability into the state of the database.

New in version 8.0.

queues: {
execution: {
write: {
out: Long("<num>"),
available: Long("<num>"),
totalTickets: Long("<num>"),
exempt: {
addedToQueue: Long("<num>"),
removedFromQueue: Long("<num>"),
queueLength: Long("<num>"),
startedProcessing: Long("<num>"),
processing: Long("<num>"),
finishedProcessing: Long("<num>"),
totalTimeProcessingMicros: Long("<num>"),
canceled: Long("<num>"),
newAdmissions: Long("<num>"),
totalTimeQueuedMicros: Long("<num>")
},
normalPriority: {
addedToQueue: Long("<num>"),
removedFromQueue: Long("<num>"),
queueLength: Long("<num>"),
startedProcessing: Long("<num>"),
processing: Long("<num>"),
finishedProcessing: Long("<num>"),
totalTimeProcessingMicros: Long("<num>"),
canceled: Long("<num>"),
newAdmissions: Long("<num>"),
totalTimeQueuedMicros: Long("<num>")
}
},
read: {
out: Long("<num>"),
available: Long("<num>"),
totalTickets: Long("<num>"),
exempt: {
addedToQueue: Long("<num>"),
removedFromQueue: Long("<num>"),
queueLength: Long("<num>"),
startedProcessing: Long("<num>"),
processing: Long("<num>"),
finishedProcessing: Long("<num>"),
totalTimeProcessingMicros: Long("<num>"),
canceled: Long("<num>"),
newAdmissions: Long("<num>"),
totalTimeQueuedMicros: Long("<num>")
},
normalPriority: {
addedToQueue: Long("<num>"),
removedFromQueue: Long("<num>"),
queueLength: Long("<num>"),
startedProcessing: Long("<num>"),
processing: Long("<num>"),
finishedProcessing: Long("<num>"),
totalTimeProcessingMicros: Long("<num>"),
canceled: Long("<num>"),
newAdmissions: Long("<num>"),
totalTimeQueuedMicros: Long("<num>")
}
},
monitor: {
timesDecreased: Long("<num>"),
timesIncreased: Long("<num>"),
totalAmountDecreased: Long("<num>"),
totalAmountIncreased: Long("<num>"),
resizeDurationMicros: Long("<num>")
}
},
ingress: {
out: Long("<num>"),
available: Long("<num>"),
totalTickets: Long("<num>"),
exempt: {
addedToQueue: Long("<num>"),
removedFromQueue: Long("<num>"),
queueLength: Long("<num>"),
startedProcessing: Long("<num>"),
processing: Long("<num>"),
finishedProcessing: Long("<num>"),
totalTimeProcessingMicros: Long("<num>"),
canceled: Long("<num>"),
newAdmissions: Long("<num>"),
totalTimeQueuedMicros: Long("<num>")
},
normalPriority: {
addedToQueue: Long("<num>"),
removedFromQueue: Long("<num>"),
queueLength: Long("<num>"),
startedProcessing: Long("<num>"),
processing: Long("<num>"),
finishedProcessing: Long("<num>"),
totalTimeProcessingMicros: Long("<num>"),
canceled: Long("<num>"),
newAdmissions: Long("<num>"),
totalTimeQueuedMicros: Long("<num>")
}
},
ingressSessionEstablishment: { // Added in MongoDB 8.2
"addedToQueue": Long("<num>"),
"removedFromQueue": Long("<num>"),
"interruptedInQueue": Long("<num>"),
"rejectedAdmissions": Long("<num>"),
"exemptedAdmissions": Long("<num>"),
"successfulAdmissions": Long("<num>"),
"attemptedAdmissions": Long("<num>"),
"averageTimeQueuedMicros": Long("<num>"),
"totalAvailableTokens": Long("<num>")
}
}
queues.execution

New in version 8.0.

A document that returns monitoring and queue information for operations waiting to be scheduled for execution within the storage layer (concurrent transactions).

These settings are MongoDB-specific. To change the settings for concurrent reads and write transactions (read and write tickets), see storageEngineConcurrentReadTransactions and storageEngineConcurrentWriteTransactions.

Important

Starting in version 7.0, MongoDB uses a default algorithm to dynamically adjust the maximum number of concurrent storage engine transactions (including both read and write tickets) to optimize database throughput during overload.

The following table summarizes how to identify overload scenarios for MongoDB post-7.0 and for earlier releases:

Version and Diagnosing Overload Scenarios:

  • 7.0 and later: A large number of queued operations that persists for a prolonged period of time likely indicates an overload. A concurrent storage engine transaction (ticket) availability of 0 for a prolonged period of time does not indicate an overload.
  • 6.0 and earlier: A large number of queued operations that persists for a prolonged period of time likely indicates an overload. A concurrent storage engine transaction (ticket) availability of 0 for a prolonged period of time likely indicates an overload.

queues.execution.write
A document that returns Queue Information for concurrent write transactions (write tickets) allowed into the WiredTiger storage engine.
queues.execution.read
A document that returns Queue Information for concurrent read transactions (read tickets) allowed into the WiredTiger storage engine.
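Per the overload guidance above, a persistently long queue (rather than zero available tickets) is the signal to watch on 7.0 and later. A sketch that summarizes one side's ticket usage from hypothetical sample values, following the queue structure shown earlier:

```javascript
// Summarize execution ticket usage and queue depth for one side
// (read or write) of queues.execution. Sample values are hypothetical.
function ticketSummary(side) {
  return {
    inUse: side.out,
    utilization: side.out / side.totalTickets,
    queued: side.normalPriority.queueLength
  };
}

const write = {
  out: 100, available: 28, totalTickets: 128,
  normalPriority: { queueLength: 12 }
};

console.log(ticketSummary(write));
// { inUse: 100, utilization: 0.78125, queued: 12 }
```

Sampling this over time and alerting on a sustained nonzero queued value is more reliable than alerting on available reaching 0.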
queues.execution.monitor

A document that returns monitoring metrics for adjustments that the system has made to the number of allowed concurrent transactions (tickets).

queues.execution.monitor.timesDecreased

The number of times the queue size was decreased.

queues.execution.monitor.timesIncreased

The number of times the queue size was increased.

queues.execution.monitor.totalAmountDecreased

The total number of operations the queue decreased by.

queues.execution.monitor.totalAmountIncreased

The total number of operations the queue increased by.

queues.execution.monitor.resizeDurationMicros

The cumulative time in microseconds that the system spent resizing the queue.

queues.ingress

New in version 8.0.

A document that returns Queue Information for ingress admission control. Use these values to protect and mitigate against resource overload by limiting the number of operations waiting for entry to the database from the network.

The maximum number of allowed concurrent operations is constrained by ingressAdmissionControllerTicketPoolSize.

queues.ingressSessionEstablishment

New in version 8.2. (also available in 8.1.1, 8.0.12, and 7.0.23)

A document that contains information about the ingress session establishment queue. This includes metrics related to connections established and processed through the connection establishment rate limiter.

queues.ingressSessionEstablishment.addedToQueue

New in version 8.2. (also available in 8.1.1, 8.0.12, and 7.0.23)

The number of incoming connections that the server adds to the ingress session establishment queue. This metric tracks connections that are processed through the rate limiter queue when the rate limiter is enabled.

queues.ingressSessionEstablishment.removedFromQueue

New in version 8.2. (also available in 8.1.1, 8.0.12, and 7.0.23)

The number of incoming connections that the server removes from the ingress session establishment queue after acquiring a connection establishment token. This metric tracks connections that have completed their wait in the rate limiter queue.

queues.ingressSessionEstablishment.interruptedInQueue

New in version 8.2. (also available in 8.1.1, 8.0.12, and 7.0.23)

The number of incoming connections that halt while waiting in the queue, typically due to client disconnects or server shutdown.

queues.ingressSessionEstablishment.rejectedAdmissions

New in version 8.2. (also available in 8.1.1, 8.0.12, and 7.0.23)

The number of incoming connection attempts that the server rejects because the queue depth exceeded the ingressConnectionEstablishmentMaxQueueDepth limit. When this happens, the server immediately closes the connection rather than queuing it.

queues.ingressSessionEstablishment.exemptedAdmissions

New in version 8.2. (also available in 8.1.1, 8.0.12, and 7.0.23)

The number of incoming connection attempts that bypass the rate limiter due to being on the ingressConnectionEstablishmentRateLimiterBypass list. Connections from IP addresses or CIDR ranges specified in ingressConnectionEstablishmentRateLimiterBypass are not subject to rate limiting.

queues.ingressSessionEstablishment.successfulAdmissions

New in version 8.2. (also available in 8.1.1, 8.0.12, and 7.0.23)

The total number of incoming connection attempts that the rate limiter successfully processes, either immediately or after waiting in the queue.

queues.ingressSessionEstablishment.attemptedAdmissions

New in version 8.2. (also available in 8.1.1, 8.0.12, and 7.0.23)

The total number of incoming connection attempts on the rate limiter.

queues.ingressSessionEstablishment.averageTimeQueuedMicros

New in version 8.2. (also available in 8.1.1, 8.0.12, and 7.0.23)

The average time in microseconds that connections spend waiting in the queue before the server processes them. This metric uses an exponentially-weighted moving average formula and can be used to tune the ingressConnectionEstablishmentMaxQueueDepth. The value roughly equals (maxQueueDepth / establishRatePerSec) * 1e6.
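The relationship between queue depth, establishment rate, and expected wait can be checked with simple arithmetic. The parameter values below are hypothetical:

```javascript
// Rough expected queue wait, per the approximation above:
// averageTimeQueuedMicros ≈ (maxQueueDepth / establishRatePerSec) * 1e6
const maxQueueDepth = 500;        // hypothetical ingressConnectionEstablishmentMaxQueueDepth
const establishRatePerSec = 100;  // hypothetical establishment rate limit

const expectedWaitMicros = (maxQueueDepth / establishRatePerSec) * 1e6;
console.log(expectedWaitMicros); // 5000000 microseconds, i.e. ~5 seconds of queueing at saturation
```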

queues.ingressSessionEstablishment.totalAvailableTokens

New in version 8.2. (also available in 8.1.1, 8.0.12, and 7.0.23)

The current number of available tokens in the token bucket. This represents the capacity for immediately processing new connections without queuing. When this value is 0, new connections must wait in the queue, or are rejected if the queue is full.

Queue Information

out: Long("<num>"),
available: Long("<num>"),
totalTickets: Long("<num>"),
exempt: {
addedToQueue: Long("<num>"),
removedFromQueue: Long("<num>"),
queueLength: Long("<num>"),
startedProcessing: Long("<num>"),
processing: Long("<num>"),
finishedProcessing: Long("<num>"),
totalTimeProcessingMicros: Long("<num>"),
canceled: Long("<num>"),
newAdmissions: Long("<num>"),
totalTimeQueuedMicros: Long("<num>")
},
normalPriority: {
addedToQueue: Long("<num>"),
removedFromQueue: Long("<num>"),
queueLength: Long("<num>"),
startedProcessing: Long("<num>"),
processing: Long("<num>"),
finishedProcessing: Long("<num>"),
totalTimeProcessingMicros: Long("<num>"),
canceled: Long("<num>"),
newAdmissions: Long("<num>"),
totalTimeQueuedMicros: Long("<num>")
}
out
The number of currently held tickets.
available
The number of available tickets.
totalTickets
The size of the ticket pool.
exempt
A document that returns metrics for operations exempt from the queue.
exempt.startedProcessing
The total number of operations that acquired an admission ticket.
exempt.processing
The total number of operations that are currently being processed.
exempt.finishedProcessing
The total number of operations that released their admission ticket.
exempt.totalTimeProcessingMicros
The total time, in microseconds, that operations held their admission tickets.
exempt.canceled
The total number of operations that timed out in the queue.
exempt.newAdmissions
The total number of new admissions to the queue.
exempt.totalTimeQueuedMicros
The total time, in microseconds, that operations spent waiting in the queue.
normalPriority
A document that returns metrics for operations subject to the queue.
normalPriority.addedToQueue
The total number of operations added to the queue.
normalPriority.removedFromQueue
The total number of operations removed from the queue.
normalPriority.queueLength
The total number of operations in the queue.
normalPriority.startedProcessing
The total number of operations that acquired an admission ticket.
normalPriority.processing
The total number of operations that are currently being processed.
normalPriority.finishedProcessing
The total number of operations that released their admission ticket.
normalPriority.totalTimeProcessingMicros
The total time, in microseconds, that operations held their admission tickets.
normalPriority.canceled
The total number of operations that timed out in the queue.
normalPriority.newAdmissions
The total number of new admissions to the queue.
normalPriority.totalTimeQueuedMicros
The total time, in microseconds, that operations spent waiting in the queue.
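The counters in a Queue Information document can be combined into simple health signals, such as ticket-pool utilization and the mean time an admitted operation spent queued. The sketch below uses a hypothetical sample document:

```javascript
// Minimal sketch: derive utilization and average queueing latency from a
// Queue Information document. The sample values are hypothetical.
const writeQueue = {
  out: 120,
  available: 8,
  totalTickets: 128,
  normalPriority: {
    finishedProcessing: 4000,
    totalTimeQueuedMicros: 2000000,
    queueLength: 12
  }
};

// Fraction of the ticket pool currently held.
const utilization = writeQueue.out / writeQueue.totalTickets; // 0.9375

// Mean time an admitted operation spent waiting, in microseconds.
const avgQueuedMicros =
  writeQueue.normalPriority.totalTimeQueuedMicros /
  writeQueue.normalPriority.finishedProcessing; // 500

console.log({ utilization, avgQueuedMicros });
```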

querySettings

New in version 8.0.

querySettings: {
count: <num>,
rejectCount: <num>,
size: <num>
}
querySettings

Document with configuration counts and usage for query settings.

Starting in MongoDB 8.0, use query settings instead of adding index filters. Index filters are deprecated starting in MongoDB 8.0.

Query settings have more functionality than index filters. Also, index filters aren't persistent and you cannot easily create index filters for all cluster nodes. To add query settings and explore examples, see setQuerySettings.

querySettings.count
Total number of query settings.
querySettings.rejectCount
Total number of query settings that have the reject field set to true. To set the reject field, use setQuerySettings.
querySettings.size
Total size in bytes for query settings.

readConcernCounters

New in version 5.0.

readConcernCounters : {
nonTransactionOps : {
none : Long("<num>"),
noneInfo : {
CWRC : {
local : Long("<num>"),
available : Long("<num>"),
majority : Long("<num>")
},
implicitDefault : {
local : Long("<num>"),
available : Long("<num>")
}
},
local : Long("<num>"),
available : Long("<num>"),
majority : Long("<num>"),
snapshot : {
withClusterTime : Long("<num>"),
withoutClusterTime : Long("<num>")
},
linearizable : Long("<num>")
},
transactionOps : {
none : Long("<num>"),
noneInfo : {
CWRC : {
local : Long("<num>"),
available : Long("<num>"),
majority : Long("<num>")
},
implicitDefault : {
local : Long("<num>"),
available : Long("<num>")
}
},
local : Long("<num>"),
majority : Long("<num>"),
snapshot : {
withClusterTime : Long("<num>"),
withoutClusterTime : Long("<num>")
}
}
},
readConcernCounters
A document that reports on the read concern level specified by query operations. This document contains the readConcernCounters.nonTransactionOps and readConcernCounters.transactionOps documents.
readConcernCounters.nonTransactionOps
A document that reports on the read concern level specified by non-transaction query operations performed after the database server last started.
readConcernCounters.nonTransactionOps.none

Number of non-transaction query operations that did not specify a read concern level and instead used either the global default read concern configuration set with the setDefaultRWConcern command or the implicit default read concern.

readConcernCounters.nonTransactionOps.noneInfo

The number of non-transaction query operations that use the global default read concern and an implicit-default read concern.

The sum of the values in readConcernCounters.nonTransactionOps.noneInfo should equal the value of readConcernCounters.nonTransactionOps.none.
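This invariant can be sanity-checked directly against the counters. The sample document below is hypothetical:

```javascript
// Check that the noneInfo counters sum to the none counter, as stated above.
// Sample (hypothetical) nonTransactionOps values.
const nonTransactionOps = {
  none: 70,
  noneInfo: {
    CWRC: { local: 10, available: 0, majority: 25 },
    implicitDefault: { local: 30, available: 5 }
  }
};

function sumLeaves(obj) {
  // Recursively add up every numeric leaf in the document.
  return Object.values(obj).reduce(
    (acc, v) => acc + (typeof v === "number" ? v : sumLeaves(v)),
    0
  );
}

console.log(sumLeaves(nonTransactionOps.noneInfo) === nonTransactionOps.none); // true
```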

readConcernCounters.nonTransactionOps.local
Number of non-transaction query operations that specified the "local" read concern level.
readConcernCounters.nonTransactionOps.available
Number of non-transaction query operations that specified the "available" read concern level.
readConcernCounters.nonTransactionOps.majority
Number of non-transaction query operations that specified the "majority" read concern level.
readConcernCounters.nonTransactionOps.snapshot
Document containing non-transaction query operations that specified the "snapshot" read concern level.
readConcernCounters.nonTransactionOps.snapshot.withClusterTime
Number of non-transaction query operations that specified the "snapshot" read concern level and the cluster time, which specified a point in time.
readConcernCounters.nonTransactionOps.snapshot.withoutClusterTime
Number of non-transaction query operations that specified the "snapshot" read concern level without the cluster time, which means a point in time was omitted and the server will read the most recently committed snapshot available to the node.
readConcernCounters.nonTransactionOps.linearizable
Number of non-transaction query operations that specified the "linearizable" read concern level.
readConcernCounters.transactionOps
A document that reports on the read concern level specified by transaction query operations performed after the database server last started.
readConcernCounters.transactionOps.none
Number of transaction query operations that did not specify a read concern level and instead used the default read concern level or the global default read or write concern configuration added with the setDefaultRWConcern command.
readConcernCounters.transactionOps.noneInfo
Information about the global default read concern and implicit-default read concern used by transaction query operations.
readConcernCounters.transactionOps.local
Number of transaction query operations that specified the "local" read concern level.
readConcernCounters.transactionOps.available
Number of transaction query operations that specified the "available" read concern level.
readConcernCounters.transactionOps.majority
Number of transaction query operations that specified the "majority" read concern level.
readConcernCounters.transactionOps.snapshot
Document containing transaction query operations that specified the "snapshot" read concern level.
readConcernCounters.transactionOps.snapshot.withClusterTime
Number of transaction query operations that specified the "snapshot" read concern level and the cluster time, which specified a point in time.
readConcernCounters.transactionOps.snapshot.withoutClusterTime
Number of transaction query operations that specified the "snapshot" read concern level without the cluster time, which means a point in time was omitted and the server will read the most recently committed snapshot available to the node.

readPreferenceCounters

Available starting in MongoDB 7.2 (and 7.0.3, 6.0.11).

Available on mongod only.

readPreferenceCounters : {
executedOnPrimary : {
primary : {
internal : Long("<num>"),
external : Long("<num>")
},
primaryPreferred : {
internal : Long("<num>"),
external : Long("<num>")
},
secondary : {
internal : Long("<num>"),
external : Long("<num>")
},
secondaryPreferred : {
internal : Long("<num>"),
external : Long("<num>")
},
nearest : {
internal : Long("<num>"),
external : Long("<num>")
},
tagged : {
internal : Long("<num>"),
external : Long("<num>")
}
},
executedOnSecondary : {
primary : {
internal : Long("<num>"),
external : Long("<num>")
},
primaryPreferred : {
internal : Long("<num>"),
external : Long("<num>")
},
secondary : {
internal : Long("<num>"),
external : Long("<num>")
},
secondaryPreferred : {
internal : Long("<num>"),
external : Long("<num>")
},
nearest : {
internal : Long("<num>"),
external : Long("<num>")
},
tagged : {
internal : Long("<num>"),
external : Long("<num>")
}
}
}
readPreferenceCounters

Available on mongod only.

A document that reports the number of operations received by this mongod node with the specified read preference.

The tagged sub-field refers to any read preference passed in with a tag.

readPreferenceCounters.executedOnPrimary

Available on mongod only.

A document that counts how many internal and external read preference operations the node received while serving as the primary.

readPreferenceCounters.executedOnSecondary

Available on mongod only.

A document that counts how many internal and external read preference operations the node received while serving as a secondary.
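These counters make it easy to measure, for example, how much client (external) read traffic a node served while it was a secondary. A sketch over a hypothetical readPreferenceCounters document:

```javascript
// Sketch: total external reads served while a secondary, from a
// readPreferenceCounters-style document (values hypothetical).
const readPreferenceCounters = {
  executedOnSecondary: {
    primaryPreferred: { internal: 2, external: 5 },
    secondary: { internal: 1, external: 40 },
    secondaryPreferred: { internal: 0, external: 12 },
    nearest: { internal: 3, external: 8 },
    tagged: { internal: 0, external: 4 }
  }
};

// Sum the external counters across all read preference modes.
const externalSecondaryReads = Object.values(
  readPreferenceCounters.executedOnSecondary
).reduce((acc, c) => acc + c.external, 0);

console.log(externalSecondaryReads); // 69
```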

repl

repl : {
hosts : [
<string>,
<string>,
<string>
],
setName : <string>,
setVersion : <num>,
isWritablePrimary : <boolean>,
secondary : <boolean>,
primary : <hostname>,
me : <hostname>,
electionId : ObjectId(""),
userWriteBlockReason : <num>,
userWriteBlockModeCounters: {
Unspecified: <num>,
ClusterToClusterMigrationInProgress: <num>,
DiskUseThresholdExceeded: <num>
},
primaryOnlyServices: {
ReshardingRecipientService: { state: <string>, numInstances: <num> },
RenameCollectionParticipantService: { state: <string>, numInstances: <num> },
ShardingDDLCoordinator: { state: <string>, numInstances: <num> },
ReshardingDonorService: { state: <string>, numInstances: <num> }
},
rbid : <num>,
replicationProgress : [
{
rid : <ObjectId>,
optime : { ts: <timestamp>, term: <num> },
host : <hostname>,
memberId : <num>
},
...
],
timestamps : {
oldestTimestamp: <timestamp>
}
}
repl
A document that reports on the replica set configuration. The repl section only appears when the current host is a member of a replica set. See Replication for more information on replication.
repl.hosts
An array of the current replica set members' hostname and port information ("host:port").
repl.setName
A string with the name of the current replica set. This value reflects the --replSet command line argument, or replSetName value in the configuration file.
repl.isWritablePrimary
A boolean that indicates whether the current node is the primary of the replica set.
repl.secondary
A boolean that indicates whether the current node is a secondary member of the replica set.
repl.primary
The hostname and port information ("host:port") of the current primary member of the replica set.
repl.me
The hostname and port information ("host:port") for the current member of the replica set.
repl.userWriteBlockReason

A numeric value that represents the reason why user writes are blocked. This field is relevant only when you set userWriteBlockMode to 2 to enable write-blocking.

Possible values are:

  • 0: Unspecified
  • 1: ClusterToClusterMigrationInProgress
  • 2: DiskUseThresholdExceeded

This field corresponds to the reason parameter specified in the setUserWriteBlockMode command when write-blocking is enabled.
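The numeric code can be mapped back to its label for display in monitoring output. A minimal sketch, using the values listed above:

```javascript
// Map the numeric userWriteBlockReason code to its label, per the list above.
const USER_WRITE_BLOCK_REASONS = {
  0: "Unspecified",
  1: "ClusterToClusterMigrationInProgress",
  2: "DiskUseThresholdExceeded"
};

function describeBlockReason(code) {
  // Fall back to "Unknown" for codes not in the table.
  return USER_WRITE_BLOCK_REASONS[code] ?? "Unknown";
}

console.log(describeBlockReason(2)); // "DiskUseThresholdExceeded"
```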

repl.userWriteBlockModeCounters
A document that contains counters tracking the number of times write-blocking is enabled with different reasons since the server started.
repl.userWriteBlockModeCounters.Unspecified
The number of times write-blocking is enabled with the reason Unspecified since the server started.
repl.userWriteBlockModeCounters.ClusterToClusterMigrationInProgress
The number of times write-blocking is enabled with the reason ClusterToClusterMigrationInProgress since the server started.
repl.userWriteBlockModeCounters.DiskUseThresholdExceeded
The number of times write-blocking is enabled with the reason DiskUseThresholdExceeded since the server started.
repl.primaryOnlyServices

Document that contains the number and status of instances of each primary service active on the server. Primary services can only start when a server is primary but can continue running to completion after the server changes state.

New in version 5.0.

repl.primaryOnlyServices.ReshardingRecipientService

Document that contains the state and number of instances of the ReshardingRecipientService.

Recipients are the shards that would own the chunks as a result of the resharding operation, according to the new shard key and zones.

The resharding coordinator instructs each donor and recipient shard primary to rename the temporary sharded collection. The temporary collection becomes the new resharded collection.

New in version 5.0.

repl.primaryOnlyServices.RenameCollectionParticipantService

Document that contains the state and number of instances of the RenameCollectionParticipantService.

The RenameCollectionParticipantService ensures that, after a shard receives a renameCollection request, the shard is able to resume the local rename in case of system failure.

New in version 5.0.

repl.primaryOnlyServices.ShardingDDLCoordinator

Document that contains the state and number of instances of the ShardingDDLCoordinator.

The ShardingDDLCoordinator service manages DDL operations for primary databases such as: create database, drop database, renameCollection.

The ShardingDDLCoordinator ensures that one DDL operation for each database can happen at any one specific point in time within a sharded cluster.

New in version 5.0.

repl.primaryOnlyServices.ReshardingDonorService

Document that contains the state and number of instances of the ReshardingDonorService.

Donors are the shards that own chunks of the sharded collection before the rename operation completes.

The resharding coordinator instructs each donor and recipient shard primary to rename the temporary sharded collection. The temporary collection becomes the new resharded collection.

New in version 5.0.

repl.rbid
Rollback identifier. Used to determine if a rollback has happened for this mongod instance.
repl.replicationProgress

An array with one document for each member of the replica set that reports replication progress to this member. Typically this is the primary, or secondaries if using chained replication.

To include this output, you must pass the repl option to the serverStatus command, as in the following:

db.serverStatus({ "repl": 1 })
db.runCommand({ "serverStatus": 1, "repl": 1 })

The content of the repl.replicationProgress section depends on the source of each member's replication. This section supports internal operation and is for internal and diagnostic use only.

repl.replicationProgress[n].rid
An ObjectId used as an ID for the members of the replica set. For internal use only.
repl.replicationProgress[n].optime
Information regarding the last operation from the oplog that the member applied, as reported from this member.
repl.replicationProgress[n].host
The name of the host in [hostname]:[port] format for the member of the replica set.
repl.replicationProgress[n].memberId
The integer identifier for this member of the replica set.

security

security : {
authentication : {
saslSupportedMechsReceived : <num>,
mechanisms : {
MONGODB-X509 : {
speculativeAuthenticate : {
received : Long("<num>"),
successful : Long("<num>")
},
authenticate : {
received : Long("<num>"),
successful : Long("<num>")
}
},
SCRAM-SHA-1 : {
speculativeAuthenticate : {
received : Long("<num>"),
successful : Long("<num>")
},
authenticate : {
received : Long("<num>"),
successful : Long("<num>")
}
},
SCRAM-SHA-256 : {
speculativeAuthenticate : {
received : Long("<num>"),
successful : Long("<num>")
},
authenticate : {
received : Long("<num>"),
successful : Long("<num>")
}
}
}
},
SSLServerSubjectName: <string>,
SSLServerHasCertificateAuthority: <boolean>,
SSLServerCertificateExpirationDate: <date>
},
security

A document that reports on:

  • The number of times a given authentication mechanism has been used to authenticate against the mongod or mongos instance.
  • The mongod / mongos instance's TLS/SSL certificate. (Only appears for mongod or mongos instance with support for TLS)
security.authentication.saslSupportedMechsReceived

New in version 5.0.

The number of times a hello request includes a valid hello.saslSupportedMechs field.

security.authentication.mechanisms

A document that reports on the number of times a given authentication mechanism has been used to authenticate against the mongod or mongos instance. The values in the document distinguish standard authentication and speculative authentication. [1]

Note

The fields in the mechanisms document depend on the configuration of the authenticationMechanisms parameter. The mechanisms document includes a field for each authentication mechanism supported by your mongod or mongos instance.

The following example shows the shape of the mechanisms document for a deployment that only supports X.509 authentication.

security.authentication.mechanisms.MONGODB-X509

A document that reports on the number of times X.509 has been used to authenticate against the mongod or mongos instance.

Includes total number of X.509 authentication attempts and the subset of those attempts which were speculative. [1]

security.authentication.mechanisms.MONGODB-X509.speculativeAuthenticate.received
Number of speculative authentication attempts received using X.509. Includes both successful and failed speculative authentication attempts. [1]
security.authentication.mechanisms.MONGODB-X509.speculativeAuthenticate.successful
Number of successful speculative authentication attempts received using X.509. [1]
security.authentication.mechanisms.MONGODB-X509.authenticate.received
Number of successful and failed authentication attempts received using X.509. This value includes speculative authentication attempts received using X.509.
security.authentication.mechanisms.MONGODB-X509.authenticate.successful
Number of successful authentication attempts received using X.509. This value includes successful speculative authentication attempts which used X.509.
[1] Speculative authentication minimizes the number of network round trips during the authentication process to optimize performance.
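Because authenticate.received includes speculative attempts, the standard (non-speculative) attempts can be derived by subtraction. A sketch over a hypothetical mechanism document:

```javascript
// Sketch: success rate and non-speculative attempts for one mechanism from
// security.authentication.mechanisms (sample values hypothetical).
// authenticate.received includes the speculative attempts, per the
// field descriptions above.
const scramSha256 = {
  speculativeAuthenticate: { received: 80, successful: 76 },
  authenticate: { received: 100, successful: 92 }
};

const overallSuccessRate =
  scramSha256.authenticate.successful / scramSha256.authenticate.received; // 0.92

// Standard (non-speculative) attempts are the remainder.
const standardReceived =
  scramSha256.authenticate.received -
  scramSha256.speculativeAuthenticate.received; // 20

console.log({ overallSuccessRate, standardReceived });
```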
security.SSLServerSubjectName
The subject name associated with the mongod / mongos instance's TLS/SSL certificate.
security.SSLServerHasCertificateAuthority

A boolean that is:

  • true when the mongod / mongos instance's TLS/SSL certificate is associated with a certificate authority.
  • false when the TLS/SSL certificate is self-signed.
security.SSLServerCertificateExpirationDate
The expiration date and time of the mongod / mongos instance's TLS/SSL certificate.

sharding

{
configsvrConnectionString : 'csRS/cfg1.example.net:27019,cfg2.example.net:27019,cfg3.example.net:27019',
lastSeenConfigServerOpTime : {
ts : <timestamp>,
t : Long("<num>")
},
maxChunkSizeInBytes : Long("<num>")
}
sharding
A document with data regarding the sharded cluster. The lastSeenConfigServerOpTime is present only for a mongos or a shard member, not for a config server.
sharding.configsvrConnectionString
The connection string for the config servers.
sharding.lastSeenConfigServerOpTime

The latest optime of the CSRS primary that the mongos or the shard member has seen. The optime document includes:

  • ts, the Timestamp of the operation.
  • t, the term in which the operation was originally generated on the primary.

The lastSeenConfigServerOpTime is present only if the sharded cluster uses CSRS.

sharding.maxChunkSizeInBytes
The maximum size limit for a range to migrate. If this value has been updated recently on the config server, the maxChunkSizeInBytes may not reflect the most recent value.

shardingStatistics

When run on a member of a shard:

shardingStatistics : {
countStaleConfigErrors : Long("<num>"),
countDonorMoveChunkStarted : Long("<num>"),
countDonorMoveChunkCommitted : Long("<num>"),
countDonorMoveChunkAborted : Long("<num>"),
totalDonorMoveChunkTimeMillis : Long("<num>"),
totalDonorChunkCloneTimeMillis : Long("<num>"),
totalCriticalSectionCommitTimeMillis : Long("<num>"),
totalCriticalSectionTimeMillis : Long("<num>"),
countDocsClonedOnRecipient : Long("<num>"),
countBytesClonedOnRecipient : Long("<num>"),
countDocsClonedOnCatchUpOnRecipient : Long("<num>"),
countBytesClonedOnCatchUpOnRecipient : Long("<num>"),
countDocsClonedOnDonor : Long("<num>"),
countRecipientMoveChunkStarted : Long("<num>"),
countDocsDeletedByRangeDeleter : Long("<num>"),
countDonorMoveChunkLockTimeout : Long("<num>"),
unfinishedMigrationFromPreviousPrimary : Long("<num>"),
chunkMigrationConcurrency : Long("<num>"),
countTransitionToDedicatedConfigServerStarted : Long("<num>"), // Added in MongoDB 8.0
countTransitionToDedicatedConfigServerCompleted : Long("<num>"), // Added in MongoDB 8.0
countTransitionFromDedicatedConfigServerCompleted : Long("<num>"), // Added in MongoDB 8.0
catalogCache : {
numDatabaseEntries : Long("<num>"),
numCollectionEntries : Long("<num>"),
countStaleConfigErrors : Long("<num>"),
totalRefreshWaitTimeMicros : Long("<num>"),
numActiveIncrementalRefreshes : Long("<num>"),
countIncrementalRefreshesStarted : Long("<num>"),
numActiveFullRefreshes : Long("<num>"),
countFullRefreshesStarted : Long("<num>"),
countFailedRefreshes : Long("<num>")
},
rangeDeleterTasks : <num>,
configServerInShardCache : <boolean>, // Added in MongoDB 8.0
resharding : {
countStarted : Long("1"),
countSucceeded : Long("1"),
countFailed : Long("0"),
countCanceled : Long("0"),
lastOpEndingChunkImbalance : Long("0"),
active : {
documentsCopied : Long("0"),
bytesCopied : Long("0"),
countWritesToStashCollections : Long("0"),
countWritesDuringCriticalSection : Long("0"),
countReadsDuringCriticalSection : Long("0"),
oplogEntriesFetched : Long("0"),
oplogEntriesApplied : Long("0"),
insertsApplied : Long("0"),
updatesApplied : Long("0"),
deletesApplied : Long("0")
},
oldestActive : {
coordinatorAllShardsHighestRemainingOperationTimeEstimatedMillis : Long("0"),
coordinatorAllShardsLowestRemainingOperationTimeEstimatedMillis : Long("0"),
recipientRemainingOperationTimeEstimatedMillis : Long("0")
},
latencies : {
collectionCloningTotalRemoteBatchRetrievalTimeMillis : Long("0"),
collectionCloningTotalRemoteBatchesRetrieved : Long("0"),
collectionCloningTotalLocalInsertTimeMillis : Long("0"),
collectionCloningTotalLocalInserts : Long("0"),
oplogFetchingTotalRemoteBatchRetrievalTimeMillis : Long("0"),
oplogFetchingTotalRemoteBatchesRetrieved : Long("0"),
oplogFetchingTotalLocalInsertTimeMillis : Long("0"),
oplogFetchingTotalLocalInserts : Long("0"),
oplogApplyingTotalLocalBatchRetrievalTimeMillis : Long("0"),
oplogApplyingTotalLocalBatchesRetrieved : Long("0"),
oplogApplyingTotalLocalBatchApplyTimeMillis : Long("0"),
oplogApplyingTotalLocalBatchesApplied : Long("0")
},
currentInSteps : {
countInstancesInCoordinatorState1Initializing : Long("0"),
countInstancesInCoordinatorState2PreparingToDonate : Long("0"),
countInstancesInCoordinatorState3Cloning : Long("0"),
countInstancesInCoordinatorState4Applying : Long("0"),
countInstancesInCoordinatorState5BlockingWrites : Long("0"),
countInstancesInCoordinatorState6Aborting : Long("0"),
countInstancesInCoordinatorState7Committing : Long("0"),
countInstancesInRecipientState1AwaitingFetchTimestamp : Long("0"),
countInstancesInRecipientState2CreatingCollection : Long("0"),
countInstancesInRecipientState3Cloning : Long("0"),
countInstancesInRecipientState4Applying : Long("0"),
countInstancesInRecipientState5Error : Long("0"),
countInstancesInRecipientState6StrictConsistency : Long("0"),
countInstancesInRecipientState7Done : Long("0"),
countInstancesInDonorState1PreparingToDonate : Long("0"),
countInstancesInDonorState2DonatingInitialData : Long("0"),
countInstancesInDonorState3DonatingOplogEntries : Long("0"),
countInstancesInDonorState4PreparingToBlockWrites : Long("0"),
countInstancesInDonorState5Error : Long("0"),
countInstancesInDonorState6BlockingWrites : Long("0"),
countInstancesInDonorState7Done : Long("0")
}
}
}

When run on a mongos:

shardingStatistics : {
numHostsTargeted: {
find : {
allShards: Long("<num>"),
manyShards: Long("<num>"),
oneShard: Long("<num>"),
unsharded: Long("<num>")
},
insert: {
allShards: Long("<num>"),
manyShards: Long("<num>"),
oneShard: Long("<num>"),
unsharded: Long("<num>")
},
update: {
allShards: Long("<num>"),
manyShards: Long("<num>"),
oneShard: Long("<num>"),
unsharded: Long("<num>")
},
delete: {
allShards: Long("<num>"),
manyShards: Long("<num>"),
oneShard: Long("<num>"),
unsharded: Long("<num>")
},
aggregate: {
allShards: Long("<num>"),
manyShards: Long("<num>"),
oneShard: Long("<num>"),
unsharded: Long("<num>")
}
},
catalogCache : {
numDatabaseEntries : Long("<num>"),
numCollectionEntries : Long("<num>"),
countStaleConfigErrors : Long("<num>"),
totalRefreshWaitTimeMicros : Long("<num>"),
numActiveIncrementalRefreshes : Long("<num>"),
countIncrementalRefreshesStarted : Long("<num>"),
numActiveFullRefreshes : Long("<num>"),
countFullRefreshesStarted : Long("<num>"),
countFailedRefreshes : Long("<num>")
},
configServerInShardCache : <boolean> // Added in MongoDB 8.0
}
shardingStatistics
A document which contains metrics on metadata refresh on sharded clusters.
shardingStatistics.countStaleConfigErrors

The total number of times that threads hit a stale config exception. Since a stale config exception triggers a refresh of the metadata, this number is roughly proportional to the number of metadata refreshes.

Only present when run on a shard.

shardingStatistics.countDonorMoveChunkStarted

The total number of times that MongoDB starts the moveChunk command or moveRange command on the primary node of the shard as part of the range migration procedure. This count increases regardless of whether the chunk migrations succeed.

Only present when run on a shard.

shardingStatistics.countDonorMoveChunkCommitted

The total number of chunk migrations that MongoDB commits on the primary node of the shard.

The chunk migration is performed by moveChunk and moveRange commands in a range migration procedure.

Only available on a shard.

Available starting in MongoDB 7.1 (and 7.0, 6.3.2, 6.0.6, and 5.0.18).

shardingStatistics.countDonorMoveChunkAborted

The total number of chunk migrations that MongoDB aborts on the primary node of the shard.

The chunk migration is performed by moveChunk and moveRange commands in a range migration procedure.

Only available on a shard.

Available starting in MongoDB 7.1 (and 7.0, 6.3.2, 6.0.6, and 5.0.18).

shardingStatistics.totalDonorMoveChunkTimeMillis

Cumulative time in milliseconds to move chunks from the current shard to another shard. For each chunk migration, the time starts when a moveRange or moveChunk command starts, and ends when the chunk is moved to another shard in a range migration procedure.

Only available on a shard.

Available starting in MongoDB 7.1 (and 7.0, 6.3.2, 6.0.6, and 5.0.18).

shardingStatistics.totalDonorChunkCloneTimeMillis

The cumulative time, in milliseconds, that the clone phase of the range migration procedure takes on the primary node of the shard. Specifically, for each migration on this shard, the tracked time starts with the moveRange and moveChunk commands and ends before the destination shard enters a catchup phase to apply changes that occurred during the range migration procedure.

Only present when run on a shard.

shardingStatistics.totalCriticalSectionCommitTimeMillis

The cumulative time, in milliseconds, that the update metadata phase of the range migrations procedure takes on the primary node of the shard. During the update metadata phase, MongoDB blocks all operations on the collection.

Only present when run on a shard.

shardingStatistics.totalCriticalSectionTimeMillis

The cumulative time, in milliseconds, that the catch-up phase and the update metadata phase of the range migration procedure takes on the primary node of the shard.

To calculate the duration of the catch-up phase, subtract totalCriticalSectionCommitTimeMillis from totalCriticalSectionTimeMillis:

totalCriticalSectionTimeMillis - totalCriticalSectionCommitTimeMillis

Only present when run on a shard.
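The catch-up phase duration is not reported directly, so it has to be derived from the two critical-section counters. A minimal sketch of that subtraction, assuming a hypothetical serverStatus-shaped plain object (the numbers are illustrative, not real output):

```javascript
// Hypothetical sample of the shardingStatistics section; illustrative values.
const shardingStatistics = {
  totalCriticalSectionTimeMillis: 5200,
  totalCriticalSectionCommitTimeMillis: 3100,
};

// Catch-up phase duration = total critical-section time minus the
// update metadata (commit) portion.
const catchUpMillis =
  shardingStatistics.totalCriticalSectionTimeMillis -
  shardingStatistics.totalCriticalSectionCommitTimeMillis;

console.log(catchUpMillis); // 2100
```

In a live deployment the same object would come from db.serverStatus().shardingStatistics.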

shardingStatistics.countDocsClonedOnRecipient

The cumulative, always-increasing count of documents that MongoDB clones on the primary node of the recipient shard.

Only present when run on a shard.

shardingStatistics.countBytesClonedOnRecipient

The cumulative number of bytes that MongoDB clones on the primary node of the recipient shard during the range migration procedure.

For details about data synchronization, see Replica Set Data Synchronization.

Only available on a shard.

Available starting in MongoDB 7.1 (and 7.0, 6.3.2, 6.0.6, and 5.0.18).

shardingStatistics.countDocsClonedOnCatchUpOnRecipient

The cumulative number of documents that MongoDB clones on the primary node of the recipient shard during the catch-up phase of the range migration procedure.

For details about data synchronization, see Replica Set Data Synchronization.

Only available on a shard.

Available starting in MongoDB 7.1 (and 7.0, 6.3.2, 6.0.6, and 5.0.18).

shardingStatistics.countBytesClonedOnCatchUpOnRecipient

The cumulative number of bytes that MongoDB clones on the primary node of the recipient shard during the catch-up phase of the range migration procedure.

For details about data synchronization, see Replica Set Data Synchronization.

Only available on a shard.

Available starting in MongoDB 7.1 (and 7.0, 6.3.2, 6.0.6, and 5.0.18).

shardingStatistics.countDocsClonedOnDonor

The cumulative, always-increasing count of documents that MongoDB clones on the primary node of the donor shard.

Only present when run on a shard.

shardingStatistics.countRecipientMoveChunkStarted

Cumulative, always-increasing count of chunks this member, acting as the primary of the recipient shard, has started to receive (whether the move has succeeded or not).

Only present when run on a shard.

shardingStatistics.countDocsDeletedByRangeDeleter

The cumulative, always-increasing count of documents that MongoDB deletes on the primary node of the donor shard during chunk migration.

Only present when run on a shard.

Changed in version 7.1.

shardingStatistics.countDonorMoveChunkLockTimeout

The cumulative, always-increasing count of chunk migrations that MongoDB aborts on the primary node of the donor shard due to lock acquisition timeouts.

Only present when run on a shard.

shardingStatistics.unfinishedMigrationFromPreviousPrimary

The number of unfinished migrations left by the previous primary after an election. This value is only updated after the newly-elected mongod completes the transition to primary.

Only present when run on a shard.

shardingStatistics.chunkMigrationConcurrency

The number of threads on the source shard and the receiving shard for performing chunk migration operations.

Only present when run on a shard.

Available starting in MongoDB 6.3 (and 5.0.15).

shardingStatistics.catalogCache
A document with statistics about the cluster's routing information cache.
shardingStatistics.catalogCache.numDatabaseEntries
The total number of database entries that are currently in the catalog cache.
shardingStatistics.catalogCache.numCollectionEntries
The total number of collection entries (across all databases) that are currently in the catalog cache.
shardingStatistics.catalogCache.countStaleConfigErrors
The total number of times that threads hit stale config exception. A stale config exception triggers a refresh of the metadata.
shardingStatistics.catalogCache.totalRefreshWaitTimeMicros
The cumulative time, in microseconds, that threads had to wait for a refresh of the metadata.
shardingStatistics.catalogCache.numActiveIncrementalRefreshes
The number of incremental catalog cache refreshes that are currently waiting to complete.
shardingStatistics.catalogCache.countIncrementalRefreshesStarted
The cumulative number of incremental refreshes that have started.
shardingStatistics.catalogCache.numActiveFullRefreshes
The number of full catalog cache refreshes that are currently waiting to complete.
shardingStatistics.catalogCache.countFullRefreshesStarted
The cumulative number of full refreshes that have started.
shardingStatistics.catalogCache.countFailedRefreshes
The cumulative number of full or incremental refreshes that have failed.
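Because these catalog cache counters are cumulative, derived figures such as an average wait per refresh come from combining them. A sketch under the assumption of a hypothetical catalogCache snapshot (the values are made up for illustration):

```javascript
// Hypothetical catalogCache section of a serverStatus sample; illustrative values.
const catalogCache = {
  totalRefreshWaitTimeMicros: 900000,
  countIncrementalRefreshesStarted: 40,
  countFullRefreshesStarted: 10,
};

const refreshes =
  catalogCache.countIncrementalRefreshesStarted +
  catalogCache.countFullRefreshesStarted;

// Average time threads waited per started refresh, in microseconds.
const avgWaitMicros =
  refreshes > 0 ? catalogCache.totalRefreshWaitTimeMicros / refreshes : 0;

console.log(avgWaitMicros); // 18000
```

For trend monitoring, compute this over the delta between two samples rather than over the lifetime totals.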
shardingStatistics.countTransitionToDedicatedConfigServerStarted

Number of times the transitionToDedicatedConfigServer command has started.

Only present when run on a config server node.

New in version 8.0.

shardingStatistics.countTransitionToDedicatedConfigServerCompleted

Number of times the transitionToDedicatedConfigServer command has finished.

Only present when run on a config server node.

New in version 8.0.

shardingStatistics.countTransitionFromDedicatedConfigServerCompleted

Number of times the transitionFromDedicatedConfigServer command has finished.

Only present when run on a config server node.

New in version 8.0.

shardingStatistics.rangeDeleterTasks

The current total of the queued chunk range deletion tasks that are ready to run or are running as part of the range migration procedure.

Inspect the documents in the config.rangeDeletions collection for information about the chunk ranges pending deletion from a shard after a chunk migration.

Only present when run on a shard member.

shardingStatistics.configServerInShardCache
A boolean that indicates whether the config server is a config shard. This value periodically refreshes, so the value of configServerInShardCache might be stale for up to approximately one minute in a healthy cluster. If the node can't communicate with the config server, configServerInShardCache may remain stale for a longer period.
shardingStatistics.resharding

A document with statistics about resharding operations.

Each shard returns its own resharding operation statistics. If a shard is not involved in a resharding operation, then that shard will not contain statistics about the resharding operation.

Only present when run on a shard or config server.

New in version 5.0.

shardingStatistics.resharding.countStarted

The sum of countSucceeded, countFailed, and countCanceled. The sum is further incremented by 1 if a resharding operation has started but has not yet completed. Sum is set to 0 when mongod is started or restarted.

Only present when run on a shard or config server.

New in version 5.0.

shardingStatistics.resharding.countSucceeded

Number of successful resharding operations. Number is set to 0 when mongod is started or restarted.

Only present when run on a shard or config server.

New in version 5.0.

shardingStatistics.resharding.countFailed

Number of failed resharding operations. Number is set to 0 when mongod is started or restarted.

Only present when run on a shard or config server.

New in version 5.0.

shardingStatistics.resharding.countCanceled

Number of canceled resharding operations. Number is set to 0 when mongod is started or restarted.

Only present when run on a shard or config server.

New in version 5.0.

shardingStatistics.resharding.active.documentsCopied

Number of documents copied from donor shards to recipient shards for the current resharding operation. Number is set to 0 when a new resharding operation starts.

Only present when run on a shard or config server. Returns 0 on a config server.

New in version 5.0.

Updated in version 6.1.

shardingStatistics.resharding.active.bytesCopied

Number of bytes copied from donor shards to recipient shards for the current resharding operation. Number is set to 0 when a new resharding operation starts.

Only present when run on a shard or config server. Returns 0 on a config server.

New in version 5.0.

Updated in version 6.1.

shardingStatistics.resharding.active.countWritesToStashCollections

During resharding, the number of writes to the recipient stash collections.

New in version 6.1.

shardingStatistics.resharding.active.countWritesDuringCriticalSection

Number of writes performed in the critical section for the current resharding operation. The critical section prevents new incoming writes to the collection currently being resharded. Number is set to 0 when a new resharding operation starts.

Only present when run on a shard or config server. Returns 0 on a config server.

New in version 5.0.

Updated in version 6.1.

shardingStatistics.resharding.active.countReadsDuringCriticalSection

During resharding, the number of reads attempted during the donor's critical section.

New in version 6.1.

shardingStatistics.resharding.active.oplogEntriesFetched

Number of entries fetched from the oplog for the current resharding operation. Number is set to 0 when a new resharding operation starts.

Only present when run on a shard or config server. Returns 0 on a config server.

Updated in version 6.1.

shardingStatistics.resharding.active.oplogEntriesApplied

Number of entries applied to the oplog for the current resharding operation. Number is set to 0 when a new resharding operation starts.

Only present when run on a shard or config server. Returns 0 on a config server.

New in version 5.0.

Updated in version 6.1.

shardingStatistics.resharding.active.insertsApplied

The total number of insert operations applied during resharding.

New in version 6.1.

shardingStatistics.resharding.active.updatesApplied

The total number of update operations applied during resharding.

New in version 6.1.

shardingStatistics.resharding.active.deletesApplied

The total number of delete operations applied during resharding.

New in version 6.1.

shardingStatistics.resharding.oldestActive.coordinatorAllShardsHighestRemainingOperationTimeEstimatedMillis

Calculated across all shards, the highest estimate of the number of milliseconds remaining. If the time estimate cannot be computed, the value is set to -1.

New in version 6.1.

shardingStatistics.resharding.oldestActive.coordinatorAllShardsLowestRemainingOperationTimeEstimatedMillis

Calculated across all shards, the lowest estimate of the number of milliseconds remaining. If the time estimate cannot be computed, the value is set to -1.

New in version 6.1.

shardingStatistics.resharding.oldestActive.recipientRemainingOperationTimeEstimatedMillis

Estimated remaining time, in milliseconds, for the current resharding operation. Prior to resharding, or when the time cannot be calculated, the value is set to -1.

If a shard is involved in multiple resharding operations, this field contains the remaining time estimate for the oldest resharding operation where this shard is a recipient.

New in version 6.1.

shardingStatistics.resharding.oldestActive.totalOperationTimeElapsedMillis

Total elapsed time, in milliseconds, for the current resharding operation. Time is set to 0 when a new resharding operation starts.

Only present when run on a shard or config server. Returns 0 on a config server.

New in version 5.0.

shardingStatistics.resharding.latencies

Timing metrics for resharding operations.

New in version 6.1.

shardingStatistics.resharding.latencies.collectionCloningTotalRemoteBatchRetrievalTimeMillis

Total time recipients spent retrieving batches of documents from donors, in milliseconds.

New in version 6.1.

shardingStatistics.resharding.latencies.collectionCloningTotalRemoteBatchesRetrieved

Total number of batches of documents recipients retrieved from donors.

New in version 6.1.

shardingStatistics.resharding.latencies.collectionCloningTotalLocalInsertTimeMillis

Total time recipients spent inserting batches of documents from donors, in milliseconds.

New in version 6.1.

shardingStatistics.resharding.latencies.collectionCloningTotalLocalInserts

Total number of batches of documents from donors that recipients inserted.

New in version 6.1.

shardingStatistics.resharding.latencies.oplogFetchingTotalRemoteBatchRetrievalTimeMillis

Total time recipients spent retrieving batches of oplog entries from donors, in milliseconds.

New in version 6.1.

shardingStatistics.resharding.latencies.oplogFetchingTotalRemoteBatchesRetrieved

Total number of batches of oplog entries recipients retrieved from donors.

New in version 6.1.

shardingStatistics.resharding.latencies.oplogFetchingTotalLocalInsertTimeMillis

Total time recipients spent inserting batches of oplog entries from donors, in milliseconds.

New in version 6.1.

shardingStatistics.resharding.latencies.oplogFetchingTotalLocalInserts

Total number of batches of oplog entries from donors that recipients inserted.

New in version 6.1.

shardingStatistics.resharding.latencies.oplogApplyingTotalLocalBatchRetrievalTimeMillis

Total time recipients spent retrieving batches of oplog entries that were inserted during fetching, in milliseconds.

New in version 6.1.

shardingStatistics.resharding.latencies.oplogApplyingTotalLocalBatchesRetrieved

Total number of batches of oplog entries that were inserted during fetching that recipients retrieved.

New in version 6.1.

shardingStatistics.resharding.latencies.oplogApplyingTotalLocalBatchApplyTimeMillis

Total time recipients spent applying batches of oplog entries, in milliseconds.

New in version 6.1.

shardingStatistics.resharding.latencies.oplogApplyingTotalLocalBatchesApplied

Total number of batches of oplog entries that recipients applied.

New in version 6.1.

shardingStatistics.resharding.totalApplyTimeElapsedMillis

Total elapsed time, in milliseconds, for the apply step of the current resharding operation. In the apply step, recipient shards modify their data based on new incoming writes from donor shards. Time is set to 0 when a new resharding operation starts.

Only present when run on a shard or config server. Returns 0 on a config server.

New in version 5.0.

shardingStatistics.resharding.totalCriticalSectionTimeElapsedMillis

Total elapsed time, in milliseconds, for the critical section of the current resharding operation. The critical section prevents new incoming writes to the collection currently being resharded. Time is set to 0 when a new resharding operation starts.

Only present when run on a shard or config server. Returns 0 on a config server.

New in version 5.0.

shardingStatistics.resharding.donorState

State of the donor shard for the current resharding operation. Number is set to 0 when a new resharding operation starts.

Number Returned | Meaning | Description
0 | unused | The shard is not a donor in the current resharding operation.
1 | preparing-to-donate | The donor shard is preparing to donate data to the recipient shards.
2 | donating-initial-data | The donor shard is donating data to the recipient shards.
3 | donating-oplog-entries | The donor shard is donating oplog entries to the recipient shards.
4 | preparing-to-block-writes | The donor shard is about to prevent new incoming write operations to the collection that is being resharded.
5 | error | An error occurred during the resharding operation.
6 | blocking-writes | The donor shard is preventing new incoming write operations and has notified all recipient shards that new incoming writes are prevented.
7 | done | The donor shard has dropped the old sharded collection and the resharding operation is complete.

Only present when run on a shard or config server. Returns 0 on a config server.

New in version 5.0.

shardingStatistics.resharding.recipientState

State of the recipient shard for the current resharding operation. Number is set to 0 when a new resharding operation starts.

Number Returned | Meaning | Description
0 | unused | The shard is not a recipient in the current resharding operation.
1 | awaiting-fetch-timestamp | The recipient shard is waiting for the donor shards to be prepared to donate their data.
2 | creating-collection | The recipient shard is creating the new sharded collection.
3 | cloning | The recipient shard is receiving data from the donor shards.
4 | applying | The recipient shard is applying oplog entries to modify its copy of the data based on the new incoming writes from donor shards.
5 | error | An error occurred during the resharding operation.
6 | strict-consistency | The recipient shard has all data changes stored in a temporary collection.
7 | done | The resharding operation is complete.

Only present when run on a shard or config server. Returns 0 on a config server.

New in version 5.0.

shardingStatistics.numHostsTargeted

Indicates the number of shards targeted for CRUD operations and aggregation commands. When a CRUD operation or aggregation command runs, one of the following metrics is incremented.

Name | Description
allShards | A command targeted all shards.
manyShards | A command targeted more than one shard.
oneShard | A command targeted one shard.
unsharded | A command was run on an unsharded collection.

Note

Running the serverStatus command on mongos will provide insight into the CRUD and aggregation operations that run on a sharded cluster.

Multi-shard operations can either be scatter-gather or shard specific. Multi-shard scatter-gather operations can consume more resources. By using the shardingStatistics.numHostsTargeted metrics you can tune the aggregation queries that run on a sharded cluster.
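One way to act on this guidance is to compute the share of commands that fanned out to multiple shards. A sketch over a hypothetical numHostsTargeted sample (the counter values are made up for illustration):

```javascript
// Hypothetical numHostsTargeted counters from a mongos serverStatus sample.
const numHostsTargeted = {
  allShards: 120,
  manyShards: 80,
  oneShard: 700,
  unsharded: 100,
};

const total =
  numHostsTargeted.allShards + numHostsTargeted.manyShards +
  numHostsTargeted.oneShard + numHostsTargeted.unsharded;

// Share of commands that targeted more than one shard; a high share
// suggests queries that could be rewritten to target a single shard.
const multiShardShare =
  (numHostsTargeted.allShards + numHostsTargeted.manyShards) / total;

console.log(multiShardShare); // 0.2
```

In practice the object would come from db.serverStatus().shardingStatistics.numHostsTargeted on a mongos.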

shardingStatistics.resharding.coordinatorState

State of the resharding coordinator for the current resharding operation. The resharding coordinator is a thread that runs on the config server primary. Number is set to 0 when a new resharding operation starts.

Number Returned | Meaning | Description
0 | unused | The shard is not the coordinator in the current resharding operation.
1 | initializing | The resharding coordinator has inserted the coordinator document into config.reshardingOperations and has added the reshardingFields to the config.collections entry for the original collection.
2 | preparing-to-donate | The resharding coordinator has created a config.collections entry for the temporary resharding collection, has inserted entries into config.chunks for ranges based on the new shard key, and has inserted entries into config.tags for any zones associated with the new shard key. The coordinator informs participant shards to begin the resharding operation, then waits until all donor shards have picked a minFetchTimestamp and are ready to donate.
3 | cloning | The resharding coordinator informs donor shards to donate data to recipient shards. The coordinator waits for all recipients to finish cloning the data from the donor.
4 | applying | The resharding coordinator informs recipient shards to modify their copies of data based on new incoming writes from donor shards. The coordinator waits for all recipients to finish applying oplog entries.
5 | blocking-writes | The resharding coordinator informs donor shards to prevent new incoming write operations to the collection being resharded. The coordinator then waits for all recipients to have all data changes.
6 | aborting | An unrecoverable error occurred during the resharding operation or the abortReshardCollection command (or the sh.abortReshardCollection() method) was run.
7 | committing | The resharding coordinator removes the config.collections entry for the temporary resharding collection. The coordinator then adds the recipientFields to the source collection's entry.

Only present when run on a shard or config server.

New in version 5.0.

shardingStatistics.resharding.opStatus

Status for the current resharding operation.

Number Returned | Description
-1 | Resharding operation not in progress.
0 | Resharding operation succeeded.
1 | Resharding operation failed.
2 | Resharding operation canceled.

Only present when run on a shard or config server.

New in version 5.0.

shardingStatistics.resharding.lastOpEndingChunkImbalance

This field contains the highest numeric difference for (maxNumChunksInShard - minNumChunksInShard) among all zones for the collection that was processed by the most recent resharding operation.

See Range Size.

Only updated on config servers.

New in version 5.0.

shardedIndexConsistency

shardedIndexConsistency : {
numShardedCollectionsWithInconsistentIndexes : Long("<num>")
},
shardedIndexConsistency

Available only on config server instances.

A document that returns results of index consistency checks for sharded collections.

The returned metrics are meaningful only when run on the primary of the config server replica set for a sharded cluster.

shardedIndexConsistency.numShardedCollectionsWithInconsistentIndexes

Available only on config server instances.

Number of sharded collections whose indexes are inconsistent across the shards. A sharded collection has an inconsistent index if the collection does not have the exact same indexes (including the index options) on each shard that contains chunks for the collection.

To investigate if a sharded collection has inconsistent indexes, see Find Inconsistent Indexes Across Shards.

The returned metrics are meaningful only when run on the primary of the config server replica set for a sharded cluster.

spillWiredTiger

spillWiredTiger: {
storageSize: <long>,
uri: <string>,
version: <string>,
'block-manager': {
'blocks read': <num>,
'blocks written': <num>,
'bytes read': <num>,
'bytes written': <num>
},
cache: {
'application thread time evicting (usecs)': <num>,
'application threads eviction requested with cache fill ratio < 25%': <num>,
'application threads eviction requested with cache fill ratio >= 75%': <num>,
'application threads page write from cache to disk count': <num>,
'application threads page write from cache to disk time (usecs)': <num>,
'bytes allocated for updates': <num>,
'bytes currently in the cache': <num>,
'bytes read into cache': <num>,
'bytes written from cache': <num>,
'eviction currently operating in aggressive mode': <num>,
'eviction empty score': <num>,
'eviction state': <num>,
'eviction walk target strategy clean pages': <num>,
'eviction walk target strategy dirty pages': <num>,
'eviction walk target strategy pages with updates': <num>,
'forced eviction - pages evicted that were clean count': <num>,
'forced eviction - pages evicted that were dirty count': <num>,
'forced eviction - pages selected count': <num>,
'forced eviction - pages selected unable to be evicted count': <num>,
'hazard pointer blocked page eviction': <num>,
'maximum bytes configured': <num>,
'maximum page size seen at eviction': <num>,
'number of times dirty trigger was reached': <num>,
'number of times eviction trigger was reached': <num>,
'number of times updates trigger was reached': <num>,
'page evict attempts by application threads': <num>,
'page evict failures by application threads': <num>,
'pages queued for eviction': <num>,
'pages queued for urgent eviction': <num>,
'tracked dirty bytes in the cache': <num>
}
}
spillWiredTiger

A document that contains metrics on the WiredTiger spill instance. When MongoDB writes to disk to fulfill certain operations, it utilizes a separate WiredTiger instance, which contains its own in-memory cache. This separate cache isolates operations from the main WiredTiger cache.

The spillWiredTiger document contains a subset of the fields reported in the wiredTiger document. The spillWiredTiger document only appears when using the WiredTiger storage engine. For details on the spillWiredTiger metrics, see the corresponding wiredTiger metric description.

storageEngine

storageEngine : {
name : <string>,
supportsCommittedReads : <boolean>,
persistent : <boolean>
},
storageEngine
A document with data about the current storage engine.
storageEngine.name
The name of the current storage engine.
storageEngine.supportsCommittedReads
A boolean that indicates whether the storage engine supports "majority" read concern.
storageEngine.persistent
A boolean that indicates whether the storage engine does or does not persist data to disk.

tcmalloc

Note

tcmalloc metrics that are only for internal use are omitted from this page.

tcmalloc : {
usingPerCPUCaches : <boolean>, // Added in MongoDB 8.0
maxPerCPUCacheSizeBytes : <integer>, // Added in MongoDB 8.0
generic : {
current_allocated_bytes : <integer>,
heap_size : <integer>,
peak_memory_usage : <integer> // Added in MongoDB 8.0
},
tcmalloc : {
central_cache_free : <integer>,
cpu_free : <integer>, // Added in MongoDB 8.0
release_rate : <integer>,
total_bytes_held : <integer>, // Added in MongoDB 8.0
cpuCache : {
0 : {
overflows : <integer>, // Added in MongoDB 8.0
underflows : <integer> // Added in MongoDB 8.0
},
}
},
tcmalloc_derived : {
total_free_bytes : <integer> // Added in MongoDB 8.0
}
}
tcmalloc

Note

Starting in version 8.0, MongoDB uses an updated version of TCMalloc that improves memory fragmentation and management. See tcmalloc upgrade for more information.

A document that contains information on memory allocation for the server. By default, tcmalloc metrics are included in the serverStatus output. To change the verbosity of the tcmalloc section, specify an integer between 0 and 3 (inclusive):

  • If you set verbosity to 0, tcmalloc metrics aren't included in the serverStatus output.
  • If you set verbosity to 1, the serverStatus output includes the default tcmalloc metrics.
  • If you set verbosity to 2, the serverStatus output includes default tcmalloc metrics and the tcmalloc.tcmalloc.cpuCache section.
  • If you set verbosity to 3, the serverStatus output includes all tcmalloc metrics.

If you specify a value higher than 3, MongoDB sets the verbosity to 3.

For example, to call serverStatus with verbosity set to 2, run the following command:

db.runCommand( { serverStatus: 1, tcmalloc: 2 } )
tcmalloc.usingPerCPUCaches

A boolean that indicates whether TCMalloc is running with per-CPU caches. If tcmalloc.usingPerCPUCaches is false, ensure that:

New in version 8.0.

tcmalloc.maxPerCPUCacheSizeBytes

Maximum size, in bytes, of each CPU cache.

New in version 8.0.

tcmalloc.generic.peak_memory_usage

Total amount of memory, in bytes, allocated by MongoDB and sampled by TCMalloc.

New in version 8.0.

tcmalloc.generic.current_allocated_bytes
Total number of bytes that are currently allocated to memory and actively used by MongoDB.
tcmalloc.generic.heap_size
Amount of memory, in bytes, allocated from the operating system. This value includes memory that's currently in use and memory that's been allocated but isn't in use.
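Since heap_size includes both in-use and idle memory, the difference between it and current_allocated_bytes gives a rough view of memory the allocator is holding but not actively serving. A sketch over hypothetical tcmalloc.generic values (illustrative numbers only):

```javascript
// Hypothetical tcmalloc.generic values; illustrative numbers only.
const generic = {
  current_allocated_bytes: 6 * 1024 * 1024 * 1024, // 6 GiB actively in use
  heap_size: 8 * 1024 * 1024 * 1024,               // 8 GiB obtained from the OS
};

// Memory obtained from the OS but not currently serving allocations:
// held in allocator caches and free lists, or not yet released.
const unusedBytes = generic.heap_size - generic.current_allocated_bytes;

console.log(unusedBytes / (1024 * 1024 * 1024)); // 2
```

A persistently large gap can point at fragmentation or a low release rate rather than genuine application demand.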
tcmalloc.tcmalloc.central_cache_free
Amount of memory, in bytes, held in the central free list. The central free list is a structure that manages free memory for reuse.
tcmalloc.tcmalloc.cpu_free

Amount of free memory, in bytes, available across all CPU caches.

New in version 8.0.

tcmalloc.tcmalloc.total_bytes_held

Amount of memory, in bytes, currently held in caches.

New in version 8.0.

tcmalloc.tcmalloc.release_rate
Rate, in bytes per second, at which unused memory is released to the operating system. The tcmallocReleaseRate parameter determines the value of tcmalloc.tcmalloc.release_rate.
tcmalloc.tcmalloc.cpuCache

A document that provides data on each CPU cache.

cpuCache metrics are excluded at the default verbosity level. To view cpuCache metrics, you must set the tcmalloc verbosity to at least 2.

New in version 8.0.

tcmalloc.tcmalloc.cpuCache.N.overflows

Number of overflows that the CPU cache experienced. Overflows occur when a user deallocates memory and the cache is full.

New in version 8.0.

tcmalloc.tcmalloc.cpuCache.N.underflows

Number of underflows that the CPU cache experienced. Underflows occur when a user allocates memory and the cache is empty.

New in version 8.0.

tcmalloc.tcmalloc_derived.total_free_bytes

Amount of memory remaining before tcmalloc has to request more memory from the operating system.

New in version 8.0.

transactions

transactions : {
retriedCommandsCount : Long("<num>"),
retriedStatementsCount : Long("<num>"),
transactionsCollectionWriteCount : Long("<num>"),
currentActive : Long("<num>"),
currentInactive : Long("<num>"),
currentOpen : Long("<num>"),
totalAborted : Long("<num>"),
totalCommitted : Long("<num>"),
totalStarted : Long("<num>"),
totalPrepared : Long("<num>"),
totalPreparedThenCommitted : Long("<num>"),
totalPreparedThenAborted : Long("<num>"),
currentPrepared : Long("<num>"),
lastCommittedTransaction : <document>
},
transactions : {
currentOpen : Long("<num>"),
currentActive : Long("<num>"),
currentInactive : Long("<num>"),
totalStarted : Long("<num>"),
totalCommitted : Long("<num>"),
totalAborted : Long("<num>"),
abortCause : {
<String1> : Long("<num>"),
<String2> : Long("<num>"),
...
},
totalContactedParticipants : Long("<num>"),
totalParticipantsAtCommit : Long("<num>"),
totalRequestsTargeted : Long("<num>"),
commitTypes : {
noShards : {
initiated : Long("<num>"),
successful : Long("<num>"),
successfulDurationMicros : Long("<num>")
},
singleShard : {
initiated : Long("<num>"),
successful : Long("<num>"),
successfulDurationMicros : Long("<num>")
},
singleWriteShard : {
initiated : Long("<num>"),
successful : Long("<num>"),
successfulDurationMicros : Long("<num>")
},
readOnly : {
initiated : Long("<num>"),
successful : Long("<num>"),
successfulDurationMicros : Long("<num>")
},
twoPhaseCommit : {
initiated : Long("<num>"),
successful : Long("<num>"),
successfulDurationMicros : Long("<num>")
},
recoverWithToken : {
initiated : Long("<num>"),
successful : Long("<num>"),
successfulDurationMicros : Long("<num>")
}
}
},
transactions

When run on a mongod, a document with data about the retryable writes and transactions.

When run on a mongos, a document with data about the transactions run on the instance.

transactions.retriedCommandsCount

Available on mongod only.

The total number of retry attempts that have been received after the corresponding retryable write command has already been committed. That is, a retryable write is attempted even though the write has previously succeeded and has an associated record for the transaction and session in the config.transactions collection, such as when the initial write response to the client is lost.

Note

MongoDB does not re-execute the committed writes.

The total is across all sessions.

The total does not include any retryable writes that may happen internally as part of a chunk migration.

transactions.retriedStatementsCount

Available on mongod only.

The total number of write statements associated with the retried commands in transactions.retriedCommandsCount.

Note

MongoDB does not re-execute the committed writes.

The total does not include any retryable writes that may happen internally as part of a chunk migration.

transactions.transactionsCollectionWriteCount

Available on mongod only.

The total number of writes to the config.transactions collection, triggered when a new retryable write statement is committed.

For update and delete commands, since only single document operations are retryable, there is one write per statement.

For insert operations, there is one write per batch of documents inserted, except when a failure leads to each document being inserted separately.

The total includes writes to a server's config.transactions collection that occur as part of a migration.

transactions.currentActive

Available on both mongod and mongos.

The total number of open transactions currently executing a command.

transactions.currentInactive

Available on both mongod and mongos.

The total number of open transactions that are not currently executing a command.

transactions.currentOpen

Available on both mongod and mongos.

The total number of open transactions. A transaction is opened when the first command is run as a part of that transaction, and stays open until the transaction either commits or aborts.
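
The three open-transaction counters are related: currentOpen should equal currentActive plus currentInactive. A minimal sketch (the helper name is hypothetical) that checks this invariant against the transactions section of serverStatus output:

```javascript
// Hypothetical helper: verify that the open-transaction counters in the
// transactions section of serverStatus output are internally consistent.
// currentOpen counts every open transaction, whether or not it is
// currently executing a command, so it should equal active + inactive.
function openTransactionsConsistent(tx) {
  return Number(tx.currentOpen) ===
    Number(tx.currentActive) + Number(tx.currentInactive);
}

// Example with a hand-written transactions document:
const txExample = { currentActive: 2, currentInactive: 3, currentOpen: 5 };
console.log(openTransactionsConsistent(txExample)); // true
```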

transactions.totalAborted

For the mongod, the total number of transactions aborted on this instance since its last startup.

For the mongos, the total number of transactions aborted through this instance since its last startup.

transactions.totalCommitted

For the mongod, the total number of transactions committed on the instance since its last startup.

For the mongos, the total number of transactions committed through this instance since its last startup.

transactions.totalStarted

For the mongod, the total number of transactions started on this instance since its last startup.

For the mongos, the total number of transactions started through this instance since its last startup.

transactions.abortCause

Available on mongos only.

Breakdown of the transactions.totalAborted by cause. If a client issues an explicit abortTransaction, the cause is listed as abort.

For example:

totalAborted : Long("5"),
abortCause : {
abort : Long("1"),
DuplicateKey : Long("1"),
StaleConfig : Long("3"),
SnapshotTooOld : Long("1")
},
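
Because abortCause breaks down totalAborted by cause, the counters can be used to estimate how many aborts were explicitly requested by clients versus caused by errors such as StaleConfig. A small hypothetical helper, assuming the Long counters have been converted to plain numbers:

```javascript
// Hypothetical helper: given the transactions section of mongos
// serverStatus output, report what fraction of aborted transactions
// were explicitly aborted by the client (cause "abort") rather than
// aborted because of an error such as StaleConfig or DuplicateKey.
function explicitAbortFraction(tx) {
  const total = Number(tx.totalAborted);
  if (total === 0) return 0;
  return Number((tx.abortCause && tx.abortCause.abort) || 0) / total;
}

console.log(explicitAbortFraction({
  totalAborted: 5,
  abortCause: { abort: 1, StaleConfig: 3, SnapshotTooOld: 1 },
})); // 0.2
```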
transactions.totalContactedParticipants

Available on mongos only.

The total number of shards contacted for all transactions started through this mongos since its last startup.

The shards contacted during transaction processing can include shards that are not ultimately part of the commit.

transactions.totalParticipantsAtCommit

Available on mongos only.

Total number of shards involved in the commit for all transactions started through this mongos since its last startup.

transactions.totalRequestsTargeted

Available on mongos only.

Total number of network requests targeted by the mongos as part of its transactions.

transactions.commitTypes

Available on mongos only.

Breakdown of the commits by types. For example:

noShards : {
initiated : Long("0"),
successful : Long("0"),
successfulDurationMicros : Long("0")
},
singleShard : {
initiated : Long("5"),
successful : Long("5"),
successfulDurationMicros : Long("203118")
},
singleWriteShard : {
initiated : Long("0"),
successful : Long("0"),
successfulDurationMicros : Long("0")
},
readOnly : {
initiated : Long("0"),
successful : Long("0"),
successfulDurationMicros : Long("0")
},
twoPhaseCommit : {
initiated : Long("1"),
successful : Long("1"),
successfulDurationMicros : Long("179616")
},
recoverWithToken : {
initiated : Long("0"),
successful : Long("0"),
successfulDurationMicros : Long("0")
}

The types of commit are:

noShards: Commits of transactions that did not contact any shards.
singleShard: Commits of transactions that affected a single shard.
singleWriteShard: Commits of transactions that contacted multiple shards but whose write operations only affected a single shard.
readOnly: Commits of transactions that only involved read operations.
twoPhaseCommit: Commits of transactions that included writes to multiple shards.
recoverWithToken: Commits that recovered the outcome of transactions from another instance or after this instance was restarted.

For each commit type, the command returns the following metrics:

initiated: Total number of times that commits of this type were initiated.
successful: Total number of times that commits of this type succeeded.
successfulDurationMicros: Total time, in microseconds, taken by successful commits of this type.

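From these counters you can derive the average duration of a successful commit of each type. A hypothetical helper, shown with the singleShard values from the example above:

```javascript
// Hypothetical helper: average duration, in microseconds, of successful
// commits of one commit type, computed from the initiated / successful /
// successfulDurationMicros counters in transactions.commitTypes.
function avgCommitMicros(ct) {
  const n = Number(ct.successful);
  return n === 0 ? 0 : Number(ct.successfulDurationMicros) / n;
}

// singleShard from the example output above:
console.log(avgCommitMicros({
  initiated: 5, successful: 5, successfulDurationMicros: 203118,
})); // 40623.6
```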
transactions.totalPrepared

Available on mongod only.

The total number of transactions in prepared state on this server since the mongod process's last startup.

transactions.totalPreparedThenCommitted

Available on mongod only.

The total number of transactions that were prepared and committed on this server since the mongod process's last startup.

transactions.totalPreparedThenAborted

Available on mongod only.

The total number of transactions that were prepared and aborted on this server since the mongod process's last startup.

transactions.currentPrepared

Available on mongod only.

The current number of transactions in prepared state on this server.

transactions.lastCommittedTransaction

Available on mongod only.

The details of the last transaction committed when the mongod is primary.

When returned from a secondary, lastCommittedTransaction returns the details of the last transaction committed when that secondary was a primary.

lastCommittedTransaction : {
operationCount : Long("1"),
oplogOperationBytes : Long("211"),
writeConcern : {
w : "majority",
wtimeout : 0
}
}
operationCount: The number of write operations in the transaction.
oplogOperationBytes: The size of the corresponding oplog entry or entries for the transaction. [2]
writeConcern: The write concern used for the transaction.
[2] MongoDB creates as many oplog entries as necessary to encapsulate all write operations in a transaction. See Oplog Size Limit for details.

transportSecurity

transportSecurity : {
1.0 : Long("<num>"),
1.1 : Long("<num>"),
1.2 : Long("<num>"),
1.3 : Long("<num>"),
unknown : Long("<num>")
},
transportSecurity.<version>
The cumulative number of TLS <version> connections that have been made to this mongod or mongos instance. The value is reset upon restart.
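
The version keys are strings such as "1.0", so they must be read with bracket notation. A hypothetical helper that counts connections made with deprecated TLS versions (1.0 and 1.1):

```javascript
// Hypothetical helper: from the transportSecurity section of serverStatus
// output, count connections that used a deprecated TLS version (< 1.2).
// Keys are version strings such as "1.0"; values are cumulative counts
// since the last restart.
function deprecatedTlsConnections(ts) {
  return Number(ts["1.0"] || 0) + Number(ts["1.1"] || 0);
}

console.log(deprecatedTlsConnections({
  "1.0": 2, "1.1": 3, "1.2": 100, "1.3": 50, unknown: 0,
})); // 5
```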

watchdog

watchdog : {
checkGeneration : Long("<num>"),
monitorGeneration : Long("<num>"),
monitorPeriod : <num>
}

Note

The watchdog section is only present if the Storage Node Watchdog is enabled.

watchdog
A document reporting the status of the Storage Node Watchdog.
watchdog.checkGeneration
The number of times the directories have been checked since startup. Directories are checked multiple times every monitoringPeriod.
watchdog.monitorGeneration
The number of times the status of all filesystems used by mongod has been examined. This is incremented once every monitoringPeriod.
watchdog.monitorPeriod
The value set by watchdogPeriodSeconds. This is the period between status checks.
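
Since directories are checked multiple times every monitoring period, the ratio of checkGeneration to monitorGeneration gives the average number of checks per period. A hypothetical helper:

```javascript
// Hypothetical helper: average number of directory checks per monitoring
// period, from the watchdog counters. Directories are checked multiple
// times every monitoringPeriod, so this ratio is normally greater than 1.
function checksPerMonitorPeriod(wd) {
  const gens = Number(wd.monitorGeneration);
  return gens === 0 ? 0 : Number(wd.checkGeneration) / gens;
}

console.log(checksPerMonitorPeriod({
  checkGeneration: 40, monitorGeneration: 10, monitorPeriod: 60,
})); // 4
```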

wiredTiger

wiredTiger information only appears if you are using the WiredTiger storage engine. Some of the statistics are rolled up for the server.

{
uri : 'statistics:',
version: <string>,
async : {
current work queue length : <num>,
maximum work queue length : <num>,
number of allocation state races : <num>,
number of flush calls : <num>,
number of operation slots viewed for allocation : <num>,
number of times operation allocation failed : <num>,
number of times worker found no work : <num>,
total allocations : <num>,
total compact calls : <num>,
total insert calls : <num>,
total remove calls : <num>,
total search calls : <num>,
total update calls : <num>
},
block-manager : {
blocks pre-loaded : <num>,
blocks read : <num>,
blocks written : <num>,
bytes read : <num>,
bytes written : <num>,
bytes written for checkpoint : <num>,
mapped blocks read : <num>,
mapped bytes read : <num>
},
cache : {
application threads page read from disk to cache count : <num>,
application threads page read from disk to cache time (usecs) : <num>,
application threads page write from cache to disk count : <num>,
application threads page write from cache to disk time (usecs) : <num>,
bytes belonging to page images in the cache : <num>,
bytes belonging to the cache overflow table in the cache : <num>,
bytes currently in the cache : <num>,
bytes dirty in the cache cumulative : <num>,
bytes not belonging to page images in the cache : <num>,
bytes read into cache : <num>,
bytes written from cache : <num>,
cache overflow cursor application thread wait time (usecs) : <num>,
cache overflow cursor internal thread wait time (usecs) : <num>,
cache overflow score : <num>,
cache overflow table entries : <num>,
cache overflow table insert calls : <num>,
cache overflow table max on-disk size : <num>,
cache overflow table on-disk size : <num>,
cache overflow table remove calls : <num>,
checkpoint blocked page eviction : <num>,
eviction calls to get a page : <num>,
eviction calls to get a page found queue empty : <num>,
eviction calls to get a page found queue empty after locking : <num>,
eviction currently operating in aggressive mode : <num>,
eviction empty score : <num>,
eviction passes of a file : <num>,
eviction server candidate queue empty when topping up : <num>,
eviction server candidate queue not empty when topping up : <num>,
eviction server evicting pages : <num>,
eviction server slept, because we did not make progress with eviction : <num>,
eviction server unable to reach eviction goal : <num>,
eviction server waiting for a leaf page : <num>,
eviction server waiting for an internal page sleep (usec) : <num>,
eviction server waiting for an internal page yields : <num>,
eviction state : <num>,
eviction walk target pages histogram - 0-9 : <num>,
eviction walk target pages histogram - 10-31 : <num>,
eviction walk target pages histogram - 128 and higher : <num>,
eviction walk target pages histogram - 32-63 : <num>,
eviction walk target pages histogram - 64-128 : <num>,
eviction walks abandoned : <num>,
eviction walks gave up because they restarted their walk twice : <num>,
eviction walks gave up because they saw too many pages and found no candidates : <num>,
eviction walks gave up because they saw too many pages and found too few candidates : <num>,
eviction walks reached end of tree : <num>,
eviction walks started from root of tree : <num>,
eviction walks started from saved location in tree : <num>,
eviction worker thread active : <num>,
eviction worker thread created : <num>,
eviction worker thread evicting pages : <num>,
eviction worker thread removed : <num>,
eviction worker thread stable number : <num>,
files with active eviction walks : <num>,
files with new eviction walks started : <num>,
force re-tuning of eviction workers once in a while : <num>,
forced eviction - pages evicted that were clean count : <num>,
forced eviction - pages evicted that were clean time (usecs) : <num>,
forced eviction - pages evicted that were dirty count : <num>,
forced eviction - pages evicted that were dirty time (usecs) : <num>,
forced eviction - pages selected because of too many deleted items count : <num>,
forced eviction - pages selected count : <num>,
forced eviction - pages selected unable to be evicted count : <num>,
forced eviction - pages selected unable to be evicted time : <num>,
hazard pointer blocked page eviction : <num>,
hazard pointer check calls : <num>,
hazard pointer check entries walked : <num>,
hazard pointer maximum array length : <num>,
in-memory page passed criteria to be split : <num>,
in-memory page splits : <num>,
internal pages evicted : <num>,
internal pages split during eviction : <num>,
leaf pages split during eviction : <num>,
maximum bytes configured : <num>,
maximum page size at eviction : <num>,
modified pages evicted : <num>,
modified pages evicted by application threads : <num>,
operations timed out waiting for space in cache : <num>,
overflow pages read into cache : <num>,
page split during eviction deepened the tree : <num>,
page written requiring cache overflow records : <num>,
pages currently held in the cache : <num>,
pages evicted by application threads : <num>,
pages queued for eviction : <num>,
pages queued for eviction post lru sorting : <num>,
pages queued for urgent eviction : <num>,
pages queued for urgent eviction during walk : <num>,
pages read into cache : <num>,
pages read into cache after truncate : <num>,
pages read into cache after truncate in prepare state : <num>,
pages read into cache requiring cache overflow entries : <num>,
pages read into cache requiring cache overflow for checkpoint : <num>,
pages read into cache skipping older cache overflow entries : <num>,
pages read into cache with skipped cache overflow entries needed later : <num>,
pages read into cache with skipped cache overflow entries needed later by checkpoint : <num>,
pages requested from the cache : <num>,
pages seen by eviction walk : <num>,
pages selected for eviction unable to be evicted : <num>,
pages walked for eviction : <num>,
pages written from cache : <num>,
pages written requiring in-memory restoration : <num>,
percentage overhead : <num>,
tracked bytes belonging to internal pages in the cache : <num>,
tracked bytes belonging to leaf pages in the cache : <num>,
tracked dirty bytes in the cache : <num>,
tracked dirty pages in the cache : <num>,
unmodified pages evicted : <num>
},
capacity : {
background fsync file handles considered : <num>,
background fsync file handles synced : <num>,
background fsync time (msecs) : <num>,
bytes read : <num>,
bytes written for checkpoint : <num>,
bytes written for eviction : <num>,
bytes written for log : <num>,
bytes written total : <num>,
threshold to call fsync : <num>,
time waiting due to total capacity (usecs) : <num>,
time waiting during checkpoint (usecs) : <num>,
time waiting during eviction (usecs) : <num>,
time waiting during logging (usecs) : <num>,
time waiting during read (usecs) : <num>
},
connection : {
auto adjusting condition resets : <num>,
auto adjusting condition wait calls : <num>,
detected system time went backwards : <num>,
files currently open : <num>,
memory allocations : <num>,
memory frees : <num>,
memory re-allocations : <num>,
pthread mutex condition wait calls : <num>,
pthread mutex shared lock read-lock calls : <num>,
pthread mutex shared lock write-lock calls : <num>,
total fsync I/Os : <num>,
total read I/Os : <num>,
total write I/Os : <num>
},
cursor : {
cached cursor count : <num>,
cursor bulk loaded cursor insert calls : <num>,
cursor close calls that result in cache : <num>,
cursor create calls : <num>,
cursor insert calls : <num>,
cursor insert key and value bytes : <num>,
cursor modify calls : <num>,
cursor modify key and value bytes affected : <num>,
cursor modify value bytes modified : <num>,
cursor next calls : <num>,
cursor operation restarted : <num>,
cursor prev calls : <num>,
cursor remove calls : <num>,
cursor remove key bytes removed : <num>,
cursor reserve calls : <num>,
cursor reset calls : <num>,
cursor search calls : <num>,
cursor search near calls : <num>,
cursor sweep buckets : <num>,
cursor sweep cursors closed : <num>,
cursor sweep cursors examined : <num>,
cursor sweeps : <num>,
cursor truncate calls : <num>,
cursor update calls : <num>,
cursor update key and value bytes : <num>,
cursor update value size change : <num>,
cursors reused from cache : <num>,
open cursor count : <num>
},
data-handle : {
connection data handle size : <num>,
connection data handles currently active : <num>,
connection sweep candidate became referenced : <num>,
connection sweep dhandles closed : <num>,
connection sweep dhandles removed from hash list : <num>,
connection sweep time-of-death sets : <num>,
connection sweeps : <num>,
session dhandles swept : <num>,
session sweep attempts : <num>
},
lock : {
checkpoint lock acquisitions : <num>,
checkpoint lock application thread wait time (usecs) : <num>,
checkpoint lock internal thread wait time (usecs) : <num>,
dhandle lock application thread time waiting (usecs) : <num>,
dhandle lock internal thread time waiting (usecs) : <num>,
dhandle read lock acquisitions : <num>,
dhandle write lock acquisitions : <num>,
durable timestamp queue lock application thread time waiting (usecs) : <num>,
durable timestamp queue lock internal thread time waiting (usecs) : <num>,
durable timestamp queue read lock acquisitions : <num>,
durable timestamp queue write lock acquisitions : <num>,
metadata lock acquisitions : <num>,
metadata lock application thread wait time (usecs) : <num>,
metadata lock internal thread wait time (usecs) : <num>,
read timestamp queue lock application thread time waiting (usecs) : <num>,
read timestamp queue lock internal thread time waiting (usecs) : <num>,
read timestamp queue read lock acquisitions : <num>,
read timestamp queue write lock acquisitions : <num>,
schema lock acquisitions : <num>,
schema lock application thread wait time (usecs) : <num>,
schema lock internal thread wait time (usecs) : <num>,
table lock application thread time waiting for the table lock (usecs) : <num>,
table lock internal thread time waiting for the table lock (usecs) : <num>,
table read lock acquisitions : <num>,
table write lock acquisitions : <num>,
txn global lock application thread time waiting (usecs) : <num>,
txn global lock internal thread time waiting (usecs) : <num>,
txn global read lock acquisitions : <num>,
txn global write lock acquisitions : <num>
},
log : {
busy returns attempting to switch slots : <num>,
force archive time sleeping (usecs) : <num>,
log bytes of payload data : <num>,
log bytes written : <num>,
log files manually zero-filled : <num>,
log flush operations : <num>,
log force write operations : <num>,
log force write operations skipped : <num>,
log records compressed : <num>,
log records not compressed : <num>,
log records too small to compress : <num>,
log release advances write LSN : <num>,
log scan operations : <num>,
log scan records requiring two reads : <num>,
log server thread advances write LSN : <num>,
log server thread write LSN walk skipped : <num>,
log sync operations : <num>,
log sync time duration (usecs) : <num>,
log sync_dir operations : <num>,
log sync_dir time duration (usecs) : <num>,
log write operations : <num>,
logging bytes consolidated : <num>,
maximum log file size : <num>,
number of pre-allocated log files to create : <num>,
pre-allocated log files not ready and missed : <num>,
pre-allocated log files prepared : <num>,
pre-allocated log files used : <num>,
records processed by log scan : <num>,
slot close lost race : <num>,
slot close unbuffered waits : <num>,
slot closures : <num>,
slot join atomic update races : <num>,
slot join calls atomic updates raced : <num>,
slot join calls did not yield : <num>,
slot join calls found active slot closed : <num>,
slot join calls slept : <num>,
slot join calls yielded : <num>,
slot join found active slot closed : <num>,
slot joins yield time (usecs) : <num>,
slot transitions unable to find free slot : <num>,
slot unbuffered writes : <num>,
total in-memory size of compressed records : <num>,
total log buffer size : <num>,
total size of compressed records : <num>,
written slots coalesced : <num>,
yields waiting for previous log file close : <num>
},
perf : {
file system read latency histogram (bucket 1) - 10-49ms : <num>,
file system read latency histogram (bucket 2) - 50-99ms : <num>,
file system read latency histogram (bucket 3) - 100-249ms : <num>,
file system read latency histogram (bucket 4) - 250-499ms : <num>,
file system read latency histogram (bucket 5) - 500-999ms : <num>,
file system read latency histogram (bucket 6) - 1000ms+ : <num>,
file system write latency histogram (bucket 1) - 10-49ms : <num>,
file system write latency histogram (bucket 2) - 50-99ms : <num>,
file system write latency histogram (bucket 3) - 100-249ms : <num>,
file system write latency histogram (bucket 4) - 250-499ms : <num>,
file system write latency histogram (bucket 5) - 500-999ms : <num>,
file system write latency histogram (bucket 6) - 1000ms+ : <num>,
operation read latency histogram (bucket 1) - 100-249us : <num>,
operation read latency histogram (bucket 2) - 250-499us : <num>,
operation read latency histogram (bucket 3) - 500-999us : <num>,
operation read latency histogram (bucket 4) - 1000-9999us : <num>,
operation read latency histogram (bucket 5) - 10000us+ : <num>,
operation write latency histogram (bucket 1) - 100-249us : <num>,
operation write latency histogram (bucket 2) - 250-499us : <num>,
operation write latency histogram (bucket 3) - 500-999us : <num>,
operation write latency histogram (bucket 4) - 1000-9999us : <num>,
operation write latency histogram (bucket 5) - 10000us+ : <num>
},
reconciliation : {
fast-path pages deleted : <num>,
page reconciliation calls : <num>,
page reconciliation calls for eviction : <num>,
pages deleted : <num>,
split bytes currently awaiting free : <num>,
split objects currently awaiting free : <num>
},
session : {
open session count : <num>,
session query timestamp calls : <num>,
table alter failed calls : <num>,
table alter successful calls : <num>,
table alter unchanged and skipped : <num>,
table compact failed calls : <num>,
table compact successful calls : <num>,
table create failed calls : <num>,
table create successful calls : <num>,
table drop failed calls : <num>,
table drop successful calls : <num>,
table import failed calls : <num>,
table import successful calls : <num>,
table rebalance failed calls : <num>,
table rebalance successful calls : <num>,
table rename failed calls : <num>,
table rename successful calls : <num>,
table salvage failed calls : <num>,
table salvage successful calls : <num>,
table truncate failed calls : <num>,
table truncate successful calls : <num>,
table verify failed calls : <num>,
table verify successful calls : <num>
},
thread-state : {
active filesystem fsync calls : <num>,
active filesystem read calls : <num>,
active filesystem write calls : <num>
},
thread-yield : {
application thread time evicting (usecs) : <num>,
application thread time waiting for cache (usecs) : <num>,
connection close blocked waiting for transaction state stabilization : <num>,
connection close yielded for lsm manager shutdown : <num>,
data handle lock yielded : <num>,
get reference for page index and slot time sleeping (usecs) : <num>,
log server sync yielded for log write : <num>,
page access yielded due to prepare state change : <num>,
page acquire busy blocked : <num>,
page acquire eviction blocked : <num>,
page acquire locked blocked : <num>,
page acquire read blocked : <num>,
page acquire time sleeping (usecs) : <num>,
page delete rollback time sleeping for state change (usecs) : <num>,
page reconciliation yielded due to child modification : <num>
},
transaction : {
Number of prepared updates : <num>,
Number of prepared updates added to cache overflow : <num>,
Number of prepared updates resolved : <num>,
durable timestamp queue entries walked : <num>,
durable timestamp queue insert to empty : <num>,
durable timestamp queue inserts to head : <num>,
durable timestamp queue inserts total : <num>,
durable timestamp queue length : <num>,
number of named snapshots created : <num>,
number of named snapshots dropped : <num>,
prepared transactions : <num>,
prepared transactions committed : <num>,
prepared transactions currently active : <num>,
prepared transactions rolled back : <num>,
query timestamp calls : <num>,
read timestamp queue entries walked : <num>,
read timestamp queue insert to empty : <num>,
read timestamp queue inserts to head : <num>,
read timestamp queue inserts total : <num>,
read timestamp queue length : <num>,
rollback to stable calls : <num>,
rollback to stable updates aborted : <num>,
rollback to stable updates removed from cache overflow : <num>,
set timestamp calls : <num>,
set timestamp durable calls : <num>,
set timestamp durable updates : <num>,
set timestamp oldest calls : <num>,
set timestamp oldest updates : <num>,
set timestamp stable calls : <num>,
set timestamp stable updates : <num>,
transaction begins : <num>,
transaction checkpoint currently running : <num>,
transaction checkpoint generation : <num>,
transaction checkpoint max time (msecs) : <num>,
transaction checkpoint min time (msecs) : <num>,
transaction checkpoint most recent time (msecs) : <num>,
transaction checkpoint scrub dirty target : <num>,
transaction checkpoint scrub time (msecs) : <num>,
transaction checkpoint total time (msecs) : <num>,
transaction checkpoints : <num>,
transaction checkpoints skipped because database was clean : <num>,
transaction failures due to cache overflow : <num>,
transaction fsync calls for checkpoint after allocating the transaction ID : <num>,
transaction fsync duration for checkpoint after allocating the transaction ID (usecs) : <num>,
transaction range of IDs currently pinned : <num>,
transaction range of IDs currently pinned by a checkpoint : <num>,
transaction range of IDs currently pinned by named snapshots : <num>,
transaction range of timestamps currently pinned : <num>,
transaction range of timestamps pinned by a checkpoint : <num>,
transaction range of timestamps pinned by the oldest active read timestamp : <num>,
transaction range of timestamps pinned by the oldest timestamp : <num>,
transaction read timestamp of the oldest active reader : <num>,
transaction sync calls : <num>,
transactions committed : <num>,
transactions rolled back : <num>,
update conflicts : <num>
},
concurrentTransactions : {
write : {
out : <num>,
available : <num>,
totalTickets : <num>
},
read : {
out : <num>,
available : <num>,
totalTickets : <num>
},
monitor : {
timesDecreased: <num>,
timesIncreased: <num>,
totalAmountDecreased: <num>,
totalAmountIncreased: <num>
}
},
snapshot-window-settings : {
total number of SnapshotTooOld errors : <num>,
max target available snapshots window size in seconds : <num>,
target available snapshots window size in seconds : <num>,
current available snapshots window size in seconds : <num>,
latest majority snapshot timestamp available : <string>,
oldest majority snapshot timestamp available : <string>
}
}

Note

The following is not an exhaustive list.

wiredTiger.uri
A string. For internal use by MongoDB.
wiredTiger.version

A string that returns the WiredTiger storage engine version.

New in version 8.1.

wiredTiger.async
A document that returns statistics related to the asynchronous operations API. This is unused by MongoDB.
wiredTiger.block-manager
A document that returns statistics on the block manager operations.
wiredTiger.cache

A document that returns statistics on the cache and page evictions from the cache.

The following describes some of the key wiredTiger.cache statistics:

wiredTiger.cache.maximum bytes configured
Maximum cache size.
wiredTiger.cache.bytes currently in the cache
Size in bytes of the data currently in cache. This value should not be greater than the maximum bytes configured value.
wiredTiger.cache.unmodified pages evicted
Main statistics for page eviction.
wiredTiger.cache.tracked dirty bytes in the cache
Size in bytes of the dirty data in the cache. This value should be less than the bytes currently in the cache value.
wiredTiger.cache.pages read into cache
Number of pages read into the cache. Together with wiredTiger.cache.pages written from cache, this statistic can provide an overview of the I/O activity.
wiredTiger.cache.pages written from cache
Number of pages written from the cache. Together with wiredTiger.cache.pages read into cache, this statistic can provide an overview of the I/O activity.

To adjust the size of the WiredTiger internal cache, see --wiredTigerCacheSizeGB and storage.wiredTiger.engineConfig.cacheSizeGB. Avoid increasing the WiredTiger internal cache size above its default value. If your use case requires increased internal cache size, see --wiredTigerCacheSizePct and storage.wiredTiger.engineConfig.cacheSizePct.
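
When monitoring the cache, two useful derived values are how full the cache is and how much of it is dirty, both relative to the configured maximum. A hypothetical helper; note that the statistic names contain spaces, so they must be accessed with bracket notation:

```javascript
// Hypothetical helper: cache fill and dirty ratios from the
// wiredTiger.cache section of serverStatus output. Statistic names
// contain spaces, so bracket notation is required.
function cacheRatios(cache) {
  const max = Number(cache["maximum bytes configured"]);
  return {
    fill: Number(cache["bytes currently in the cache"]) / max,
    dirty: Number(cache["tracked dirty bytes in the cache"]) / max,
  };
}

// Example: a 1 GB cache that is half full and 5% dirty.
console.log(cacheRatios({
  "maximum bytes configured": 1000000000,
  "bytes currently in the cache": 500000000,
  "tracked dirty bytes in the cache": 50000000,
})); // { fill: 0.5, dirty: 0.05 }
```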

wiredTiger.connection
A document that returns statistics related to WiredTiger connections.
wiredTiger.cursor
A document that returns statistics on WiredTiger cursors.
wiredTiger.data-handle
A document that returns statistics on the data handles and sweeps.
wiredTiger.log

A document that returns statistics on WiredTiger's write-ahead log (i.e. the journal).

wiredTiger.reconciliation
A document that returns statistics on the reconciliation process.
wiredTiger.session
A document that returns the open cursor count and open session count for the session.
wiredTiger.thread-yield
A document that returns statistics on yields during page acquisitions.
wiredTiger.transaction

A document that returns statistics on transaction checkpoints and operations.

wiredTiger.transaction.transaction checkpoint most recent time (msecs)
Amount of time, in milliseconds, to create the most recent checkpoint. An increase in this value under a steady write load may indicate saturation of the I/O subsystem.
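
A monitoring script might alert when the most recent checkpoint exceeded a chosen duration. A hypothetical sketch; the 60-second default threshold is an assumption for illustration, not a documented limit:

```javascript
// Hypothetical helper: flag a possible I/O bottleneck when the most
// recent checkpoint took longer than a caller-chosen threshold. The
// statistic name contains spaces and parentheses, so bracket notation
// is required. The 60000 ms default threshold is an assumption chosen
// for illustration, not a documented limit.
function checkpointSlow(txnStats, thresholdMsecs = 60000) {
  const t = Number(
    txnStats["transaction checkpoint most recent time (msecs)"]
  );
  return t > thresholdMsecs;
}

console.log(checkpointSlow({
  "transaction checkpoint most recent time (msecs)": 120000,
})); // true
```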