Log Messages

Overview

As part of normal operation, MongoDB maintains a running log of events, including entries such as incoming connections, commands run, and issues encountered. Generally, log messages are useful for diagnosing issues, monitoring your deployment, and tuning performance.

To get your log messages, you can use any of the methods described on this page, such as reading the structured log file, running the getLog command, or downloading logs from MongoDB Atlas.

Structured Logging

mongod / mongos instances output all log messages in structured JSON format. Log entries are written as a series of key-value pairs, where each key indicates a log message field type, such as "severity", and each corresponding value records the associated logging information for that field type, such as "informational". Previously, log entries were output as plaintext.

Example

The following is an example log message in JSON format as it would appear in the MongoDB log file:

{"t":{"$date":"2020-05-01T15:16:17.180+00:00"},"s":"I", "c":"NETWORK", "id":12345, "ctx":"listener", "svc": "R", "msg":"Listening on", "attr":{"address":"127.0.0.1"}}

JSON log entries can be pretty-printed for readability. Here is the same log entry pretty-printed:

{
  "t": {
    "$date": "2020-05-01T15:16:17.180+00:00"
  },
  "s": "I",
  "c": "NETWORK",
  "id": 12345,
  "ctx": "listener",
  "svc": "R",
  "msg": "Listening on",
  "attr": {
    "address": "127.0.0.1"
  }
}

In this log entry, for example, the key s, representing severity, has a corresponding value of I, representing "Informational", and the key c, representing component, has a corresponding value of NETWORK, indicating that the "network" component was responsible for this particular message. The various field types are presented in detail in the Log Message Field Types section.

Structured logging with key-value pairs allows for efficient parsing by automated tools or log ingestion services, and makes programmatic search and analysis of log messages easier to perform. Examples of analyzing structured log messages can be found in the Parsing Structured Log Messages section.

Note

The mongod quits if it's unable to write to the log file. To ensure that mongod can write to the log file, verify that the log volume has space on the disk and the logs are rotated.
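
If you rotate logs manually, you can trigger rotation with the logRotate administrative command. A minimal sketch from the system shell (the --quiet flag and the default localhost connection are assumptions about your setup):

# Ask the server to rotate its log file; the old file is renamed
# (or reopened, depending on the systemLog.logRotate setting).
mongosh --quiet --eval 'db.adminCommand( { logRotate: 1 } )'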

JSON Log Output Format

All log output is in JSON format, including output sent to the file, syslog, and stdout (standard output) log destinations.

Output from the getLog command is also in JSON format.
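
For example, a short sketch that fetches the in-memory log over a connection with getLog (connection details are assumptions):

# Return recent log entries from the in-memory ring buffer as JSON.
mongosh --quiet --eval 'printjson( db.adminCommand( { getLog: "global" } ) )'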

Each log entry is output as a self-contained JSON object which follows the Relaxed Extended JSON v2.0 specification, and has the following layout and field order:

{
  "t": <Datetime>, // timestamp
  "s": <String>, // severity
  "c": <String>, // component
  "id": <Integer>, // unique identifier
  "ctx": <String>, // context
  "svc": <String>, // service
  "msg": <String>, // message body
  "attr": <Object>, // additional attributes (optional)
  "tags": <Array of strings>, // tags (optional)
  "truncated": <Object>, // truncation info (if truncated)
  "size": <Object> // original size of entry (if truncated)
}

Field descriptions:

t (Datetime)
Timestamp of the log message in ISO-8601 format. For an example, see Timestamp.

s (String)
Short severity code of the log message. For an example, see Severity.

c (String)
Full component string for the log message. For an example, see Components.

id (Integer)
Unique identifier for the log statement. For an example, see Filtering by Known Log ID.

ctx (String)
Name of the thread that caused the log statement.

svc (String)
Name of the service in whose context the log statement was made. Will be S for "shard", R for "router", or - for "unknown" or "none".

msg (String)
Log output message passed from the server or driver. If necessary, the message is escaped according to the JSON specification.

attr (Object)
One or more key-value pairs for additional log attributes. If a log message does not include any additional attributes, the attr object is omitted. Attribute values may be referenced by their key name in the msg message body, depending on the message. If necessary, the attributes are escaped according to the JSON specification.

tags (Array of strings)
Strings representing any tags applicable to the log statement. For example, ["startupWarnings"].

truncated (Object)
Information about the log message truncation, if applicable. Only included if the log entry contains at least one truncated attr attribute.

size (Object)
Original size of a log entry if it has been truncated. Only included if the log entry contains at least one truncated attr attribute.

Escaping

The message and attributes fields will escape control characters as necessary according to the Relaxed Extended JSON v2.0 specification:

Quotation mark ("): \"
Backslash (\): \\
Backspace (0x08): \b
Formfeed (0x0C): \f
Newline (0x0A): \n
Carriage return (0x0D): \r
Horizontal tab (0x09): \t

Control characters not listed above are escaped with \uXXXX where "XXXX" is the unicode codepoint in hexadecimal. Bytes with invalid UTF-8 encoding are replaced with the unicode replacement character represented by \ufffd.

An example of message escaping is provided in the examples section.

Truncation

Changed in version 7.3.

Any attributes that exceed the maximum size defined with maxLogSizeKB (default: 10 KB) are truncated. Truncated attributes omit log data beyond the configured limit, but retain the JSON formatting of the entry to ensure that the entry remains parsable.
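
If attributes you care about are routinely truncated, you can raise the limit. A hedged sketch, assuming the maxLogSizeKB server parameter is runtime-settable in your version (it can also be set at startup with --setParameter):

# Raise the per-attribute limit from the 10 KB default to 20 KB.
mongosh --quiet --eval 'db.adminCommand( { setParameter: 1, maxLogSizeKB: 20 } )'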

For example, the following JSON object represents a command attribute that contains 5000 elements in the $in field without truncation.

Note

The example log entries are reformatted for readability.

{
  "command": {
    "find": "mycoll",
    "filter": {
      "value1": {
        "$in": [0, 1, 2, 3, ... 4999]
      },
      "value2": "foo"
    },
    "sort": { "value1": 1 },
    "lsid": { "id": { "$uuid": "80a99e49-a850-467b-a26d-aeb2d8b9f42b" } },
    "$db": "testdb"
  }
}

In this example, the $in array is truncated at the 376th element because the size of the command attribute would exceed maxLogSizeKB if it included the subsequent elements. The remainder of the command attribute is omitted. The truncated log entry resembles the following output:

{
  "t": { "$date": "2021-03-17T20:30:07.212+01:00" },
  "s": "I",
  "c": "COMMAND",
  "id": 51803,
  "ctx": "conn9",
  "msg": "Slow query",
  "attr": {
    "command": {
      "find": "mycoll",
      "filter": {
        "value1": {
          "$in": [ 0, 1, ..., 376 ] // Values in array omitted for brevity
        }
      }
    },
    ... // Other attr fields omitted for brevity
  },
  "truncated": {
    "command": {
      "truncated": {
        "filter": {
          "truncated": {
            "value1": {
              "truncated": {
                "$in": {
                  "truncated": {
                    "377": {
                      "type": "double",
                      "size": 8
                    }
                  },
                  "omitted": 4623
                }
              }
            }
          },
          "omitted": 1
        }
      },
      "omitted": 3
    }
  },
  "size": {
    "command": 21692
  }
}

Log entries containing one or more truncated attributes include nested truncated objects, which provide the following information for each truncated attribute in the log entry:

  • The attribute that was truncated
  • The specific sub-object of that attribute that triggered truncation, if applicable
  • The data type of the truncated field
  • The size, in bytes, of the element that triggers truncation
  • The number of elements that were omitted under each sub-object due to truncation

Log entries with truncated attributes may also include an additional size field at the end of the entry which indicates the original size of the attribute before truncation, in this case 21692 or about 22KB. This final size field is only shown if it is different from the size field in the truncated object, i.e. if the total object size of the attribute is different from the size of the truncated sub-object, as is the case in the example above.
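
To locate truncated entries in a log file, a short jq sketch (the log path is an assumption):

# Print the id, truncation details, and original size of every
# log entry that contains at least one truncated attribute.
jq 'select(.truncated) | {id: .id, truncated: .truncated, size: .size}' /var/log/mongodb/mongod.log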

Padding

When output to the file or the syslog log destinations, padding is added after the severity, context, and id fields to increase readability when viewed with a fixed-width font.

The following MongoDB log file excerpt demonstrates this padding:

{"t":{"$date":"2020-05-18T20:18:12.724+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main", "svc": "R", "msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2020-05-18T20:18:12.734+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main", "svc": "R", "msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2020-05-18T20:18:12.734+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main", "svc": "R", "msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2020-05-18T20:18:12.814+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten", "svc": "R", "msg":"MongoDB starting", "attr":{"pid":10111,"port":27001,"dbPath":"/var/lib/mongo","architecture":"64-bit","host":"centos8"}}
{"t":{"$date":"2020-05-18T20:18:12.814+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten", "svc": "R", "msg":"Build Info", "attr":{"buildInfo":{"version":"4.4.0","gitVersion":"328c35e4b883540675fb4b626c53a08f74e43cf0","openSSLVersion":"OpenSSL 1.1.1c FIPS 28 May 2019","modules":[],"allocator":"tcmalloc","environment":{"distmod":"rhel80","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2020-05-18T20:18:12.814+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten", "svc": "R", "msg":"Operating System", "attr":{"os":{"name":"CentOS Linux release 8.0.1905 (Core) ","version":"Kernel 4.18.0-80.11.2.el8_0.x86_64"}}}

Pretty Printing

When working with MongoDB structured logging, you can use the third-party jq command-line utility for easy pretty-printing of log entries, and powerful key-based matching and filtering.

jq is an open-source JSON parser, and is available for Linux, Windows, and macOS.

You can use jq to pretty-print log entries as follows:

  • Pretty-print the entire log file:

    cat mongod.log | jq
  • Pretty-print the most recent log entry:

    cat mongod.log | tail -1 | jq

More examples of working with MongoDB structured logs are available in the Parsing Structured Log Messages section.

Configuring Log Message Destinations

MongoDB log messages can be output to file, syslog, or stdout (standard output).

To configure the log output destination, use one of the following settings, either in the configuration file or on the command-line:

Configuration file: use the systemLog.destination setting, with a value of file or syslog.
Command-line: use the --logpath option for file output, or the --syslog option for syslog output.

Not specifying either file or syslog sends all logging output to stdout.
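
For example, a minimal sketch of a file-destination startup on the command line (the paths are assumptions):

# Log to a file in JSON format, appending rather than overwriting on restart.
mongod --dbpath /var/lib/mongo --logpath /var/log/mongodb/mongod.log --logappend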

For the full list of logging settings and options see:

Configuration file: the systemLog options.
Command-line: the mongod and mongos reference pages.

Note

Error messages sent to stderr (standard error), such as fatal errors during startup when not using the file or syslog log destinations, or messages having to do with misconfigured logging settings, are not affected by the log output destination setting, and are printed to stderr in plaintext format.

Log Message Field Types

Timestamp

The timestamp field type indicates the precise date and time at which the logged event occurred.

{
  "t": {
    "$date": "2020-05-01T15:16:17.180+00:00"
  },
  "s": "I",
  "c": "NETWORK",
  "id": 12345,
  "ctx": "listener",
  "svc": "R",
  "msg": "Listening on",
  "attr": {
    "address": "127.0.0.1"
  }
}

When logging to file or to syslog [1], the default format for the timestamp is iso8601-local. To modify the timestamp format, use the --timeStampFormat runtime option or the systemLog.timeStampFormat setting.
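
For example, a minimal sketch that logs timestamps in UTC instead of local time (the log path is an assumption):

# iso8601-utc records timestamps in ISO-8601 UTC rather than iso8601-local.
mongod --timeStampFormat iso8601-utc --logpath /var/log/mongodb/mongod.log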

See Filtering by Date Range for log parsing examples that filter on the timestamp field.

Note

The ctime timestamp format is no longer supported.

[1] If logging to syslog, the syslog daemon generates timestamps when it logs a message, not when MongoDB issues the message. This can lead to misleading timestamps for log entries, especially when the system is under heavy load.

Severity

The severity field type indicates the severity level associated with the logged event.

{
  "t": {
    "$date": "2020-05-01T15:16:17.180+00:00"
  },
  "s": "I",
  "c": "NETWORK",
  "id": 12345,
  "ctx": "listener",
  "svc": "R",
  "msg": "Listening on",
  "attr": {
    "address": "127.0.0.1"
  }
}

Severity levels range from "Fatal" (most severe) to "Debug" (least severe):

F
Fatal

E

Error

For more information on error logging, see Error Codes.

W
Warning

I
Informational, for verbosity level 0

D1 - D5

Debug, for verbosity levels > 0

MongoDB indicates the specific debug verbosity level. For example, if verbosity level is 2, MongoDB indicates D2.

In previous versions, MongoDB log messages specified D for all debug verbosity levels.

You can specify the verbosity level of various components to determine the amount of Informational and Debug messages MongoDB outputs. Severity categories above these levels are always shown. [2] To set verbosity levels, see Configure Log Verbosity Levels.
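
Because entries above Informational always appear, severity is a useful first filter when scanning a log. A short jq sketch (the log path is an assumption):

# Print only Warning, Error, and Fatal entries.
jq 'select(.s == "W" or .s == "E" or .s == "F")' /var/log/mongodb/mongod.log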

Components

The component field type indicates the category a logged event is a member of, such as NETWORK or COMMAND.

{
  "t": {
    "$date": "2020-05-01T15:16:17.180+00:00"
  },
  "s": "I",
  "c": "NETWORK",
  "id": 12345,
  "ctx": "listener",
  "svc": "R",
  "msg": "Listening on",
  "attr": {
    "address": "127.0.0.1"
  }
}

Each component is individually configurable via its own verbosity filter. The available components are as follows:

ACCESS
Messages related to access control, such as authentication. To specify the log level for ACCESS components, use the systemLog.component.accessControl.verbosity setting.
ASSERT
An assertion is triggered when an operation returns an error. The default verbosity level is 0. However, the verbosity setting must be at least 1 in order for operations that return errors to be included in the system logs. To specify the log level for ASSERT components, use the systemLog.component.assert.verbosity setting.
COMMAND
Messages related to database commands, such as count. To specify the log level for COMMAND components, use the systemLog.component.command.verbosity setting.
CONTROL
Messages related to control activities, such as initialization. To specify the log level for CONTROL components, use the systemLog.component.control.verbosity setting.
ELECTION

Messages related specifically to replica set elections. To specify the log level for ELECTION components, set the systemLog.component.replication.election.verbosity parameter.

REPL is the parent component of ELECTION. If systemLog.component.replication.election.verbosity is unset, MongoDB uses the REPL verbosity level for ELECTION components.

FTDC
Messages related to the diagnostic data collection mechanism, such as server statistics and status messages. To specify the log level for FTDC components, use the systemLog.component.ftdc.verbosity setting.
GEO
Messages related to the parsing of geospatial shapes, such as verifying the GeoJSON shapes. To specify the log level for GEO components, set the systemLog.component.geo.verbosity parameter.
INDEX
Messages related to indexing operations, such as creating indexes. To specify the log level for INDEX components, set the systemLog.component.index.verbosity parameter.
INITSYNC

Messages related to initial sync operations. To specify the log level for INITSYNC components, set the systemLog.component.replication.initialSync.verbosity parameter.

REPL is the parent component of INITSYNC. If systemLog.component.replication.initialSync.verbosity is unset, MongoDB uses the REPL verbosity level for INITSYNC components.

JOURNAL

Messages related specifically to storage journaling activities. To specify the log level for JOURNAL components, use the systemLog.component.storage.journal.verbosity setting.

STORAGE is the parent component of JOURNAL. If systemLog.component.storage.journal.verbosity is unset, MongoDB uses the STORAGE verbosity level for JOURNAL components.

NETWORK
Messages related to network activities, such as accepting connections. To specify the log level for NETWORK components, set the systemLog.component.network.verbosity parameter.
QUERY
Messages related to queries, including query planner activities. To specify the log level for QUERY components, set the systemLog.component.query.verbosity parameter.
QUERYSTATS
Messages related to $queryStats operations. To specify the log level for QUERYSTATS components, set the systemLog.component.queryStats.verbosity parameter.
RECOVERY

Messages related to storage recovery activities. To specify the log level for RECOVERY components, use the systemLog.component.storage.recovery.verbosity setting.

STORAGE is the parent component of RECOVERY. If systemLog.component.storage.recovery.verbosity is unset, MongoDB uses the STORAGE verbosity level for RECOVERY components.

REJECTED

New in version 8.0.

Messages related to rejected query operations.

To specify the log level for REJECTED component messages, set the systemLog.component.query.rejected.verbosity parameter.

MongoDB only logs the REJECTED component messages if the verbosity level is set to 2 or higher.

The parent component for REJECTED is QUERY.

REPL

Messages related to replica sets, such as initial sync, heartbeats, steady state replication, and rollback. [2] To specify the log level for REPL components, set the systemLog.component.replication.verbosity parameter.

REPL is the parent component of the ELECTION, INITSYNC, REPL_HB, and ROLLBACK components.

REPL_HB

Messages related specifically to replica set heartbeats. To specify the log level for REPL_HB components, set the systemLog.component.replication.heartbeats.verbosity parameter.

REPL is the parent component of REPL_HB. If systemLog.component.replication.heartbeats.verbosity is unset, MongoDB uses the REPL verbosity level for REPL_HB components.

ROLLBACK

Messages related to rollback operations. To specify the log level for ROLLBACK components, set the systemLog.component.replication.rollback.verbosity parameter.

REPL is the parent component of ROLLBACK. If systemLog.component.replication.rollback.verbosity is unset, MongoDB uses the REPL verbosity level for ROLLBACK components.

SHARDING
Messages related to sharding activities, such as the startup of the mongos. To specify the log level for SHARDING components, use the systemLog.component.sharding.verbosity setting.
STORAGE

Messages related to storage activities, such as processes involved in the fsync command. To specify the log level for STORAGE components, use the systemLog.component.storage.verbosity setting.

STORAGE is the parent component of JOURNAL and RECOVERY.

TXN
Messages related to multi-document transactions. To specify the log level for TXN components, use the systemLog.component.transaction.verbosity setting.
WRITE
Messages related to write operations, such as update commands. To specify the log level for WRITE components, use the systemLog.component.write.verbosity setting.
WT

New in version 5.3.

Messages related to the WiredTiger storage engine. To specify the log level for WT components, use the systemLog.component.storage.wt.verbosity setting.

WTBACKUP

New in version 5.3.

Messages related to backup operations performed by the WiredTiger storage engine. To specify the log level for the WTBACKUP components, use the systemLog.component.storage.wt.wtBackup.verbosity setting.

WTCHKPT

New in version 5.3.

Messages related to checkpoint operations performed by the WiredTiger storage engine. To specify the log level for WTCHKPT components, use the systemLog.component.storage.wt.wtCheckpoint.verbosity setting.

WTCMPCT

New in version 5.3.

Messages related to compaction operations performed by the WiredTiger storage engine. To specify the log level for WTCMPCT components, use the systemLog.component.storage.wt.wtCompact.verbosity setting.

WTEVICT

New in version 5.3.

Messages related to eviction operations performed by the WiredTiger storage engine. To specify the log level for WTEVICT components, use the systemLog.component.storage.wt.wtEviction.verbosity setting.

WTHS

New in version 5.3.

Messages related to the history store of the WiredTiger storage engine. To specify the log level for WTHS components, use the systemLog.component.storage.wt.wtHS.verbosity setting.

WTRECOV

New in version 5.3.

Messages related to recovery operations performed by the WiredTiger storage engine. To specify the log level for WTRECOV components, use the systemLog.component.storage.wt.wtRecovery.verbosity setting.

WTRTS

New in version 5.3.

Messages related to rollback to stable (RTS) operations performed by the WiredTiger storage engine. To specify the log level for WTRTS components, use the systemLog.component.storage.wt.wtRTS.verbosity setting.

WTSLVG

New in version 5.3.

Messages related to salvage operations performed by the WiredTiger storage engine. To specify the log level for WTSLVG components, use the systemLog.component.storage.wt.wtSalvage.verbosity setting.

WTTS

New in version 5.3.

Messages related to timestamps used by the WiredTiger storage engine. To specify the log level for WTTS components, use the systemLog.component.storage.wt.wtTimestamp.verbosity setting.

WTTXN

New in version 5.3.

Messages related to transactions performed by the WiredTiger storage engine. To specify the log level for WTTXN components, use the systemLog.component.storage.wt.wtTransaction.verbosity setting.

WTVRFY

New in version 5.3.

Messages related to verification operations performed by the WiredTiger storage engine. To specify the log level for WTVRFY components, use the systemLog.component.storage.wt.wtVerify.verbosity setting.

WTWRTLOG

New in version 5.3.

Messages related to log write operations performed by the WiredTiger storage engine. To specify the log level for WTWRTLOG components, use the systemLog.component.storage.wt.wtWriteLog.verbosity setting.

-
Messages not associated with a named component. Unnamed components have the default log level specified in the systemLog.verbosity setting. The systemLog.verbosity setting is the default setting for both named and unnamed components.

See Filtering by Component for log parsing examples that filter on the component field.

Client Data

MongoDB Drivers and client applications (including mongosh) have the ability to send identifying information at the time of connection to the server. After the connection is established, the client does not send the identifying information again unless the connection is dropped and reestablished.

This identifying information is contained in the attributes field of the log entry. The exact information included varies by client.

Below is a sample log message containing the client data document as transmitted from a mongosh connection. The client data is contained in the doc object in the attributes field:

{"t":{"$date":"2020-05-20T16:21:31.561+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn202", "svc": "R", "msg":"client metadata", "attr":{"remote":"127.0.0.1:37106","client":"conn202","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.0"},"os":{"type":"Linux","name":"CentOS Linux release 8.0.1905 (Core) ","architecture":"x86_64","version":"Kernel 4.18.0-80.11.2.el8_0.x86_64"}}}}

When secondary members of a replica set initiate a connection to a primary, they send similar data. A sample log message containing this initiation connection might appear as follows. The client data is contained in the doc object in the attributes field:

{"t":{"$date":"2020-05-20T16:33:40.595+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn214", "svc": "R", "msg":"client metadata", "attr":{"remote":"127.0.0.1:37176","client":"conn214","doc":{"driver":{"name":"NetworkInterfaceTL","version":"4.4.0"},"os":{"type":"Linux","name":"CentOS Linux release 8.0.1905 (Core) ","architecture":"x86_64","version":"Kernel 4.18.0-80.11.2.el8_0.x86_64"}}}}

See the examples section for a pretty-printed example showing client data.

For a complete description of client information and required fields, see the MongoDB Handshake specification.

Verbosity Levels

You can specify the logging verbosity level to increase or decrease the amount of log messages MongoDB outputs. Verbosity levels can be adjusted for all components together, or for specific named components individually.

Verbosity affects log entries in the severity categories Informational and Debug only. Severity categories above these levels are always shown.

You might set verbosity levels to a high value to show detailed logging for debugging or development, or to a low value to minimize writes to the log on a vetted production deployment. [2]

View Current Log Verbosity Level

To view the current verbosity levels, use the db.getLogComponents() method:

db.getLogComponents()

Your output might resemble the following:

{
  "verbosity" : 0,
  "accessControl" : {
    "verbosity" : -1
  },
  "command" : {
    "verbosity" : -1
  },
  ...
  "storage" : {
    "verbosity" : -1,
    "recovery" : {
      "verbosity" : -1
    },
    "journal" : {
      "verbosity" : -1
    }
  },
  ...

The initial verbosity entry is the parent verbosity level for all components, while the individual named components that follow, such as accessControl, indicate the specific verbosity level for that component, overriding the global verbosity level for that particular component if set.

A value of -1 indicates that the component inherits the verbosity level of its parent, if it has one (as with recovery above, which inherits from storage), or the global verbosity level if it does not (as with command). Inheritance relationships for verbosity levels are indicated in the components section.

Configure Log Verbosity Levels

You can configure the verbosity level using: the systemLog.verbosity and systemLog.component.<name>.verbosity settings, the logComponentVerbosity parameter, or the db.setLogLevel() method. [2]

systemLog Verbosity Settings

To configure the default log level for all components, use the systemLog.verbosity setting. To configure the level of specific components, use the systemLog.component.<name>.verbosity settings.

For example, the following configuration sets the systemLog.verbosity to 1, the systemLog.component.query.verbosity to 2, the systemLog.component.storage.verbosity to 2, and the systemLog.component.storage.journal.verbosity to 1:

systemLog:
  verbosity: 1
  component:
    query:
      verbosity: 2
    storage:
      verbosity: 2
      journal:
        verbosity: 1

You would set these values in the configuration file or on the command line for your mongod or mongos instance.

All components not specified explicitly in the configuration have a verbosity level of -1, indicating that they inherit the verbosity level of their parent, if they have one, or the global verbosity level (systemLog.verbosity) if they do not.

logComponentVerbosity Parameter

To set the logComponentVerbosity parameter, pass a document with the verbosity settings to change.

For example, the following sets the default verbosity level to 1, the query to 2, the storage to 2, and the storage.journal to 1.

db.adminCommand( {
  setParameter: 1,
  logComponentVerbosity: {
    verbosity: 1,
    query: {
      verbosity: 2
    },
    storage: {
      verbosity: 2,
      journal: {
        verbosity: 1
      }
    }
  }
} )

You would set these values from mongosh.

db.setLogLevel()

Use the db.setLogLevel() method to update a single component log level. For a component, you can specify verbosity level of 0 to 5, or you can specify -1 to inherit the verbosity of the parent. For example, the following sets the systemLog.component.query.verbosity to its parent verbosity (i.e. default verbosity):

db.setLogLevel(-1, "query")

You would set this value from mongosh.

[2] Secondary members of a replica set now log oplog entries that take longer than the slow operation threshold to apply. These slow oplog messages:
  • Are logged for the secondaries in the diagnostic log.
  • Are logged under the REPL component with the text applied op: <oplog entry> took <num>ms.
  • Do not depend on the log levels (either at the system or component level).
  • Do not depend on the profiling level.
  • Are affected by slowOpSampleRate.
The profiler does not capture slow oplog entries.

Logging Slow Operations

Client operations (such as queries) appear in the log if their duration exceeds the slow operation threshold or when the log verbosity level is 1 or higher. [2] These log entries include the full command object associated with the operation.

The profiler entries and the diagnostic log messages (i.e. mongod/mongos log messages) for read/write operations include:

  • planCacheShapeHash to help identify slow queries with the same plan cache query shape.

    Starting in MongoDB 8.0, the existing queryHash field is duplicated in a new field named planCacheShapeHash. If you're using an earlier MongoDB version, you'll only see the queryHash field. Future MongoDB versions will remove the deprecated queryHash field, and you'll need to use the planCacheShapeHash field instead.

  • planCacheKey to provide more insight into the query plan cache for slow queries.

Important

A single operation may log more than one entry. For example, if more than one write in a bulk write operation exceeds the slow operation threshold, each slow write is logged separately.
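
As a starting point for analysis, a jq sketch that groups slow-query entries by plan cache shape (the log path is an assumption; on versions before 8.0, substitute .attr.queryHash):

# Count slow queries per plan cache shape, most frequent first.
jq -r 'select(.msg == "Slow query") | .attr.planCacheShapeHash' /var/log/mongodb/mongod.log | sort | uniq -c | sort -rn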

Version-Specific Changes

The following table lists the changes to logging slow queries.

6.1
Slow operation log messages include cache refresh time fields.

6.2

Slow operation log messages include a queryFramework field that indicates which query engine executed the query:

  • queryFramework: "classic" indicates that the classic engine executed the query.
  • queryFramework: "sbe" indicates that the slot-based query execution engine executed the query.

6.3
Slow operation log messages and database profiler entries include a cpuNanos field that specifies the total CPU time spent by a query operation in nanoseconds. The cpuNanos field is only available on Linux systems.

7.0 (and 6.0.13, 5.0.24)

The totalOplogSlotDurationMicros field in the slow query log message shows the time between a write operation acquiring a commit timestamp for its storage engine writes and actually committing. mongod supports parallel writes. However, it commits write operations with commit timestamps in any order.

For example, consider the following writes with commit timestamps:

  • writeA with Timestamp1
  • writeB with Timestamp2
  • writeC with Timestamp3

Suppose writeB commits first at Timestamp2. Replication is paused until writeA commits because writeA's oplog entry with Timestamp1 is required for replication to copy the oplog to secondary replica set members.

8.0

The slow query output includes a queues document that contains information about the operation's queues. Each queue in the queues field contains a totalTimeQueuedMicros field that contains the total cumulative time, in microseconds, that the operation spent in the corresponding queue.

The queryShapeHash field for a query shape is also included in the slow query log when available.

If a command with a specific query shape is rejected, MongoDB logs a message that states the query command was rejected. The message contains the query namespace, queryShapeHash, and the command with the rejected query. MongoDB only logs the message if the log verbosity level is set to 2 or higher.

8.1

Slow query log messages contain new metrics if the query execution writes temporary files to disk. These metrics are prefixed by the query execution stage that caused the query to exceed the memory limit. For example, sortSpills indicates the number of times that the sort stage of query execution wrote temporary files to disk.

  • <executionPart>Spills indicates the number of times the corresponding query execution stage wrote temporary files to disk.
  • <executionPart>SpilledBytes indicates the size, in bytes, of the memory released by writing temporary files to disk.
  • <executionPart>SpilledDataStorageSize indicates the size, in bytes, of disk space used for temporary files.
  • <executionPart>SpilledRecords indicates the number of records written to temporary files on disk.

For more information on writing temporary files to disk, see allowDiskUse().

For a pretty-printed example of a slow operation log entry, see Log Message Examples.

Time Waiting for Shards Logged in remoteOpWaitMillis Field

New in version 5.0.

Starting in MongoDB 5.0, you can use the remoteOpWaitMillis log field to obtain the wait time (in milliseconds) for results from shards.

remoteOpWaitMillis is only logged in the mongos log, and only for slow queries that involve a merge operation.

To determine if a merge operation or a shard issue is causing a slow query, compare the workingMillis and remoteOpWaitMillis time fields in the log. workingMillis is the total time the query took to complete. Specifically:

  • If workingMillis is slightly longer than remoteOpWaitMillis, then most of the time was spent waiting for a shard response. For example, workingMillis of 17 and remoteOpWaitMillis of 15.
  • If workingMillis is significantly longer than remoteOpWaitMillis, then most of the time was spent performing the merge. For example, workingMillis of 100 and remoteOpWaitMillis of 15.
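
A jq sketch of this comparison against a mongos log (the log path is an assumption):

# For slow queries that waited on shards, show both timings side by side.
jq 'select(.attr.remoteOpWaitMillis != null)
    | {workingMillis: .attr.workingMillis, remoteOpWaitMillis: .attr.remoteOpWaitMillis}' /var/log/mongodb/mongos.log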

Log Redaction

Queryable Encryption Log Redaction

When using Queryable Encryption, CRUD operations against encrypted collections are omitted from the slow query log. For details, see Queryable Encryption redaction.

Enterprise Log Redaction

Available in MongoDB Enterprise only

A mongod or mongos running with redactClientLogData redacts any message accompanying a given log event before logging, leaving only metadata, source files, or line numbers related to the event. redactClientLogData prevents potentially sensitive information from entering the system log at the cost of diagnostic detail.

For example, the following operation inserts a document into a mongod running without log redaction. The mongod has the log verbosity level set to 1:

db.clients.insertOne( { "name" : "Joe", "PII" : "Sensitive Information" } )

This operation produces the following log event:

{
  "t": { "$date": "2024-07-19T15:36:55.024-07:00" },
  "s": "I",
  "c": "COMMAND",
  ...
  "attr": {
    "type": "command",
    ...
    "appName": "mongosh 2.2.10",
    "command": {
      "insert": "clients",
      "documents": [
        {
          "name": "Joe",
          "PII": "Sensitive Information",
          "_id": { "$oid": "669aea8792c7fd822d3e1d8c" }
        }
      ],
      "ordered": true,
      ...
    }
    ...
  }
}

When mongod runs with redactClientLogData and performs the same insert operation, it produces the following log event:

{
  "t": { "$date": "2024-07-19T15:36:55.024-07:00" },
  "s": "I",
  "c": "COMMAND",
  ...
  "attr": {
    "type": "command",
    ...
    "appName": "mongosh 2.2.10",
    "command": {
      "insert": "###",
      "documents": [
        {
          "name": "###",
          "PII": "###",
          "_id": "###"
        }
      ],
      "ordered": "###",
      ...
    }
    ...
  }
}

Use redactClientLogData in conjunction with Encryption at Rest and TLS/SSL (Transport Encryption) to assist compliance with regulatory requirements.
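
Redaction can be enabled at startup or, as in this sketch, toggled on a running Enterprise instance through setParameter (connection details are assumptions):

# Enable redaction of client log data at runtime (MongoDB Enterprise only).
mongosh --quiet --eval 'db.adminCommand( { setParameter: 1, redactClientLogData: true } )'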

Parsing Structured Log Messages

Log parsing is the act of programmatically searching through and analyzing log files, often in an automated manner. With the introduction of structured logging, log parsing is made simpler and more powerful. For example:

  • Log message fields are presented as key-value pairs. Log parsers can query by specific keys of interest to efficiently filter results.
  • Log messages always contain the same message structure. Log parsers can reliably extract information from any log message, without needing to code for cases where information is missing or formatted differently.

The following examples demonstrate common log parsing workflows when working with MongoDB JSON log output.

Log Parsing Examples

When working with MongoDB structured logging, you can use the third-party jq command-line utility for easy pretty-printing of log entries, and powerful key-based matching and filtering.

jq is an open-source JSON parser, and is available for Linux, Windows, and macOS.

These examples use jq to simplify log parsing.

Counting Unique Messages

The following example shows the top 10 unique message values in a given log file, sorted by frequency:

jq -r ".msg" /var/log/mongodb/mongod.log | sort | uniq -c | sort -rn | head -10

Monitoring Connections

Remote client connections are shown in the log under the "remote" key in the attribute object. The following counts all unique connections over the course of the log file and presents them in descending order by number of occurrences:

jq -r '.attr.remote' /var/log/mongodb/mongod.log | grep -v 'null' | sort | uniq -c | sort -r

Note that connections from the same IP address, but connecting over different ports, are treated as different connections by this command. You could limit output to consider IP addresses only, with the following change:

jq -r '.attr.remote' /var/log/mongodb/mongod.log | grep -v 'null' | awk -F':' '{print $1}' | sort | uniq -c | sort -r

Analyzing Driver Connections

The following example counts all remote MongoDB driver connections, and presents each driver type and version in descending order by number of occurrences:

jq -cr '.attr.doc.driver' /var/log/mongodb/mongod.log | grep -v null | sort | uniq -c | sort -rn

Analyzing Client Types

The following example analyzes the reported client data of remote MongoDB driver connections and client applications, including mongosh, and prints a total for each unique operating system type that connected, sorted by frequency:

jq -r '.attr.doc.os.type' /var/log/mongodb/mongod.log | grep -v null | sort | uniq -c | sort -rn

The string "Darwin", as reported in this log field, represents a macOS client.

Analyzing Slow Queries

With slow operation logging enabled, the following returns only the slow operations that took longer than 2000 milliseconds, for further analysis:

jq 'select(.attr.workingMillis>=2000)' /var/log/mongodb/mongod.log

Consult the jq documentation for more information on the jq filters shown in this example.

Filtering by Component

Log components (the third field in the JSON log output format) indicate the general category a given log message falls under. Filtering by component is often a great starting place when parsing log messages for relevant events.

The following example prints only the log messages of component type REPL:

jq 'select(.c=="REPL")' /var/log/mongodb/mongod.log

The following example prints all log messages except those of component type REPL:

jq 'select(.c!="REPL")' /var/log/mongodb/mongod.log

The following example prints log messages of component type REPL or STORAGE:

jq 'select( .c as $c | ["REPL", "STORAGE"] | index($c) )' /var/log/mongodb/mongod.log

Consult the jq documentation for more information on the jq filters shown in this example.

Filtering by Known Log ID

Log IDs (the fifth field in the JSON log output format) map to specific log events, and can be relied upon to remain stable over successive MongoDB releases.

As an example, you might be interested in the following two log events, showing a client connection followed by a disconnection:

{"t":{"$date":"2020-06-01T13:06:59.027-0500"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener", "svc": "R", "msg":"connection accepted from {session_remote} #{session_id} ({connectionCount}{word} now open)", "attr":{"session_remote":"127.0.0.1:61298", "session_id":164,"connectionCount":11,"word":" connections"}}
{"t":{"$date":"2020-06-01T13:07:03.490-0500"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn157", "svc": "R", "msg":"end connection {remote} ({connectionCount}{word} now open)", "attr":{"remote":"127.0.0.1:61298","connectionCount":10,"word":" connections"}}

The log IDs for these two entries are 22943 and 22944 respectively. You could then filter your log output to show only these log IDs, effectively showing only client connection activity, using the following jq syntax:

jq 'select( .id as $id | [22943, 22944] | index($id) )' /var/log/mongodb/mongod.log

Consult the jq documentation for more information on the jq filters shown in this example.

Filtering by Date Range

Log output can be further refined by filtering on the timestamp field, limiting log entries returned to a specific date range. For example, the following returns all log entries that occurred on April 15th, 2020:

jq 'select(.t["$date"] >= "2020-04-15T00:00:00.000" and .t["$date"] <= "2020-04-15T23:59:59.999")' /var/log/mongodb/mongod.log

Note that this syntax includes the full timestamp, including milliseconds but excluding the timezone offset.

Filtering by date range can be combined with any of the examples above, creating weekly reports or yearly summaries for example. The following syntax expands the "Monitoring Connections" example from earlier to limit results to the month of May, 2020:

jq 'select(.t["$date"] >= "2020-05-01T00:00:00.000" and .t["$date"] <= "2020-05-31T23:59:59.999" and .attr.remote)' /var/log/mongodb/mongod.log

Consult the jq documentation for more information on the jq filters shown in this example.

Log Ingestion Services

Log ingestion services are third-party products that intake and aggregate log files, usually from a distributed cluster of systems, and provide ongoing analysis of that data in a central location.

The JSON log format allows for more flexibility when working with log ingestion and analysis services. Whereas plaintext logs generally require some manner of transformation before being eligible for use with these products, JSON files can often be consumed out of the box, depending on the service. Further, JSON-formatted logs offer more control when performing filtering for these services, as the key-value structure offers the ability to specifically import only the fields of interest, while omitting the rest.

Consult the documentation for your chosen third-party log ingestion service for more information.

Log Message Examples

The following examples show log messages in JSON output format.

These log messages are presented in pretty-printed format for convenience.

Startup Warning

This example shows a startup warning:

{
  "t": {
    "$date": "2020-05-20T19:17:06.188+00:00"
  },
  "s": "W",
  "c": "CONTROL",
  "id": 22120,
  "ctx": "initandlisten",
  "svc": "R",
  "msg": "Access control is not enabled for the database. Read and write access to data and configuration is unrestricted",
  "tags": [
    "startupWarnings"
  ]
}

Client Connection

This example shows a client connection that includes client data:

{
  "t": {
    "$date": "2020-05-20T19:18:40.604+00:00"
  },
  "s": "I",
  "c": "NETWORK",
  "id": 51800,
  "ctx": "conn281",
  "svc": "R",
  "msg": "client metadata",
  "attr": {
    "remote": "192.168.14.15:37666",
    "client": "conn281",
    "doc": {
      "application": {
        "name": "MongoDB Shell"
      },
      "driver": {
        "name": "MongoDB Internal Client",
        "version": "4.4.0"
      },
      "os": {
        "type": "Linux",
        "name": "CentOS Linux release 8.0.1905 (Core) ",
        "architecture": "x86_64",
        "version": "Kernel 4.18.0-80.11.2.el8_0.x86_64"
      }
    }
  }
}

Slow Operation

Starting in MongoDB 8.0, slow operations are logged based on the time that MongoDB spends working on that operation, rather than the total latency for the operation.

You can use the metrics in the slow operation log to identify where an operation spends time in its lifecycle, which helps identify possible performance improvements.

In the following example log message:

  • The amount of time spent waiting for resources while executing the query is shown in these metrics:

    • queues.execution.totalTimeQueuedMicros
    • timeAcquiringMicros
  • workingMillis is the amount of time that MongoDB spends working on the operation.
  • durationMillis is the operation's total latency.
{
  "t": {
    "$date": "2024-06-01T13:24:10.034+00:00"
  },
  "s": "I",
  "c": "COMMAND",
  "id": 51803,
  "ctx": "conn3",
  "msg": "Slow query",
  "attr": {
    "type": "command",
    "isFromUserConnection": true,
    "ns": "db.coll",
    "collectionType": "normal",
    "appName": "MongoDB Shell",
    "command": {
      "find": "coll",
      "filter": {
        "b": -1
      },
      "sort": {
        "splitPoint": 1
      },
      "readConcern": { },
      "$db": "db"
    },
    "planSummary": "COLLSCAN",
    "planningTimeMicros": 87,
    "keysExamined": 0,
    "docsExamined": 20889,
    "hasSortStage": true,
    "nBatches": 1,
    "cursorExhausted": true,
    "numYields": 164,
    "nreturned": 99,
    "planCacheShapeHash": "9C05019A",
    "planCacheKey": "C41063D6",
    "queryFramework": "classic",
    "reslen": 96,
    "locks": {
      "ReplicationStateTransition": {
        "acquireCount": {
          "w": 3
        }
      },
      "Global": {
        "acquireCount": {
          "r": 327,
          "w": 1
        }
      },
      "Database": {
        "acquireCount": {
          "r": 1
        },
        "acquireWaitCount": {
          "r": 1
        },
        "timeAcquiringMicros": {
          "r": 2814
        }
      },
      "Collection": {
        "acquireCount": {
          "w": 1
        }
      }
    },
    "flowControl": {
      "acquireCount": 1,
      "acquireWaitCount": 1,
      "timeAcquiringMicros": 8387
    },
    "readConcern": {
      "level": "local",
      "provenance": "implicitDefault"
    },
    "storage": { },
    "cpuNanos": 20987385,
    "remote": "127.0.0.1:47150",
    "protocol": "op_msg",
    "queues": {
      "ingress": {
        "admissions": 7,
        "totalTimeQueuedMicros": 0
      },
      "execution": {
        "admissions": 328,
        "totalTimeQueuedMicros": 2109
      }
    },
    "workingMillis": 89,
    "durationMillis": 101
  }
}

Starting in MongoDB 8.0, the existing queryHash field is duplicated in a new field named planCacheShapeHash. If you're using an earlier MongoDB version, you'll only see the queryHash field. Future MongoDB versions will remove the deprecated queryHash field, and you'll need to use the planCacheShapeHash field instead.

Escaping

This example demonstrates character escaping, as shown in the setName field of the attribute object:

{
  "t": {
    "$date": "2020-05-20T19:11:09.268+00:00"
  },
  "s": "I",
  "c": "REPL",
  "id": 21752,
  "ctx": "ReplCoord-0",
  "svc": "R",
  "msg": "Scheduling remote command request",
  "attr": {
    "context": "vote request",
    "request": "RemoteCommand 229 -- target:localhost:27003 db:admin cmd:{ replSetRequestVotes: 1, setName: \"my-replica-name\", dryRun: true, term: 3, candidateIndex: 0, configVersion: 2, configTerm: 3, lastAppliedOpTime: { ts: Timestamp(1589915409, 1), t: 3 } }"
  }
}

View

Starting in MongoDB 5.0, log messages for slow queries on views include a resolvedViews field that contains the view details:

"resolvedViews": [ {
"viewNamespace": <String>, // namespace and view name
"dependencyChain": <Array of strings>, // view name and collection
"resolvedPipeline": <Array of documents> // aggregation pipeline for view
} ]

The following example uses the test database and creates a view named myView that sorts the documents in myCollection by the firstName field:

use test
db.createView( "myView", "myCollection", [ { $sort: { "firstName" : 1 } } ] )

Assume a slow query is run on myView. The following example log message contains a resolvedViews field for myView:

{
  "t": {
    "$date": "2021-09-30T17:53:54.646+00:00"
  },
  "s": "I",
  "c": "COMMAND",
  "id": 51803,
  "ctx": "conn249",
  "svc": "R",
  "msg": "Slow query",
  "attr": {
    "type": "command",
    "ns": "test.myView",
    "appName": "MongoDB Shell",
    "command": {
      "find": "myView",
      "filter": {},
      "lsid": {
        "id": { "$uuid": "ad176471-60e5-4e82-b977-156a9970d30f" }
      },
      "$db": "test"
    },
    "planSummary": "COLLSCAN",
    "resolvedViews": [ {
      "viewNamespace": "test.myView",
      "dependencyChain": [ "myView", "myCollection" ],
      "resolvedPipeline": [ { "$sort": { "firstName": 1 } } ]
    } ],
    "keysExamined": 0,
    "docsExamined": 1,
    "hasSortStage": true,
    "cursorExhausted": true,
    "numYields": 0,
    "nreturned": 1,
    "planCacheShapeHash": "3344645B",
    "planCacheKey": "1D3DE690",
    "queryFramework": "classic",
    "reslen": 134,
    "locks": {
      "ParallelBatchWriterMode": { "acquireCount": { "r": 1 } },
      "ReplicationStateTransition": { "acquireCount": { "w": 1 } },
      "Global": { "acquireCount": { "r": 4 } },
      "Database": { "acquireCount": { "r": 1 } },
      "Collection": { "acquireCount": { "r": 1 } },
      "Mutex": { "acquireCount": { "r": 4 } }
    },
    "storage": {},
    "remote": "127.0.0.1:34868",
    "protocol": "op_msg",
    "workingMillis": 0,
    "durationMillis": 0
  }
}

Starting in MongoDB 8.0, the existing queryHash field is duplicated in a new field named planCacheShapeHash. If you're using an earlier MongoDB version, you'll only see the queryHash field. Future MongoDB versions will remove the deprecated queryHash field, and you'll need to use the planCacheShapeHash field instead.

Authorization

Starting in MongoDB 5.0, log messages for slow queries include a system.profile.authorization section. These metrics help determine if a request is delayed because of contention for the user authorization cache.

"authorization": {
"startedUserCacheAcquisitionAttempts": 1,
"completedUserCacheAcquisitionAttempts": 1,
"userCacheWaitTimeMicros": 508
},

Session Workflow Log Message

Starting in MongoDB 6.3, a message is added to the log if the time to send an operation response exceeds the slowms threshold option.

The message is known as a session workflow log message and contains various times to perform an operation in a database session.

Example session workflow log message:

{
  "t": {
    "$date": "2022-12-14T17:22:44.233+00:00"
  },
  "s": "I",
  "c": "EXECUTOR",
  "id": 6983000,
  "ctx": "conn1",
  "svc": "R",
  "msg": "Slow network response send time",
  "attr": {
    "elapsed": {
      "totalMillis": 109,
      "activeMillis": 30,
      "receiveWorkMillis": 2,
      "processWorkMillis": 10,
      "sendResponseMillis": 22,
      "yieldMillis": 15,
      "finalizeMillis": 30
    }
  }
}

The times are in milliseconds.

A session workflow message is added to the log if sendResponseMillis exceeds the slowms threshold option.

totalMillis
Total time to perform the operation in the session, which includes the time spent waiting for a message to be received.

activeMillis
Time between receiving a message and completing the operation associated with that message. Time includes sending a response and performing any clean up.

receiveWorkMillis
Time to receive the operation information over the network.

processWorkMillis
Time to process the operation and create the response.

sendResponseMillis
Time to send the response.

yieldMillis
Time between releasing the worker thread and the thread being used again.

finalizeMillis
Time to end and close the session workflow.
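
To extract these timings from a log file, a short jq sketch keyed on the session workflow log ID shown above (the log path is an assumption):

# Show the elapsed-time breakdown for each slow network response message.
jq 'select(.id == 6983000) | .attr.elapsed' /var/log/mongodb/mongod.log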

Connection Acquisition To Wire Log Message

Starting in MongoDB 6.3, a message is added to the log if the time that an operation waited between acquisition of a server connection and writing the bytes to send to the server over the network exceeds 1 millisecond.

By default, the message is logged at the "I" information level, and at most once every second to avoid too many log messages. If you must obtain every log message, change your log level to debug.

If the operation wait time exceeds 1 millisecond and the message is logged at the information level within the last second, then the next message is logged at the debug level. Otherwise, the next message is logged at the information level.

Example log message:

{
  "t": {
    "$date": "2023-01-31T15:22:29.473+00:00"
  },
  "s": "I",
  "c": "NETWORK",
  "id": 6496702,
  "ctx": "ReplicaSetMonitor-TaskExecutor",
  "svc": "R",
  "msg": "Acquired connection for remote operation and completed writing to wire",
  "attr": {
    "durationMicros": 1683
  }
}

The following table describes the durationMicros field in attr.

durationMicros
Time in microseconds that the operation waited between acquisition of a server connection and writing the bytes to send to the server over the network.

Cache Refresh Times

Note

Cache refresh log fields are specific to sharded clusters and only appear in logs generated by the mongos router. They are not available in unsharded replica sets or standalone deployments.

Starting in MongoDB 6.1, log messages for slow queries include the following cache refresh time fields:

  • catalogCacheDatabaseLookupDurationMillis
  • catalogCacheCollectionLookupDurationMillis
  • databaseVersionRefreshDurationMillis
  • shardVersionRefreshMillis

Starting in MongoDB 7.0, log messages for slow queries also include the catalogCacheIndexLookupDurationMillis field that indicates the time that the operation spent fetching information from the index cache. This release also renames the shardVersionRefreshMillis field to placementVersionRefreshMillis.

The following example includes:

  • catalogCacheDatabaseLookupDurationMillis
  • catalogCacheCollectionLookupDurationMillis
  • catalogCacheIndexLookupDurationMillis
{
  "t": {
    "$date": "2023-03-17T09:47:55.929+00:00"
  },
  "s": "I",
  "c": "COMMAND",
  "id": 51803,
  "ctx": "conn14",
  "svc": "R",
  "msg": "Slow query",
  "attr": {
    "type": "command",
    "ns": "db.coll",
    "appName": "MongoDB Shell",
    "command": {
      "insert": "coll",
      "ordered": true,
      "lsid": {
        "id": {
          "$uuid": "5d50b19c-8559-420a-a122-8834e012274a"
        }
      },
      "$clusterTime": {
        "clusterTime": {
          "$timestamp": {
            "t": 1679046398,
            "i": 8
          }
        },
        "signature": {
          "hash": {
            "$binary": {
              "base64": "AAAAAAAAAAAAAAAAAAAAAAAAAAA=",
              "subType": "0"
            }
          },
          "keyId": 0
        }
      },
      "$db": "db"
    },
    "catalogCacheDatabaseLookupDurationMillis": 19,
    "catalogCacheCollectionLookupDurationMillis": 68,
    "catalogCacheIndexLookupDurationMillis": 16026,
    "nShards": 1,
    "ninserted": 1,
    "numYields": 232,
    "reslen": 96,
    "readConcern": {
      "level": "local",
      "provenance": "implicitDefault"
    },
    "cpuNanos": 29640339,
    "remote": "127.0.0.1:48510",
    "protocol": "op_msg",
    "remoteOpWaitMillis": 4078,
    "workingMillis": 20334,
    "durationMillis": 20334
  }
}

Linux Syslog Limitations

In a Linux system, messages are subject to the rules defined in the Linux configuration file /etc/systemd/journald.conf. By default, log message bursts are limited to 1000 messages within a 30 second period. To see more messages, increase the RateLimitBurst parameter in /etc/systemd/journald.conf.
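
A hedged sketch of that change (the value is illustrative; pick a burst limit appropriate for your environment):

# In /etc/systemd/journald.conf, raise the burst limit, for example:
#   [Journal]
#   RateLimitBurst=10000
# Then restart the journal daemon to apply the change:
sudo systemctl restart systemd-journald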

Download Your Logs

You can use MongoDB Atlas to download a zipped file containing the logs for a selected hostname or process in your database deployment. To learn more, see View and Download MongoDB Logs.