Log Messages

Overview

As part of normal operation, MongoDB maintains a running log of events, including entries such as incoming connections, commands run, and issues encountered. Generally, log messages are useful for diagnosing issues, monitoring your deployment, and tuning performance.

Structured Logging

Starting in MongoDB 4.4, mongod / mongos instances output all log messages in structured JSON format. Log entries are written as a series of key-value pairs, where each key indicates a log message field type, such as "severity", and each corresponding value records the associated logging information for that field type, such as "informational". Previously, log entries were output as plaintext.

Example

The following is an example log message in JSON format as it would appear in the MongoDB log file:

{"t":{"$date":"2020-05-01T15:16:17.180+00:00"},"s":"I", "c":"NETWORK", "id":12345, "ctx":"listener", "msg":"Listening on","attr":{"address":"127.0.0.1"}}

JSON log entries can be pretty-printed for readability. Here is the same log entry pretty-printed:

{
  "t": {
    "$date": "2020-05-01T15:16:17.180+00:00"
  },
  "s": "I",
  "c": "NETWORK",
  "id": 12345,
  "ctx": "listener",
  "msg": "Listening on",
  "attr": {
    "address": "127.0.0.1"
  }
}

In this log entry, for example, the key s, representing severity, has a corresponding value of I, representing "Informational", and the key c, representing component, has a corresponding value of NETWORK, indicating that the "network" component was responsible for this particular message. The various field types are presented in detail in the Log Message Field Types section.

Structured logging with key-value pairs allows for efficient parsing by automated tools or log ingestion services, and makes programmatic search and analysis of log messages easier to perform. Examples of analyzing structured log messages can be found in the Parsing Structured Log Messages section.

JSON Log Output Format

With MongoDB 4.4, all log output is now in JSON format. This includes log output sent to the file, syslog, and stdout (standard out) log destinations, as well as the output of the getLog command.

Each log entry is output as a self-contained JSON object which follows the Relaxed Extended JSON v2.0 specification, and has the following layout and field order:

{
  "t": <Datetime>, // timestamp
  "s": <String>, // severity
  "c": <String>, // component
  "ctx": <String>, // context
  "id": <Integer>, // unique identifier
  "msg": <String>, // message body
  "attr": <Object> // additional attributes (optional)
  "tags": <Array of strings> // tags (optional)
  "truncated": <Object> // truncation info (if truncated)
  "size": <Integer> // original size of entry (if truncated)
}
  • Timestamp - Timestamp of the log message, in ISO-8601 format. See Timestamp.
  • Severity - String representing the short severity code of the log message. See Severity.
  • Component - String representing the full component string of the log message. See Components.
  • Context - String representing the name of the thread issuing the log statement.
  • id - Integer representing the unique identifier of the log statement. See Filtering by Known Log ID for an example.
  • Message - String representing the raw log output message as passed from the server or driver. This message is escaped as needed according to the JSON specification.
  • Attributes - (optional) Object containing one or more key-value pairs for any additional attributes provided. If a log message does not include any additional attributes, this object is omitted. Attribute values may be referenced by their key name in the message body, depending on the message. Like message, attributes are escaped as needed according to the JSON specification.
  • Tags - (optional) Array of strings representing any tags applicable to the log statement, for example: ["startupWarnings"].
  • Truncated - (if truncated) Object containing information regarding log message truncation, if applicable. This object will only be present if the log entry contains at least one attribute that was truncated.
  • Size - (if truncated) Integer representing the original size of a log entry if it has been truncated. This field will only be present if the log entry contains at least one attribute that was truncated.
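
Because every entry shares this layout, the core fields can be pulled out with a single jq projection. The following is a minimal sketch, assuming a log file at /var/log/mongodb/mongod.log (adjust the path for your deployment):

# Print timestamp, severity, component, id, and message as tab-separated columns
jq -r '[.t["$date"], .s, .c, (.id | tostring), .msg] | @tsv' /var/log/mongodb/mongod.log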

Escaping

The message and attributes fields will escape control characters as necessary according to the Relaxed Extended JSON v2.0 specification:

Character Represented        Escape Sequence
Quotation Mark (")           \"
Backslash (\)                \\
Backspace (0x08)             \b
Formfeed (0x0C)              \f
Newline (0x0A)               \n
Carriage return (0x0D)       \r
Horizontal tab (0x09)        \t

Control characters not listed above are escaped with \uXXXX where "XXXX" is the unicode codepoint in hexadecimal. Bytes with invalid UTF-8 encoding are replaced with the unicode replacement character represented by \ufffd.

An example of message escaping is provided in the examples section.

Truncation

Any attributes that exceed the maximum size defined with maxLogSizeKB (default: 10 KB) are truncated. Truncated attributes omit log data beyond the configured limit, but retain the JSON formatting of the entry to ensure that the entry remains parsable.
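
The truncation threshold itself is controlled by the maxLogSizeKB server parameter. The following is a hedged sketch of raising it to 20 KB; the value is illustrative only, and whether the parameter can be changed at runtime may depend on your MongoDB version:

# Raise the attribute truncation threshold to 20 KB at startup
mongod --config /etc/mongod.conf --setParameter maxLogSizeKB=20

# Attempt the same change on a running instance (verify runtime support on your version)
mongosh --quiet --eval 'db.adminCommand( { setParameter: 1, maxLogSizeKB: 20 } )'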

Here is an example of a log entry with a truncated attribute:

{"t":{"$date":"2020-05-19T18:12:05.702+00:00"},"s":"I",  "c":"SHARDING", "id":22104,   "ctx":"conn33",
"msg":"Received splitChunk request","attr":{"request":{"splitChunk":"config.system.sessions",
"from":"production-shard1","keyPattern":{"_id":1},"epoch":{"$oid":"5ec42172996456771753a59e"},
"shardVersion":[{"$timestamp":{"t":1,"i":0}},{"$oid":"5ec42172996456771753a59e"}],"min":{"_id":{"$minKey":1}},
"max":{"_id":{"$maxKey":1}},"splitKeys":[{"_id":{"id":{"$uuid":"00400000-0000-0000-0000-000000000000"}}},
{"_id":{"id":{"$uuid":"00800000-0000-0000-0000-000000000000"}}},
...
{"_id":{"id":{"$uuid":"26c00000-0000-0000-0000-000000000000"}}},{"_id":{}}]}},
"truncated":{"request":{"splitKeys":{"155":{"_id":{"id":{"type":"binData","size":21}}}}}},
"size":{"request":46328}}

In this case, the request attribute has been truncated and the specific instance of its subfield _id that triggered truncation (i.e. caused the attribute to overrun maxLogSizeKB) is printed without data as {"_id":{}}. The remainder of the request attribute is then omitted.

Log entries containing one or more truncated attributes include a truncated object which provides the following information for each truncated attribute in the log entry:

  • the attribute that was truncated
  • the specific subobject of that attribute that triggered truncation, if applicable
  • the data type of the truncated field
  • the size of the truncated field

Log entries with truncated attributes may also include an additional size field at the end of the entry which indicates the original size of the attribute before truncation, in this case 46328, or about 46 KB. This final size field is only shown if it is different from the size field in the truncated object, i.e. if the total object size of the attribute is different from the size of the truncated subobject, as is the case in the example above.
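
To locate entries that were truncated, the truncated field can itself be used as a filter. A minimal jq sketch (log path assumed; the size field may be absent for some truncated entries, as described above):

# List truncated log entries with their timestamp, id, and original size (if reported)
jq -c 'select(.truncated != null) | {t: (.t["$date"]), id: .id, size: .size}' /var/log/mongodb/mongod.log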

Padding

When output to the file or the syslog log destinations, padding is added after the severity, context, and id fields to increase readability when viewed with a fixed-width font.

The following MongoDB log file excerpt demonstrates this padding:

{"t":{"$date":"2020-05-18T20:18:12.724+00:00"},"s":"I",  "c":"CONTROL",  "id":23285,   "ctx":"main","msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2020-05-18T20:18:12.734+00:00"},"s":"W",  "c":"ASIO",
     "id":22601,   "ctx":"main","msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2020-05-18T20:18:12.734+00:00"},"s":"I",  "c":"NETWORK",  "id":4648601, "ctx":"main","msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2020-05-18T20:18:12.814+00:00"},"s":"I",  "c":"STORAGE",  "id":4615611, "ctx":"initandlisten","msg":"MongoDB starting","attr":{"pid":10111,"port":27001,"dbPath":"/var/lib/mongo","architecture":"64-bit","host":"centos8"}}
{"t":{"$date":"2020-05-18T20:18:12.814+00:00"},"s":"I",  "c":"CONTROL",  "id":23403,   "ctx":"initandlisten","msg":"Build Info","attr":{"buildInfo":{"version":"4.4.0","gitVersion":"328c35e4b883540675fb4b626c53a08f74e43cf0","openSSLVersion":"OpenSSL 1.1.1c FIPS  28 May 2019","modules":[],"allocator":"tcmalloc","environment":{"distmod":"rhel80","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2020-05-18T20:18:12.814+00:00"},"s":"I",  "c":"CONTROL",  "id":51765,   "ctx":"initandlisten","msg":"Operating System","attr":{"os":{"name":"CentOS Linux release 8.0.1905 (Core) ","version":"Kernel 4.18.0-80.11.2.el8_0.x86_64"}}}

Pretty Printing

When working with MongoDB structured logging, the third-party jq command-line utility is a useful tool that allows for easy pretty-printing of log entries, and powerful key-based matching and filtering.

jq is an open-source JSON parser, and is available for Linux, Windows, and macOS.

You can use jq to pretty-print log entries as follows:

  • Pretty-print the entire log file:

    cat mongod.log | jq
  • Pretty-print the most recent log entry:

    cat mongod.log | tail -1 | jq

More examples of working with MongoDB structured logs are available in the Parsing Structured Log Messages section.

Configuring Log Message Destinations

MongoDB log messages can be output to file, syslog, or stdout (standard output).

To configure the log output destination, use one of the following settings, either in the configuration file or on the command-line:

Configuration file: the systemLog.destination setting (together with systemLog.path when logging to a file)
Command-line: the --logpath option (log to a file) or the --syslog option (log to the syslog daemon)

Not specifying either file or syslog sends all logging output to stdout.
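
For example, a minimal command-line sketch of each destination; the paths are placeholders, and the corresponding configuration file settings are systemLog.destination and systemLog.path:

# Log to a file
mongod --dbpath /var/lib/mongodb --logpath /var/log/mongodb/mongod.log --logappend

# Log to the syslog daemon instead
mongod --dbpath /var/lib/mongodb --syslog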

For the full list of logging settings and options see:

Configuration file: the systemLog configuration file options
Command-line: the mongod and mongos logging options
Note

Error messages sent to stderr (standard error), such as fatal errors during startup when not using the file or syslog log destinations, or messages having to do with misconfigured logging settings, are not affected by the log output destination setting, and are printed to stderr in plaintext format.

Log Message Field Types

Timestamp

The timestamp field type indicates the precise date and time at which the logged event occurred.

{
  "t": {
    "$date": "2020-05-01T15:16:17.180+00:00"
  },
  "s": "I",
  "c": "NETWORK",
  "id": 12345,
  "ctx": "listener",
  "msg": "Listening on",
  "attr": {
    "address": "127.0.0.1"
  }
}

When logging to file or to syslog [1], the default format for the timestamp is iso8601-local. To modify the timestamp format, use the --timeStampFormat runtime option or the systemLog.timeStampFormat setting.

See Filtering by Date Range for log parsing examples that filter on the timestamp field.
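
For example, a minimal sketch that starts mongod with UTC timestamps instead of the local-time default (paths are placeholders):

# Emit log timestamps in iso8601-utc rather than the default iso8601-local
mongod --dbpath /var/lib/mongodb --logpath /var/log/mongodb/mongod.log --timeStampFormat iso8601-utc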

Note

Starting in MongoDB 4.4, the ctime timestamp format is no longer supported.

[1] If logging to syslog, the syslog daemon generates timestamps when it logs a message, not when MongoDB issues the message. This can lead to misleading timestamps for log entries, especially when the system is under heavy load.

Severity

The severity field type indicates the severity level associated with the logged event.

{
  "t": {
    "$date": "2020-05-01T15:16:17.180+00:00"
  },
  "s": "I",
  "c": "NETWORK",
  "id": 12345,
  "ctx": "listener",
  "msg": "Listening on",
  "attr": {
    "address": "127.0.0.1"
  }
}

Severity levels range from "Fatal" (most severe) to "Debug" (least severe):

Level     Description
F         Fatal
E         Error
W         Warning
I         Informational, for verbosity level 0
D1 - D5   Debug, for verbosity levels > 0

Starting in version 4.2, MongoDB indicates the specific debug verbosity level. For example, if the verbosity level is 2, MongoDB indicates D2.

In previous versions, MongoDB log messages specified D for all debug verbosity levels.

You can specify the verbosity level of various components to determine the amount of Informational and Debug messages MongoDB outputs. Severity categories above these levels are always shown. [2] To set verbosity levels, see Configure Log Verbosity Levels.
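
Because the severity code is recorded in the s field, parsing tools can filter on it directly. For instance, a minimal jq sketch that keeps only warning, error, and fatal entries (log path assumed):

# Show only log entries with severity W (Warning), E (Error), or F (Fatal)
jq -c 'select(.s == "W" or .s == "E" or .s == "F")' /var/log/mongodb/mongod.log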

Components

The component field type indicates the category a logged event is a member of, such as NETWORK or COMMAND.

{
  "t": {
    "$date": "2020-05-01T15:16:17.180+00:00"
  },
  "s": "I",
  "c": "NETWORK",
  "id": 12345,
  "ctx": "listener",
  "msg": "Listening on",
  "attr": {
    "address": "127.0.0.1"
  }
}

Each component is individually configurable via its own verbosity filter. The available components are as follows:

ACCESS

Messages related to access control, such as authentication. To specify the log level for ACCESS components, use the systemLog.component.accessControl.verbosity setting.

COMMAND

Messages related to database commands, such as count. To specify the log level for COMMAND components, use the systemLog.component.command.verbosity setting.

CONTROL

Messages related to control activities, such as initialization. To specify the log level for CONTROL components, use the systemLog.component.control.verbosity setting.

ELECTION

Messages related specifically to replica set elections. To specify the log level for ELECTION components, set the systemLog.component.replication.election.verbosity parameter.

REPL is the parent component of ELECTION. If systemLog.component.replication.election.verbosity is unset, MongoDB uses the REPL verbosity level for ELECTION components.

FTDC

Messages related to the diagnostic data collection mechanism, such as server statistics and status messages. To specify the log level for FTDC components, use the systemLog.component.ftdc.verbosity setting.

GEO

Messages related to the parsing of geospatial shapes, such as verifying the GeoJSON shapes. To specify the log level for GEO components, set the systemLog.component.geo.verbosity parameter.

INDEX

Messages related to indexing operations, such as creating indexes. To specify the log level for INDEX components, set the systemLog.component.index.verbosity parameter.

INITSYNC

Messages related to initial sync operation. To specify the log level for INITSYNC components, set the systemLog.component.replication.initialSync.verbosity parameter.

REPL is the parent component of INITSYNC. If systemLog.component.replication.initialSync.verbosity is unset, MongoDB uses the REPL verbosity level for INITSYNC components.

JOURNAL

Messages related specifically to storage journaling activities. To specify the log level for JOURNAL components, use the systemLog.component.storage.journal.verbosity setting.

STORAGE is the parent component of JOURNAL. If systemLog.component.storage.journal.verbosity is unset, MongoDB uses the STORAGE verbosity level for JOURNAL components.

NETWORK

Messages related to network activities, such as accepting connections. To specify the log level for NETWORK components, set the systemLog.component.network.verbosity parameter.

QUERY

Messages related to queries, including query planner activities. To specify the log level for QUERY components, set the systemLog.component.query.verbosity parameter.

RECOVERY

Messages related to storage recovery activities. To specify the log level for RECOVERY components, use the systemLog.component.storage.recovery.verbosity setting.

STORAGE is the parent component of RECOVERY. If systemLog.component.storage.recovery.verbosity is unset, MongoDB uses the STORAGE verbosity level for RECOVERY components.

REPL

Messages related to replica sets, such as initial sync, heartbeats, steady state replication, and rollback. [2] To specify the log level for REPL components, set the systemLog.component.replication.verbosity parameter.

REPL is the parent component of the ELECTION, INITSYNC, REPL_HB, and ROLLBACK components.

REPL_HB

Messages related specifically to replica set heartbeats. To specify the log level for REPL_HB components, set the systemLog.component.replication.heartbeats.verbosity parameter.

REPL is the parent component of REPL_HB. If systemLog.component.replication.heartbeats.verbosity is unset, MongoDB uses the REPL verbosity level for REPL_HB components.

ROLLBACK

Messages related to rollback operations. To specify the log level for ROLLBACK components, set the systemLog.component.replication.rollback.verbosity parameter.

REPL is the parent component of ROLLBACK. If systemLog.component.replication.rollback.verbosity is unset, MongoDB uses the REPL verbosity level for ROLLBACK components.

SHARDING

Messages related to sharding activities, such as the startup of the mongos. To specify the log level for SHARDING components, use the systemLog.component.sharding.verbosity setting.

STORAGE

Messages related to storage activities, such as processes involved in the fsync command. To specify the log level for STORAGE components, use the systemLog.component.storage.verbosity setting.

STORAGE is the parent component of JOURNAL and RECOVERY.

TXN

New in version 4.0.2.

Messages related to multi-document transactions. To specify the log level for TXN components, use the systemLog.component.transaction.verbosity setting.

WRITE

Messages related to write operations, such as update commands. To specify the log level for WRITE components, use the systemLog.component.write.verbosity setting.

WT

New in version 5.3.

Messages related to the WiredTiger storage engine. To specify the log level for WT components, use the systemLog.component.storage.wt.verbosity setting.

WTBACKUP

New in version 5.3.

Messages related to backup operations performed by the WiredTiger storage engine. To specify the log level for the WTBACKUP components, use the systemLog.component.storage.wt.wtBackup.verbosity setting.

WTCHKPT

New in version 5.3.

Messages related to checkpoint operations performed by the WiredTiger storage engine. To specify the log level for WTCHKPT components, use the systemLog.component.storage.wt.wtCheckpoint.verbosity setting.

WTCMPCT

New in version 5.3.

Messages related to compaction operations performed by the WiredTiger storage engine. To specify the log level for WTCMPCT components, use the systemLog.component.storage.wt.wtCompact.verbosity setting.

WTEVICT

New in version 5.3.

Messages related to eviction operations performed by the WiredTiger storage engine. To specify the log level for WTEVICT components, use the systemLog.component.storage.wt.wtEviction.verbosity setting.

WTHS

New in version 5.3.

Messages related to the history store of the WiredTiger storage engine. To specify the log level for WTHS components, use the systemLog.component.storage.wt.wtHS.verbosity setting.

WTRECOV

New in version 5.3.

Messages related to recovery operations performed by the WiredTiger storage engine. To specify the log level for WTRECOV components, use the systemLog.component.storage.wt.wtRecovery.verbosity setting.

WTRTS

New in version 5.3.

Messages related to rollback to stable (RTS) operations performed by the WiredTiger storage engine. To specify the log level for WTRTS components, use the systemLog.component.storage.wt.wtRTS.verbosity setting.

WTSLVG

New in version 5.3.

Messages related to salvage operations performed by the WiredTiger storage engine. To specify the log level for WTSLVG components, use the systemLog.component.storage.wt.wtSalvage.verbosity setting.

WTTIER

New in version 5.3.

Messages related to tiered storage operations performed by the WiredTiger storage engine. To specify the log level for WTTIER components, use the systemLog.component.storage.wt.wtTiered.verbosity setting.

WTTS

New in version 5.3.

Messages related to timestamps used by the WiredTiger storage engine. To specify the log level for WTTS components, use the systemLog.component.storage.wt.wtTimestamp.verbosity setting.

WTTXN

New in version 5.3.

Messages related to transactions performed by the WiredTiger storage engine. To specify the log level for WTTXN components, use the systemLog.component.storage.wt.wtTransaction.verbosity setting.

WTVRFY

New in version 5.3.

Messages related to verification operations performed by the WiredTiger storage engine. To specify the log level for WTVRFY components, use the systemLog.component.storage.wt.wtVerify.verbosity setting.

WTWRTLOG

New in version 5.3.

Messages related to log write operations performed by the WiredTiger storage engine. To specify the log level for WTWRTLOG components, use the systemLog.component.storage.wt.wtWriteLog.verbosity setting.

-

Messages not associated with a named component. Unnamed components have the default log level specified in the systemLog.verbosity setting. The systemLog.verbosity setting is the default setting for both named and unnamed components.

See Filtering by Component for log parsing examples that filter on the component field.

Client Data

MongoDB drivers and client applications (including mongosh) have the ability to send identifying information at the time of connection to the server. After the connection is established, the client does not send the identifying information again unless the connection is dropped and reestablished.

This identifying information is contained in the attributes field of the log entry. The exact information included varies by client.

Below is a sample log message containing the client data document as transmitted from a mongosh connection. The client data is contained in the doc object in the attributes field:

{"t":{"$date":"2020-05-20T16:21:31.561+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn202","msg":"client metadata","attr":{"remote":"127.0.0.1:37106","client":"conn202","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.0"},"os":{"type":"Linux","name":"CentOS Linux release 8.0.1905 (Core) ","architecture":"x86_64","version":"Kernel 4.18.0-80.11.2.el8_0.x86_64"}}}}

When secondary members of a replica set initiate a connection to a primary, they send similar data. A sample log message containing this connection initiation might appear as follows. The client data is contained in the doc object in the attributes field:

{"t":{"$date":"2020-05-20T16:33:40.595+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn214","msg":"client metadata","attr":{"remote":"127.0.0.1:37176","client":"conn214","doc":{"driver":{"name":"NetworkInterfaceTL","version":"4.4.0"},"os":{"type":"Linux","name":"CentOS Linux release 8.0.1905 (Core) ","architecture":"x86_64","version":"Kernel 4.18.0-80.11.2.el8_0.x86_64"}}}}

See the examples section for a pretty-printed example showing client data.

For a complete description of client information and required fields, see the MongoDB Handshake specification.
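
Because the client metadata is recorded under attr.doc, it can be summarized with the same jq techniques shown later on this page. For example, a minimal sketch that counts connections by reported application name (the application.name path matches the sample entries above; not every client reports it):

# Count client connections by reported application name, most frequent first
jq -r '.attr.doc.application.name' /var/log/mongodb/mongod.log | grep -v null | sort | uniq -c | sort -rn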

Verbosity Levels

You can specify the logging verbosity level to increase or decrease the amount of log messages MongoDB outputs. Verbosity levels can be adjusted for all components together, or for specific named components individually.

Verbosity affects log entries in the severity categories Informational and Debug only. Severity categories above these levels are always shown.

You might set verbosity levels to a high value to show detailed logging for debugging or development, or to a low value to minimize writes to the log on a vetted production deployment. [2]

View Current Log Verbosity Level

To view the current verbosity levels, use the db.getLogComponents() method:

db.getLogComponents()

Your output might resemble the following:

{
 "verbosity" : 0,
 "accessControl" : {
    "verbosity" : -1
 },
 "command" : {
    "verbosity" : -1
 },
 ...
 "storage" : {
    "verbosity" : -1,
    "recovery" : {
       "verbosity" : -1
    },
    "journal" : {
        "verbosity" : -1
    }
 },
 ...

The initial verbosity entry is the parent verbosity level for all components, while the individual named components that follow, such as accessControl, indicate the specific verbosity level for that component, overriding the global verbosity level for that particular component if set.

A value of -1 indicates that a component inherits the verbosity level of its parent, if it has one (as with recovery above, which inherits from storage), or the global verbosity level if it does not (as with command). Inheritance relationships for verbosity levels are indicated in the Components section.

Configure Log Verbosity Levels

You can configure the verbosity level using the systemLog.verbosity and systemLog.component.<name>.verbosity settings, the logComponentVerbosity parameter, or the db.setLogLevel() method. [2]

systemLog Verbosity Settings

To configure the default log level for all components, use the systemLog.verbosity setting. To configure the level of specific components, use the systemLog.component.<name>.verbosity settings.

For example, the following configuration sets the systemLog.verbosity to 1, the systemLog.component.query.verbosity to 2, the systemLog.component.storage.verbosity to 2, and the systemLog.component.storage.journal.verbosity to 1:

systemLog:
   verbosity: 1
   component:
      query:
         verbosity: 2
      storage:
         verbosity: 2
         journal:
            verbosity: 1

You would set these values in the configuration file or on the command line for your mongod or mongos instance.

All components not specified explicitly in the configuration have a verbosity level of -1, indicating that they inherit the verbosity level of their parent, if they have one, or the global verbosity level (systemLog.verbosity) if they do not.

logComponentVerbosity Parameter

To set the logComponentVerbosity parameter, pass a document with the verbosity settings to change.

For example, the following sets the default verbosity level to 1, the query to 2, the storage to 2, and the storage.journal to 1.

db.adminCommand( {
   setParameter: 1,
   logComponentVerbosity: {
      verbosity: 1,
      query: {
         verbosity: 2
      },
      storage: {
         verbosity: 2,
         journal: {
            verbosity: 1
         }
      }
   }
} )

You would set these values from mongosh.
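
The same parameter can also be supplied when starting the server with --setParameter. A hedged sketch of the equivalent startup form (quoting may vary by shell, and other startup options are omitted for brevity):

# Apply the same verbosity settings at mongod startup
mongod --dbpath /var/lib/mongodb --logpath /var/log/mongodb/mongod.log \
  --setParameter "logComponentVerbosity={verbosity: 1, query: {verbosity: 2}, storage: {verbosity: 2, journal: {verbosity: 1}}}"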

db.setLogLevel()

Use the db.setLogLevel() method to update a single component log level. For a component, you can specify a verbosity level of 0 to 5, or you can specify -1 to inherit the verbosity of the parent. For example, the following sets the systemLog.component.query.verbosity to its parent verbosity (i.e. the default verbosity):

db.setLogLevel(-1, "query")

You would set this value from mongosh.

[2](1, 2, 3, 4, 5) Starting in version 4.2 (also available starting in 4.0.6), secondary members of a replica set now log oplog entries that take longer than the slow operation threshold to apply. These slow oplog messages:
  • Are logged for the secondaries in the diagnostic log.
  • Are logged under the REPL component with the text applied op: <oplog entry> took <num>ms.
  • Do not depend on the log levels (either at the system or component level).
  • Do not depend on the profiling level.
  • May be affected by slowOpSampleRate, depending on your MongoDB version:
    • In MongoDB 4.2 and earlier, these slow oplog entries are not affected by the slowOpSampleRate. MongoDB logs all slow oplog entries regardless of the sample rate.
    • In MongoDB 4.4 and later, these slow oplog entries are affected by the slowOpSampleRate.
The profiler does not capture slow oplog entries.

Logging Slow Operations

Client operations (such as queries) appear in the log if their duration exceeds the slow operation threshold or when the log verbosity level is 1 or higher. [2] These log entries include the full command object associated with the operation.

Starting in MongoDB 4.2, the profiler entries and the diagnostic log messages (i.e. mongod/mongos log messages) for read/write operations include:

  • queryHash to help identify slow queries with the same query shape.
  • planCacheKey to provide more insight into the query plan cache for slow queries.

Starting in MongoDB 5.0, slow operation log messages include a remote field specifying client IP address.

The following example output includes information about a slow aggregation operation:

{"t":{"$date":"2020-05-20T20:10:08.731+00:00"},"s":"I",  "c":"COMMAND",  "id":51803,   "ctx":"conn281","msg":"Slow query","attr":{"type":"command","ns":"stocks.trades","appName":"MongoDB Shell","command":{"aggregate":"trades","pipeline":[{"$project":{"ticker":1.0,"price":1.0,"priceGTE110":{"$gte":["$price",110.0]},"_id":0.0}},{"$sort":{"price":-1.0}}],"allowDiskUse":true,"cursor":{},"lsid":{"id":{"$uuid":"fa658f9e-9cd6-42d4-b1c8-c9160fabf2a2"}},"$clusterTime":{"clusterTime":{"$timestamp":{"t":1590005405,"i":1}},"signature":{"hash":{"$binary":{"base64":"AAAAAAAAAAAAAAAAAAAAAAAAAAA=","subType":"0"}},"keyId":0}},"$db":"test"},"planSummary":"COLLSCAN","cursorid":1912190691485054730,"keysExamined":0,"docsExamined":1000001,"hasSortStage":true,"usedDisk":true,"numYields":1002,"nreturned":101,"reslen":17738,"locks":{"ReplicationStateTransition":{"acquireCount":{"w":1119}},"Global":{"acquireCount":{"r":1119}},"Database":{"acquireCount":{"r":1119}},"Collection":{"acquireCount":{"r":1119}},"Mutex":{"acquireCount":{"r":117}}},"storage":{"data":{"bytesRead":232899899,"timeReadingMicros":186017},"timeWaitingMicros":{"cache":849}},"remote": "192.168.14.15:37666","protocol":"op_msg","durationMillis":22427}}

See the examples section for a pretty-printed version of this log entry.
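
Because slow query entries are structured, fields such as durationMillis, ns, and queryHash can be aggregated directly. A minimal jq sketch that counts slow query entries per query hash, most frequent first (log path assumed; queryHash is only present for applicable read/write operations):

# Count slow query entries per queryHash, most frequent first
jq -r 'select(.msg == "Slow query") | .attr.queryHash' /var/log/mongodb/mongod.log | grep -v null | sort | uniq -c | sort -rn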

Time Waiting for Shards Logged in remoteOpWaitMillis Field

New in version 5.0.

Starting in MongoDB 5.0, you can use the remoteOpWaitMillis log field to obtain the wait time (in milliseconds) for results from shards.

remoteOpWaitMillis is only logged:

To determine if a merge operation or a shard issue is causing a slow query, compare the durationMillis and remoteOpWaitMillis time fields in the log. durationMillis is the total time the query took to complete. Specifically (see the sketch after this list):

  • If durationMillis is slightly longer than remoteOpWaitMillis, then most of the time was spent waiting for a shard response. For example, durationMillis of 17 and remoteOpWaitMillis of 15.
  • If durationMillis is significantly longer than remoteOpWaitMillis, then most of the time was spent performing the merge. For example, durationMillis of 100 and remoteOpWaitMillis of 15.
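
A minimal jq sketch of this comparison, printing both fields side by side for entries that report a remote operation wait (log path assumed):

# Show total duration versus time spent waiting on shards for each matching entry
jq -r 'select(.attr.remoteOpWaitMillis != null) | "\(.t["$date"]) total=\(.attr.durationMillis)ms remoteOpWait=\(.attr.remoteOpWaitMillis)ms"' /var/log/mongodb/mongod.log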

Parsing Structured Log Messages

Log parsing is the act of programmatically searching through and analyzing log files, often in an automated manner. With the introduction of structured logging in MongoDB 4.4, log parsing is made simpler and more powerful. For example:

  • Log message fields are presented as key-value pairs. Log parsers can query by specific keys of interest to efficiently filter results.
  • Log messages always contain the same message structure. Log parsers can reliably extract information from any log message, without needing to code for cases where information is missing or formatted differently.

The following examples demonstrate common log parsing workflows when working with MongoDB JSON log output.

Log Parsing Examples

When working with MongoDB structured logging, the third-party jq command-line utility is a useful tool that allows for easy pretty-printing of log entries, and powerful key-based matching and filtering.

jq is an open-source JSON parser, and is available for Linux, Windows, and macOS.

These examples use jq to simplify log parsing.

Counting Unique Messages

The following example shows the top 10 unique message values in a given log file, sorted by frequency:

jq -r ".msg" /var/log/mongodb/mongod.log | sort | uniq -c | sort -rn | head -10

Monitoring Connections

Remote client connections are shown in the log under the "remote" key in the attribute object. The following counts all unique connections over the course of the log file and presents them in descending order by number of occurrences:

jq -r '.attr.remote' /var/log/mongodb/mongod.log | grep -v 'null' | sort | uniq -c | sort -r

Note that connections from the same IP address, but connecting over different ports, are treated as different connections by this command. You could limit output to consider IP addresses only, with the following change:

jq -r '.attr.remote' /var/log/mongodb/mongod.log | grep -v 'null' | awk -F':' '{print $1}' | sort | uniq -c | sort -r

Analyzing Driver Connections

The following example counts all remote MongoDB driver connections, and presents each driver type and version in descending order by number of occurrences:

jq -cr '.attr.doc.driver' /var/log/mongodb/mongod.log | grep -v null | sort | uniq -c | sort -rn

Analyzing Client Types

The following example analyzes the reported client data of remote MongoDB driver connections and client applications, including mongosh, and prints a total for each unique operating system type that connected, sorted by frequency:

jq -r '.attr.doc.os.type' /var/log/mongodb/mongod.log | grep -v null | sort | uniq -c | sort -rn

The string "Darwin", as reported in this log field, represents a macOS client.此日志字段中报告的字符串“Darwin”表示macOS客户端。

Analyzing Slow Queries分析慢速查询

With slow operation logging enabled, the following returns only the slow operations that took above 2000 milliseconds:, for further analysis:启用慢速操作日志记录后,以下仅返回花费2000毫秒以上的慢速操作:,以供进一步分析:

jq '. | select(.attr.durationMillis>=2000)' /var/log/mongodb/mongod.log

Consult the jq documentation for more information on the jq filters shown in this example.

Filtering by Component

Log components (the third field in the JSON log output format) indicate the general category a given log message falls under. Filtering by component is often a great starting place when parsing log messages for relevant events.

The following example prints only the log messages of component type REPL:

jq '. | select(.c=="REPL")' /var/log/mongodb/mongod.log

The following example prints all log messages except those of component type REPL:

jq '. | select(.c!="REPL")' /var/log/mongodb/mongod.log

The following example prints log messages of component type REPL or STORAGE:

jq '. | select( .c as $c | ["REPL", "STORAGE"] | index($c) )' /var/log/mongodb/mongod.log

Consult the jq documentation for more information on the jq filters shown in this example.

Filtering by Known Log ID

Log IDs (the fifth field in the JSON log output format) map to specific log events, and can be relied upon to remain stable over successive MongoDB releases.

As an example, you might be interested in the following two log events, showing a client connection followed by a disconnection:

{"t":{"$date":"2020-06-01T13:06:59.027-0500"},"s":"I", "c":"NETWORK", "id":22943,"ctx":"listener","msg":"connection accepted from {session_remote} #{session_id} ({connectionCount}{word} now open)","attr":{"session_remote":"127.0.0.1:61298","session_id":164,"connectionCount":11,"word":" connections"}}
{"t":{"$date":"2020-06-01T13:07:03.490-0500"},"s":"I", "c":"NETWORK", "id":22944,"ctx":"conn157","msg":"end connection {remote} ({connectionCount}{word} now open)","attr":{"remote":"127.0.0.1:61298","connectionCount":10,"word":" connections"}}

The log IDs for these two entries are 22943 and 22944 respectively. You could then filter your log output to show only these log IDs, effectively showing only client connection activity, using the following jq syntax:

jq '. | select( .id as $id | [22943, 22944] | index($id) )' /var/log/mongodb/mongod.log

Consult the jq documentation for more information on the jq filters shown in this example.

Filtering by Date Range

Log output can be further refined by filtering on the timestamp field, limiting log entries returned to a specific date range. For example, the following returns all log entries that occurred on April 15th, 2020:

jq '. | select(.t["$date"] >= "2020-04-15T00:00:00.000" and .t["$date"] <= "2020-04-15T23:59:59.999")' /var/log/mongodb/mongod.log

Note that this syntax includes the full timestamp, including milliseconds but excluding the timezone offset.

Filtering by date range can be combined with any of the examples above, creating weekly reports or yearly summaries for example. The following syntax expands the "Monitoring Connections" example from earlier to limit results to the month of May, 2020:

jq '. | select(.t["$date"] >= "2020-05-01T00:00:00.000" and .t["$date"] <= "2020-05-31T23:59:59.999" and .attr.remote)' /var/log/mongodb/mongod.log

Consult the jq documentation for more information on the jq filters shown in this example.

Log Ingestion Services

Log ingestion services are third-party products that intake and aggregate log files, usually from a distributed cluster of systems, and provide ongoing analysis of that data in a central location.

The JSON log format, introduced with MongoDB 4.4, allows for more flexibility when working with log ingestion and analysis services. Whereas plaintext logs generally require some manner of transformation before being eligible for use with these products, JSON files can often be consumed out of the box, depending on the service. Further, JSON-formatted logs offer more control when performing filtering for these services, as the key-value structure offers the ability to specifically import only the fields of interest, while omitting the rest.

Consult the documentation for your chosen third-party log ingestion service for more information.

Log Message Examples

The following examples show log messages in JSON output format.

These log messages are presented in pretty-printed format for convenience.

Startup Warning

This example shows a startup warning:

{
  "t": {
    "$date": "2020-05-20T19:17:06.188+00:00"
  },
  "s": "W",
  "c": "CONTROL",
  "id": 22120,
  "ctx": "initandlisten",
  "msg": "Access control is not enabled for the database. Read and write access to data and configuration is unrestricted",
  "tags": [
    "startupWarnings"
  ]
}

Client Connection

This example shows a client connection that includes client data:

{
  "t": {
    "$date": "2020-05-20T19:18:40.604+00:00"
  },
  "s": "I",
  "c": "NETWORK",
  "id": 51800,
  "ctx": "conn281",
  "msg": "client metadata",
  "attr": {
    "remote": "192.168.14.15:37666",
    "client": "conn281",
    "doc": {
      "application": {
        "name": "MongoDB Shell"
      },
      "driver": {
        "name": "MongoDB Internal Client",
        "version": "4.4.0"
      },
      "os": {
        "type": "Linux",
        "name": "CentOS Linux release 8.0.1905 (Core) ",
        "architecture": "x86_64",
        "version": "Kernel 4.18.0-80.11.2.el8_0.x86_64"
      }
    }
  }
}

Slow Operation

This example shows a slow operation message:

{
  "t": {
    "$date": "2020-05-20T20:10:08.731+00:00"
  },
  "s": "I",
  "c": "COMMAND",
  "id": 51803,
  "ctx": "conn281",
  "msg": "Slow query",
  "attr": {
    "type": "command",
    "ns": "stocks.trades",
    "appName": "MongoDB Shell",
    "command": {
      "aggregate": "trades",
      "pipeline": [
        {
          "$project": {
            "ticker": 1,
            "price": 1,
            "priceGTE110": {
              "$gte": [
                "$price",
                110
              ]
            },
            "_id": 0
          }
        },
        {
          "$sort": {
            "price": -1
          }
        }
      ],
      "allowDiskUse": true,
      "cursor": {},
      "lsid": {
        "id": {
          "$uuid": "fa658f9e-9cd6-42d4-b1c8-c9160fabf2a2"
        }
      },
      "$clusterTime": {
        "clusterTime": {
          "$timestamp": {
            "t": 1590005405,
            "i": 1
          }
        },
        "signature": {
          "hash": {
            "$binary": {
              "base64": "AAAAAAAAAAAAAAAAAAAAAAAAAAA=",
              "subType": "0"
            }
          },
          "keyId": 0
        }
      },
      "$db": "test"
    },
    "planSummary": "COLLSCAN",
    "cursorid": 1912190691485054700,
    "keysExamined": 0,
    "docsExamined": 1000001,
    "hasSortStage": true,
    "usedDisk": true,
    "numYields": 1002,
    "nreturned": 101,
    "reslen": 17738,
    "locks": {
      "ReplicationStateTransition": {
        "acquireCount": {
          "w": 1119
        }
      },
      "Global": {
        "acquireCount": {
          "r": 1119
        }
      },
      "Database": {
        "acquireCount": {
          "r": 1119
        }
      },
      "Collection": {
        "acquireCount": {
          "r": 1119
        }
      },
      "Mutex": {
        "acquireCount": {
          "r": 117
        }
      }
    },
    "storage": {
      "data": {
        "bytesRead": 232899899,
        "timeReadingMicros": 186017
      },
      "timeWaitingMicros": {
        "cache": 849
      }
    },
    "remote": "192.168.14.15:37666",
    "protocol": "op_msg",
    "durationMillis": 22427
  }
}

Escaping

This example demonstrates character escaping, as shown in the setName field of the attribute object:

{
  "t": {
    "$date": "2020-05-20T19:11:09.268+00:00"
  },
  "s": "I",
  "c": "REPL",
  "id": 21752,
  "ctx": "ReplCoord-0",
  "msg": "Scheduling remote command request",
  "attr": {
    "context": "vote request",
    "request": "RemoteCommand 229 -- target:localhost:27003 db:admin cmd:{ replSetRequestVotes: 1, setName: \"my-replica-name\", dryRun: true, term: 3, candidateIndex: 0, configVersion: 2, configTerm: 3, lastAppliedOpTime: { ts: Timestamp(1589915409, 1), t: 3 } }"
  }
}

View

Starting in MongoDB 5.0, log messages for slow queries on views include a resolvedViews field that contains the view details:

"resolvedViews": [ {
   "viewNamespace": <String>,  // namespace and view name
   "dependencyChain": <Array of strings>,  // view name and collection
   "resolvedPipeline": <Array of documents>  // aggregation pipeline for view
} ]

The following example uses the test database and creates a view named myView that sorts the documents in myCollection by the firstName field:

use test
db.createView( "myView", "myCollection", [ { $sort: { "firstName" : 1 } } ] )

Assume a slow query is run on myView. The following example log message contains a resolvedViews field for myView:

{
   "t": {
      "$date": "2021-09-30T17:53:54.646+00:00"
   },
   "s": "I",
   "c": "COMMAND",
   "id": 51803,
   "ctx": "conn249",
   "msg": "Slow query",
   "attr": {
      "type": "command",
      "ns": "test.myView",
      "appName": "MongoDB Shell",
      "command": {
         "find": "myView",
         "filter": {},
         "lsid": {
            "id": { "$uuid": "ad176471-60e5-4e82-b977-156a9970d30f" }
         },
         "$db": "test"
      },
      "planSummary":"COLLSCAN",
         "resolvedViews": [ {
            "viewNamespace": "test.myView",
            "dependencyChain": [ "myView", "myCollection" ],
            "resolvedPipeline": [ { "$sort": { "firstName": 1 } } ]
         } ],
         "keysExamined": 0,
         "docsExamined": 1,
         "hasSortStage": true,
         "cursorExhausted": true,
         "numYields": 0,
         "nreturned": 1,
         "queryHash": "3344645B",
         "planCacheKey": "1D3DE690",
         "reslen": 134,
         "locks": { "ParallelBatchWriterMode": { "acquireCount": { "r": 1 } },
         "ReplicationStateTransition": { "acquireCount": { "w": 1 } },
         "Global": { "acquireCount": { "r": 4 } },
         "Database": { "acquireCount": {"r": 1 } },
         "Collection": { "acquireCount": { "r": 1 } },
         "Mutex": { "acquireCount": { "r": 4 } } },
         "storage": {},
         "remote": "127.0.0.1:34868",
         "protocol": "op_msg",
         "durationMillis": 0
      }
   }
}

Authorization

Starting in MongoDB 5.0, log messages for slow queries include a system.profile.authorization section. These metrics help determine if a request is delayed because of contention for the user authorization cache.

"authorization": {
   "startedUserCacheAcquisitionAttempts": 1,
   "completedUserCacheAcquisitionAttempts": 1,
   "userCacheWaitTimeMicros": 508
 },
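
Assuming these metrics appear under the attr object of slow query entries, as the other slow operation fields on this page do, a minimal jq sketch to extract them might look like the following (log path assumed):

# Extract user authorization cache metrics from entries that report them
jq -c 'select(.attr.authorization != null) | {t: (.t["$date"]), ns: .attr.ns, authorization: .attr.authorization}' /var/log/mongodb/mongod.log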