Overview
As part of normal operation, MongoDB maintains a running log of events, including entries such as incoming connections, commands run, and issues encountered. Generally, log messages are useful for diagnosing issues, monitoring your deployment, and tuning performance.
To get your log messages, you can use any of the following methods:
- View logs in your configured log destination.
- Run the getLog command.
- Download logs through MongoDB Atlas. To learn more, see Download Your Logs.
Structured Logging
mongod / mongos instances output all log messages in structured JSON format. Log entries are written as a series of key-value pairs, where each key indicates a log message field type, such as "severity", and each corresponding value records the associated logging information for that field type, such as "informational". Previously, log entries were output as plaintext.
Example
The following is an example log message in JSON format as it would appear in the MongoDB log file:
{"t":{"$date":"2020-05-01T15:16:17.180+00:00"},"s":"I", "c":"NETWORK", "id":12345, "ctx":"listener", "svc": "R", "msg":"Listening on", "attr":{"address":"127.0.0.1"}}
JSON log entries can be pretty-printed for readability. Here is the same log entry pretty-printed:
{
"t": {
"$date": "2020-05-01T15:16:17.180+00:00"
},
"s": "I",
"c": "NETWORK",
"id": 12345,
"ctx": "listener",
"svc": "R",
"msg": "Listening on",
"attr": {
"address": "127.0.0.1"
}
}
In this log entry, for example, the key s, representing severity, has a corresponding value of I, representing "Informational", and the key c, representing component, has a corresponding value of NETWORK, indicating that the "network" component was responsible for this particular message. The various field types are presented in detail in the Log Message Field Types section.
Structured logging with key-value pairs allows for efficient parsing by automated tools or log ingestion services, and makes programmatic search and analysis of log messages easier to perform. Examples of analyzing structured log messages can be found in the Parsing Structured Log Messages section.
Note
The mongod quits if it is unable to write to the log file. To ensure that mongod can write to the log file, verify that the log volume has space on the disk and that the logs are rotated.
JSON Log Output Format
All log output is in JSON format, including output sent to the following log destinations:
- Log file
- Syslog
- Stdout (standard out)
Output from the getLog command is also in JSON format.
Each log entry is output as a self-contained JSON object which follows the Relaxed Extended JSON v2.0 specification, and has the following layout and field order:
{
"t": <Datetime>, // timestamp
"s": <String>, // severity
"c": <String>, // component
"id": <Integer>, // unique identifier
"ctx": <String>, // context
"svc": <String>, // service
"msg": <String>, // message body
"attr": <Object>, // additional attributes (optional)
"tags": <Array of strings>, // tags (optional)
"truncated": <Object>, // truncation info (if truncated)
"size": <Object> // original size of entry (if truncated)
}
Field descriptions:
| Field Name | Type | Description |
|---|---|---|
| t | Datetime | Timestamp of the log message. |
| s | String | Short severity code of the log message. |
| c | String | Full component string for the log message, such as NETWORK or COMMAND. |
| id | Integer | Unique identifier for the log statement. |
| ctx | String | Name of the thread that caused the log statement. |
| svc | String | Name of the service in whose context the message was logged: S for "shard", R for "router", or - for "unknown" or "none". |
| msg | String | Log output message passed from the server or driver. |
| attr | Object | One or more key-value pairs for additional log attributes. If a log message does not include any additional attributes, the attr object is omitted. Attribute values may be referenced by their key name in the msg message body, depending on the message. If necessary, the attributes are escaped according to the JSON specification. |
| tags | Array of strings | Strings representing any tags applicable to the log statement, for example ["startupWarnings"]. |
| truncated | Object | Information about the truncation of the log message, if applicable. Only included if the log entry contains a truncated attr attribute. |
| size | Object | Original size of the log entry if it has been truncated. Only included if the log entry contains a truncated attr attribute. |
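For a quick look at these fields in practice, you can project just the core fields from each entry with jq (introduced in the Pretty Printing section below). A minimal sketch, assuming the default log path of /var/log/mongodb/mongod.log:

# Print the timestamp, severity, component, id, and message of the first five entries.
jq -c '{t: .t["$date"], s: .s, c: .c, id: .id, msg: .msg}' /var/log/mongodb/mongod.log | head -5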
Escaping
The message and attributes fields will escape control characters as necessary, according to the Relaxed Extended JSON v2.0 specification:

| Character Represented | Escape Sequence |
|---|---|
| Quotation mark (") | \" |
| Backslash (\) | \\ |
| Backspace (0x08) | \b |
| Form feed (0x0C) | \f |
| Newline (0x0A) | \n |
| Carriage return (0x0D) | \r |
| Horizontal tab (0x09) | \t |
Control characters not listed above are escaped with \uXXXX, where "XXXX" is the Unicode codepoint in hexadecimal. Bytes with invalid UTF-8 encoding are replaced with the Unicode replacement character, represented by \ufffd.
An example of message escaping is provided in the examples section.
Truncation
Changed in version 7.3.
Any attributes that exceed the maximum size defined with maxLogSizeKB (default: 10 KB) are truncated. Truncated attributes omit log data beyond the configured limit, but retain the JSON formatting of the entry to ensure that the entry remains parsable.
For example, the following JSON object represents a command attribute that contains 5000 elements in the $in field without truncation.
Note
The example log entries are reformatted for readability.
{
"command": {
"find": "mycoll",
"filter": {
"value1": {
"$in": [0, 1, 2, 3, ... 4999]
},
"value2": "foo"
},
"sort": { "value1": 1 },
"lsid":{"id":{"$uuid":"80a99e49-a850-467b-a26d-aeb2d8b9f42b"}},
"$db": "testdb"
}
}
In this example, the $in array is truncated at the 376th element because the size of the command attribute would exceed maxLogSizeKB if it included the subsequent elements. The remainder of the command attribute is omitted. The truncated log entry resembles the following output:
{
"t": { "$date": "2021-03-17T20:30:07.212+01:00" },
"s": "I",
"c": "COMMAND",
"id": 51803,
"ctx": "conn9",
"msg": "Slow query",
"attr": {
"command": {
"find": "mycoll",
"filter": {
"value1": {
"$in": [ 0, 1, ..., 376 ] // Values in array omitted for brevity
}
}
},
... // Other attr fields omitted for brevity
},
"truncated": {
"command": {
"truncated": {
"filter": {
"truncated": {
"value1": {
"truncated": {
"$in": {
"truncated": {
"377": {
"type": "double",
"size": 8
}
},
"omitted": 4623
}
}
}
},
"omitted": 1
}
},
"omitted": 3
}
},
"size": {
"command": 21692
}
}
Log entries containing one or more truncated attributes include nested truncated objects, which provide the following information for each truncated attribute in the log entry:
- The attribute that was truncated
- The specific sub-object of that attribute that triggered truncation, if applicable
- The data type of the truncated field
- The size, in bytes, of the element that triggered truncation
- The number of elements that were omitted under each sub-object due to truncation
Log entries with truncated attributes may also include an additional size field at the end of the entry, which indicates the original size of the attribute before truncation, in this case 21692 bytes, or about 22 KB. This final size field is only shown if it is different from the size field in the truncated object, that is, if the total object size of the attribute is different from the size of the truncated sub-object, as is the case in the example above.
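Truncated entries can also be located programmatically by filtering on the truncated field. A minimal jq sketch, assuming the default log path:

# List the id, message, and original size of every truncated log entry.
jq -c 'select(.truncated != null) | {id: .id, msg: .msg, originalSize: .size}' /var/log/mongodb/mongod.log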
Padding
When output to the file or syslog log destinations, padding is added after the severity, context, and id fields to increase readability when viewed with a fixed-width font.
The following MongoDB log file excerpt demonstrates this padding:
{"t":{"$date":"2020-05-18T20:18:12.724+00:00"},"s":"I", "c":"CONTROL", "id":23285, "ctx":"main", "svc": "R", "msg":"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'"}
{"t":{"$date":"2020-05-18T20:18:12.734+00:00"},"s":"W", "c":"ASIO", "id":22601, "ctx":"main", "svc": "R", "msg":"No TransportLayer configured during NetworkInterface startup"}
{"t":{"$date":"2020-05-18T20:18:12.734+00:00"},"s":"I", "c":"NETWORK", "id":4648601, "ctx":"main", "svc": "R", "msg":"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize."}
{"t":{"$date":"2020-05-18T20:18:12.814+00:00"},"s":"I", "c":"STORAGE", "id":4615611, "ctx":"initandlisten", "svc": "R", "msg":"MongoDB starting", "attr":{"pid":10111,"port":27001,"dbPath":"/var/lib/mongo","architecture":"64-bit","host":"centos8"}}
{"t":{"$date":"2020-05-18T20:18:12.814+00:00"},"s":"I", "c":"CONTROL", "id":23403, "ctx":"initandlisten", "svc": "R", "msg":"Build Info", "attr":{"buildInfo":{"version":"4.4.0","gitVersion":"328c35e4b883540675fb4b626c53a08f74e43cf0","openSSLVersion":"OpenSSL 1.1.1c FIPS 28 May 2019","modules":[],"allocator":"tcmalloc","environment":{"distmod":"rhel80","distarch":"x86_64","target_arch":"x86_64"}}}}
{"t":{"$date":"2020-05-18T20:18:12.814+00:00"},"s":"I", "c":"CONTROL", "id":51765, "ctx":"initandlisten", "svc": "R", "msg":"Operating System", "attr":{"os":{"name":"CentOS Linux release 8.0.1905 (Core) ","version":"Kernel 4.18.0-80.11.2.el8_0.x86_64"}}}

Pretty Printing
When working with MongoDB structured logging, you can use the third-party jq command-line utility for easy pretty-printing of log entries, and powerful key-based matching and filtering.
jq is an open-source JSON parser, and is available for Linux, Windows, and macOS.
You can use jq to pretty-print log entries as follows:
Pretty-print the entire log file:

cat mongod.log | jq

Pretty-print the most recent log entry:

cat mongod.log | tail -1 | jq
More examples of working with MongoDB structured logs are available in the Parsing Structured Log Messages section.
Configuring Log Message Destinations
MongoDB log messages can be output to file, syslog, or stdout (standard output).
To configure the log output destination, use one of the following settings, either in the configuration file or on the command-line:
Configuration file:
- The systemLog.destination option for file or syslog

Command-line:
- The --logpath option for file
- The --syslog option for syslog
Not specifying either file or syslog sends all logging output to stdout.
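As an illustration, the following command lines select each destination; the dbPath and logpath values shown are placeholders for your own deployment:

# Log to a file, appending rather than overwriting on restart:
mongod --dbpath /var/lib/mongo --logpath /var/log/mongodb/mongod.log --logappend

# Log to syslog instead:
mongod --dbpath /var/lib/mongo --syslog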
For the full list of logging settings and options, see:
Configuration file:
- systemLog options

Command-line:
- Log options list for mongod
- Log options list for mongos
Note
Error messages sent to stderr (standard error), such as fatal errors during startup when not using the file or syslog log destinations, or messages having to do with misconfigured logging settings, are not affected by the log output destination setting, and are printed to stderr in plaintext format.
Log Message Field Types
Timestamp
The timestamp field type indicates the precise date and time at which the logged event occurred.
{
"t": {
"$date": "2020-05-01T15:16:17.180+00:00"
},
"s": "I",
"c": "NETWORK",
"id": 12345,
"ctx": "listener",
"svc": "R",
"msg": "Listening on",
"attr": {
"address": "127.0.0.1"
}
}
When logging to file or to syslog [1], the default format for the timestamp is iso8601-local. To modify the timestamp format, use the --timeStampFormat runtime option or the systemLog.timeStampFormat setting.
See Filtering by Date Range for log parsing examples that filter on the timestamp field.
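For example, to record timestamps in UTC rather than local time, you could start mongod with the iso8601-utc format; the paths shown are placeholders:

mongod --dbpath /var/lib/mongo --logpath /var/log/mongodb/mongod.log --timeStampFormat iso8601-utc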
Note
The ctime timestamp format is no longer supported.
| [1] | When logging to syslog, the syslog daemon generates timestamps when it logs a message, not when MongoDB issues the message. This can lead to misleading timestamps for log entries, especially when the system is under heavy load. |
Severity
The severity field type indicates the severity level associated with the logged event.
{
"t": {
"$date": "2020-05-01T15:16:17.180+00:00"
},
"s": "I",
"c": "NETWORK",
"id": 12345,
"ctx": "listener",
"svc": "R",
"msg": "Listening on",
"attr": {
"address": "127.0.0.1"
}
}
Severity levels range from "Fatal" (most severe) to "Debug" (least severe):
| Level | Description |
|---|---|
| F | Fatal |
| E | Error |
| W | Warning |
| I | Informational, for verbosity level of 0 |
| D1 - D5 | Debug, for verbosity levels greater than 0. The debug level corresponds to the verbosity level: for example, D2 for verbosity level 2. |
You can specify the verbosity level of various components to determine the amount of Informational and Debug messages MongoDB outputs. Severity categories above these levels are always shown. [2] To set verbosity levels, see Configure Log Verbosity Levels.
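To see how severity codes are distributed in an existing log file, you can count entries per severity code with jq. A minimal sketch, assuming the default log path:

# Count log entries by severity code, most frequent first.
jq -r '.s' /var/log/mongodb/mongod.log | sort | uniq -c | sort -rn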
Components
The component field type indicates the category a logged event is a member of, such as NETWORK or COMMAND.
{
"t": {
"$date": "2020-05-01T15:16:17.180+00:00"
},
"s": "I",
"c": "NETWORK",
"id": 12345,
"ctx": "listener",
"svc": "R",
"msg": "Listening on",
"attr": {
"address": "127.0.0.1"
}
}
Each component is individually configurable via its own verbosity filter. The available components are as follows:
ACCESS
Messages related to access control, such as authentication. To specify the log level for ACCESS components, use the systemLog.component.accessControl.verbosity setting.

ASSERT
An assertion is triggered when an operation returns an error. The default verbosity level is 0. However, the verbosity setting must be at least 1 in order for operations that return errors to be included in the system logs. To specify the log level for ASSERT components, use the systemLog.component.assert.verbosity setting.

COMMAND
Messages related to database commands, such as count. To specify the log level for COMMAND components, use the systemLog.component.command.verbosity setting.

CONTROL
Messages related to control activities, such as initialization. To specify the log level for CONTROL components, use the systemLog.component.control.verbosity setting.

ELECTION
Messages related specifically to replica set elections. To specify the log level for ELECTION components, set the systemLog.component.replication.election.verbosity parameter. REPL is the parent component of ELECTION. If systemLog.component.replication.election.verbosity is unset, MongoDB uses the REPL verbosity level for ELECTION components.

FTDC
Messages related to the diagnostic data collection mechanism, such as server statistics and status messages. To specify the log level for FTDC components, use the systemLog.component.ftdc.verbosity setting.

GEO
Messages related to the parsing of geospatial shapes, such as verifying the GeoJSON shapes. To specify the log level for GEO components, set the systemLog.component.geo.verbosity parameter.

INDEX
Messages related to indexing operations, such as creating indexes. To specify the log level for INDEX components, set the systemLog.component.index.verbosity parameter.

INITSYNC
Messages related to initial sync operations. To specify the log level for INITSYNC components, set the systemLog.component.replication.initialSync.verbosity parameter. REPL is the parent component of INITSYNC. If systemLog.component.replication.initialSync.verbosity is unset, MongoDB uses the REPL verbosity level for INITSYNC components.
JOURNAL
Messages related specifically to storage journaling activities. To specify the log level for JOURNAL components, use the systemLog.component.storage.journal.verbosity setting. STORAGE is the parent component of JOURNAL. If systemLog.component.storage.journal.verbosity is unset, MongoDB uses the STORAGE verbosity level for JOURNAL components.

NETWORK
Messages related to network activities, such as accepting connections. To specify the log level for NETWORK components, set the systemLog.component.network.verbosity parameter.

QUERY
Messages related to queries, including query planner activities. To specify the log level for QUERY components, set the systemLog.component.query.verbosity parameter.

QUERYSTATS
Messages related to $queryStats operations. To specify the log level for QUERYSTATS components, set the systemLog.component.queryStats.verbosity parameter.

RECOVERY
Messages related to storage recovery activities. To specify the log level for RECOVERY components, use the systemLog.component.storage.recovery.verbosity setting. STORAGE is the parent component of RECOVERY. If systemLog.component.storage.recovery.verbosity is unset, MongoDB uses the STORAGE verbosity level for RECOVERY components.
REJECTED
New in version 8.0.
Messages related to rejected query operations. To specify the log level for REJECTED component messages, set the systemLog.component.query.rejected.verbosity parameter. MongoDB only logs REJECTED component messages if the verbosity level is set to 2 or higher. The parent component for REJECTED is QUERY.
REPL
Messages related to replica sets, such as initial sync, heartbeats, steady state replication, and rollback. [2] To specify the log level for REPL components, set the systemLog.component.replication.verbosity parameter. REPL is the parent component of the ELECTION, INITSYNC, REPL_HB, and ROLLBACK components.

REPL_HB
Messages related specifically to replica set heartbeats. To specify the log level for REPL_HB components, set the systemLog.component.replication.heartbeats.verbosity parameter. REPL is the parent component of REPL_HB. If systemLog.component.replication.heartbeats.verbosity is unset, MongoDB uses the REPL verbosity level for REPL_HB components.

ROLLBACK
Messages related to rollback operations. To specify the log level for ROLLBACK components, set the systemLog.component.replication.rollback.verbosity parameter. REPL is the parent component of ROLLBACK. If systemLog.component.replication.rollback.verbosity is unset, MongoDB uses the REPL verbosity level for ROLLBACK components.
SHARDING
Messages related to sharding activities, such as the startup of the mongos. To specify the log level for SHARDING components, use the systemLog.component.sharding.verbosity setting.

STORAGE
Messages related to storage activities, such as processes involved in the fsync command. To specify the log level for STORAGE components, use the systemLog.component.storage.verbosity setting.

TXN
Messages related to multi-document transactions. To specify the log level for TXN components, use the systemLog.component.transaction.verbosity setting.

WRITE
Messages related to write operations, such as update commands. To specify the log level for WRITE components, use the systemLog.component.write.verbosity setting.

WT
New in version 5.3.
Messages related to the WiredTiger storage engine. To specify the log level for WT components, use the systemLog.component.storage.wt.verbosity setting.
WTBACKUP
New in version 5.3.
Messages related to backup operations performed by the WiredTiger storage engine. To specify the log level for WTBACKUP components, use the systemLog.component.storage.wt.wtBackup.verbosity setting.

WTCHKPT
New in version 5.3.
Messages related to checkpoint operations performed by the WiredTiger storage engine. To specify the log level for WTCHKPT components, use the systemLog.component.storage.wt.wtCheckpoint.verbosity setting.

WTCMPCT
New in version 5.3.
Messages related to compaction operations performed by the WiredTiger storage engine. To specify the log level for WTCMPCT components, use the systemLog.component.storage.wt.wtCompact.verbosity setting.

WTEVICT
New in version 5.3.
Messages related to eviction operations performed by the WiredTiger storage engine. To specify the log level for WTEVICT components, use the systemLog.component.storage.wt.wtEviction.verbosity setting.

WTHS
New in version 5.3.
Messages related to the history store of the WiredTiger storage engine. To specify the log level for WTHS components, use the systemLog.component.storage.wt.wtHS.verbosity setting.

WTRECOV
New in version 5.3.
Messages related to recovery operations performed by the WiredTiger storage engine. To specify the log level for WTRECOV components, use the systemLog.component.storage.wt.wtRecovery.verbosity setting.

WTRTS
New in version 5.3.
Messages related to rollback to stable (RTS) operations performed by the WiredTiger storage engine. To specify the log level for WTRTS components, use the systemLog.component.storage.wt.wtRTS.verbosity setting.

WTSLVG
New in version 5.3.
Messages related to salvage operations performed by the WiredTiger storage engine. To specify the log level for WTSLVG components, use the systemLog.component.storage.wt.wtSalvage.verbosity setting.

WTTS
New in version 5.3.
Messages related to timestamps used by the WiredTiger storage engine. To specify the log level for WTTS components, use the systemLog.component.storage.wt.wtTimestamp.verbosity setting.

WTTXN
New in version 5.3.
Messages related to transactions performed by the WiredTiger storage engine. To specify the log level for WTTXN components, use the systemLog.component.storage.wt.wtTransaction.verbosity setting.

WTVRFY
New in version 5.3.
Messages related to verification operations performed by the WiredTiger storage engine. To specify the log level for WTVRFY components, use the systemLog.component.storage.wt.wtVerify.verbosity setting.

WTWRTLOG
New in version 5.3.
Messages related to log write operations performed by the WiredTiger storage engine. To specify the log level for WTWRTLOG components, use the systemLog.component.storage.wt.wtWriteLog.verbosity setting.
-
Messages not associated with a named component. Unnamed components have the default log level specified in the systemLog.verbosity setting. The systemLog.verbosity setting is the default setting for both named and unnamed components.
See Filtering by Component for log parsing examples that filter on the component field.
Client Data
MongoDB Drivers and client applications (including mongosh) have the ability to send identifying information at the time of connection to the server. After the connection is established, the client does not send the identifying information again unless the connection is dropped and reestablished.
This identifying information is contained in the attributes field of the log entry. The exact information included varies by client.
Below is a sample log message containing the client data document as transmitted from a mongosh connection. The client data is contained in the doc object in the attributes field:
{"t":{"$date":"2020-05-20T16:21:31.561+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn202", "svc": "R", "msg":"client metadata", "attr":{"remote":"127.0.0.1:37106","client":"conn202","doc":{"application":{"name":"MongoDB Shell"},"driver":{"name":"MongoDB Internal Client","version":"4.4.0"},"os":{"type":"Linux","name":"CentOS Linux release 8.0.1905 (Core) ","architecture":"x86_64","version":"Kernel 4.18.0-80.11.2.el8_0.x86_64"}}}}
When secondary members of a replica set initiate a connection to a primary, they send similar data. A sample log message containing this initiation connection might appear as follows. The client data is contained in the doc object in the attributes field:
{"t":{"$date":"2020-05-20T16:33:40.595+00:00"},"s":"I", "c":"NETWORK", "id":51800, "ctx":"conn214", "svc": "R", "msg":"client metadata", "attr":{"remote":"127.0.0.1:37176","client":"conn214","doc":{"driver":{"name":"NetworkInterfaceTL","version":"4.4.0"},"os":{"type":"Linux","name":"CentOS Linux release 8.0.1905 (Core) ","architecture":"x86_64","version":"Kernel 4.18.0-80.11.2.el8_0.x86_64"}}}}
See the examples section for a pretty-printed example showing client data.
For a complete description of client information and required fields, see the MongoDB Handshake specification.
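Because client data is structured, it can be aggregated directly. The following sketch counts connections per reported application name, assuming the default log path; entries without an application name print as null and are filtered out:

jq -r '.attr.doc.application.name' /var/log/mongodb/mongod.log | grep -v null | sort | uniq -c | sort -rn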
Verbosity Levels
You can specify the logging verbosity level to increase or decrease the amount of log messages MongoDB outputs. Verbosity levels can be adjusted for all components together, or for specific named components individually.
Verbosity affects log entries in the severity categories Informational and Debug only. Severity categories above these levels are always shown.
You might set verbosity levels to a high value to show detailed logging for debugging or development, or to a low value to minimize writes to the log on a vetted production deployment. [2]
View Current Log Verbosity Level
To view the current verbosity levels, use the db.getLogComponents() method:
db.getLogComponents()
Your output might resemble the following:
{
"verbosity" : 0,
"accessControl" : {
"verbosity" : -1
},
"command" : {
"verbosity" : -1
},
...
"storage" : {
"verbosity" : -1,
"recovery" : {
"verbosity" : -1
},
"journal" : {
"verbosity" : -1
}
},
...
}
The initial verbosity entry is the parent verbosity level for all components, while the individual named components that follow, such as accessControl, indicate the specific verbosity level for that component, overriding the global verbosity level for that particular component if set.
A value of -1 indicates that a component inherits the verbosity level of its parent, if it has one (as with recovery above, inheriting from storage), or the global verbosity level if it does not (as with command). Inheritance relationships for verbosity levels are indicated in the components section.
Configure Log Verbosity Levels
You can configure the verbosity level using: the systemLog.verbosity and systemLog.component.<name>.verbosity settings, the logComponentVerbosity parameter, or the db.setLogLevel() method. [2]
systemLog Verbosity Settings
To configure the default log level for all components, use the systemLog.verbosity setting. To configure the level of specific components, use the systemLog.component.<name>.verbosity settings.
For example, the following configuration sets the systemLog.verbosity to 1, the systemLog.component.query.verbosity to 2, the systemLog.component.storage.verbosity to 2, and the systemLog.component.storage.journal.verbosity to 1:
systemLog:
verbosity: 1
component:
query:
verbosity: 2
storage:
verbosity: 2
journal:
verbosity: 1
You would set these values in the configuration file or on the command line for your mongod or mongos instance.
All components not specified explicitly in the configuration have a verbosity level of -1, indicating that they inherit the verbosity level of their parent, if they have one, or the global verbosity level (systemLog.verbosity) if they do not.
logComponentVerbosity Parameter
To set the logComponentVerbosity parameter, pass a document with the verbosity settings to change.
For example, the following sets the default verbosity level to 1, the query to 2, the storage to 2, and the storage.journal to 1.
db.adminCommand( {
setParameter: 1,
logComponentVerbosity: {
verbosity: 1,
query: {
verbosity: 2
},
storage: {
verbosity: 2,
journal: {
verbosity: 1
}
}
}
} )
You would set these values from mongosh.
db.setLogLevel()
Use the db.setLogLevel() method to update a single component's log level. For a component, you can specify a verbosity level of 0 to 5, or you can specify -1 to inherit the verbosity of the parent. For example, the following sets the systemLog.component.query.verbosity to its parent verbosity (i.e. the default verbosity):
db.setLogLevel(-1, "query")
You would set this value from mongosh.
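As a usage sketch, you could temporarily raise the QUERY component's verbosity while reproducing an issue and then restore inheritance from its parent; connection details for mongosh are omitted here:

mongosh --quiet --eval 'db.setLogLevel(2, "query")'   # raise QUERY verbosity to 2
mongosh --quiet --eval 'db.setLogLevel(-1, "query")'  # restore inheritance from the parent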
| [2] | (1, 2, 3, 4, 5) Secondary members of a replica set now log oplog entries that take longer than the slow operation threshold to apply. These slow oplog messages are logged for the secondaries in the diagnostic log, under the REPL component with the text applied op: <oplog entry> took <num>ms. They do not depend on the log levels (at either the system or component level), do not depend on the profiling level, and are affected by slowOpSampleRate. The profiler does not capture slow oplog entries. |
Logging Slow Operations
Client operations (such as queries) appear in the log if their duration exceeds the slow operation threshold or when the log verbosity level is 1 or higher. [2] These log entries include the full command object associated with the operation.
The profiler entries and the diagnostic log messages (i.e. mongod/mongos log messages) for read/write operations include:
- planCacheShapeHash to help identify slow queries with the same plan cache query shape. Starting in MongoDB 8.0, the existing queryHash field is duplicated in a new field named planCacheShapeHash. If you're using an earlier MongoDB version, you'll only see the queryHash field. Future MongoDB versions will remove the deprecated queryHash field, and you'll need to use the planCacheShapeHash field instead.
- planCacheKey to provide more insight into the query plan cache for slow queries.
Important
A single operation may log more than one entry. For example, if more than one write in a bulk write operation exceeds the slow operation threshold, each slow write is logged separately.
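Slow operation entries use the message body "Slow query", as shown in the examples later on this page, so they can be isolated with jq; the default log path is assumed:

jq 'select(.msg == "Slow query")' /var/log/mongodb/mongod.log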
Version-Specific Changes
The following table lists the changes to logging slow queries.
| MongoDB Version | Changes |
|---|---|
| 6.1 | Slow operation log messages include cache refresh time fields. |
| 6.2 | Slow operation log messages include a queryFramework field that indicates which query engine executed the query. |
| 6.3 | Slow operation log messages and database profiler entries include a cpuNanos field that specifies the total CPU time spent by a query operation in nanoseconds. The cpuNanos field is only available on Linux systems. |
| 7.0 (and 6.0.13, 5.0.24) | The totalOplogSlotDurationMicros field in slow query log messages shows the time between a write operation getting a commit timestamp to write to the oplog and actually writing to the oplog. For example, consider the following writes with commit timestamps: writeA with Timestamp1, writeB with Timestamp2, and writeC with Timestamp3. Suppose writeB commits first at Timestamp2. Replication is paused until writeA commits because writeA's oplog entry with Timestamp1 is required for replication to copy the oplog to secondary replica set members. |
| 8.0 | The slow query output includes a workingMillis field, which indicates the amount of time that MongoDB spent working on the operation; slow operations are now logged based on workingMillis rather than the operation's total latency. If a command with a specific query shape is rejected, MongoDB logs a message that states the query command was rejected. The message contains the query namespace and the shape of the rejected query. |
| 8.1 | Slow query log messages contain new metrics if the query execution writes temporary files to disk. These metrics are prefixed by the query execution stage that caused the query to exceed the memory limit, for example the sort stage. For more information on writing temporary files to disk, see the allowDiskUseByDefault parameter. |
For a pretty-printed example of a slow operation log entry, see Log Message Examples.
Time Waiting for Shards Logged in remoteOpWaitMillis Field
New in version 5.0.
Starting in MongoDB 5.0, you can use the remoteOpWaitMillis log field to obtain the wait time (in milliseconds) for results from shards.
remoteOpWaitMillis is only logged:
- If you configure slow operations logging.
- On the shard or
mongosthat merges the results.
To determine if a merge operation or a shard issue is causing a slow query, compare the workingMillis and remoteOpWaitMillis time fields in the log. workingMillis is the total time the query took to complete. Specifically:
- If
workingMillisis slightly longer thanremoteOpWaitMillis, then most of the time was spent waiting for a shard response. For example,workingMillisof 17 andremoteOpWaitMillisof 15. - If
workingMillisis significantly longer thanremoteOpWaitMillis, then most of the time was spent performing the merge. For example,workingMillisof 100 andremoteOpWaitMillisof 15.
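This comparison can be automated with jq. The following sketch, assuming the default log path, prints both fields along with their difference as a rough estimate of time spent merging:

jq -c 'select(.attr.remoteOpWaitMillis != null) | {working: .attr.workingMillis, remoteWait: .attr.remoteOpWaitMillis, mergeEstimate: (.attr.workingMillis - .attr.remoteOpWaitMillis)}' /var/log/mongodb/mongod.log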
Log Redaction
Queryable Encryption Log Redaction
When using Queryable Encryption, CRUD operations against encrypted collections are omitted from the slow query log. For details, see Queryable Encryption redaction.
Enterprise Log Redaction
Available in MongoDB Enterprise only
A mongod or mongos running with redactClientLogData redacts any message accompanying a given log event before logging, leaving only metadata, source files, or line numbers related to the event. redactClientLogData prevents potentially sensitive information from entering the system log at the cost of diagnostic detail.
For example, the following operation inserts a document into a mongod running without log redaction. The mongod has the log verbosity level set to 1:
db.clients.insertOne( { "name" : "Joe", "PII" : "Sensitive Information" } )
This operation produces the following log event:
{
"t": { "$date": "2024-07-19T15:36:55.024-07:00" },
"s": "I",
"c": "COMMAND",
...
"attr": {
"type": "command",
...
"appName": "mongosh 2.2.10",
"command": {
"insert": "clients",
"documents": [
{
"name": "Joe",
"PII": "Sensitive Information",
"_id": { "$oid": "669aea8792c7fd822d3e1d8c" }
}
],
"ordered": true,
...
}
...
}
}
When mongod runs with redactClientLogData and performs the same insert operation, it produces the following log event:
{
"t": { "$date": "2024-07-19T15:36:55.024-07:00" },
"s": "I",
"c": "COMMAND",
...
"attr": {
"type": "command",
...
"appName": "mongosh 2.2.10",
"command": {
"insert": "###",
"documents": [
{
"name": "###",
"PII": "###",
"_id": "###"
}
],
"ordered": "###",
...
}
...
}
}
Use redactClientLogData in conjunction with Encryption at Rest and TLS/SSL (Transport Encryption) to assist compliance with regulatory requirements.
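In addition to setting it at startup, redactClientLogData can be toggled at runtime with the setParameter command. A minimal sketch from the shell (MongoDB Enterprise only; connection details omitted):

mongosh --quiet --eval 'db.adminCommand({ setParameter: 1, redactClientLogData: true })'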
Parsing Structured Log Messages
Log parsing is the act of programmatically searching through and analyzing log files, often in an automated manner. With the introduction of structured logging, log parsing is made simpler and more powerful. For example:
- Log message fields are presented as key-value pairs. Log parsers can query by specific keys of interest to efficiently filter results.
- Log messages always contain the same message structure. Log parsers can reliably extract information from any log message, without needing to code for cases where information is missing or formatted differently.
The following examples demonstrate common log parsing workflows when working with MongoDB JSON log output.
Log Parsing Examples
When working with MongoDB structured logging, you can use the third-party jq command-line utility for easy pretty-printing of log entries, and powerful key-based matching and filtering.
jq is an open-source JSON parser, and is available for Linux, Windows, and macOS.
These examples use jq to simplify log parsing.
Counting Unique Messages
The following example shows the top 10 unique message values in a given log file, sorted by frequency:
jq -r ".msg" /var/log/mongodb/mongod.log | sort | uniq -c | sort -rn | head -10

Monitoring Connections
Remote client connections are shown in the log under the "remote" key in the attribute object. The following counts all unique connections over the course of the log file and presents them in descending order by number of occurrences:
jq -r '.attr.remote' /var/log/mongodb/mongod.log | grep -v 'null' | sort | uniq -c | sort -r
Note that connections from the same IP address, but connecting over different ports, are treated as different connections by this command. You could limit output to consider IP addresses only, with the following change:
jq -r '.attr.remote' /var/log/mongodb/mongod.log | grep -v 'null' | awk -F':' '{print $1}' | sort | uniq -c | sort -r

Analyzing Driver Connections
The following example counts all remote MongoDB driver connections, and presents each driver type and version in descending order by number of occurrences:
jq -cr '.attr.doc.driver' /var/log/mongodb/mongod.log | grep -v null | sort | uniq -c | sort -rn

Analyzing Client Types
The following example analyzes the reported client data of remote MongoDB driver connections and client applications, including mongosh, and prints a total for each unique operating system type that connected, sorted by frequency:
jq -r '.attr.doc.os.type' /var/log/mongodb/mongod.log | grep -v null | sort | uniq -c | sort -rn
The string "Darwin", as reported in this log field, represents a macOS client.
Analyzing Slow Queries
With slow operation logging enabled, the following returns only the slow operations that took 2000 milliseconds or longer, for further analysis:
jq 'select(.attr.workingMillis>=2000)' /var/log/mongodb/mongod.log
Consult the jq documentation for more information on the jq filters shown in this example.
Filtering by Component
Log components (the third field in the JSON log output format) indicate the general category a given log message falls under. Filtering by component is often a great starting place when parsing log messages for relevant events.
The following example prints only the log messages of component type REPL:
jq 'select(.c=="REPL")' /var/log/mongodb/mongod.log
The following example prints all log messages except those of component type REPL:
jq 'select(.c!="REPL")' /var/log/mongodb/mongod.log
The following example prints log messages of component type REPL or STORAGE:
jq 'select( .c as $c | ["REPL", "STORAGE"] | index($c) )' /var/log/mongodb/mongod.log
Consult the jq documentation for more information on the jq filters shown in this example.
Filtering by Known Log ID
Log IDs (the fifth field in the JSON log output format) map to specific log events, and can be relied upon to remain stable over successive MongoDB releases.
As an example, you might be interested in the following two log events, showing a client connection followed by a disconnection:
{"t":{"$date":"2020-06-01T13:06:59.027-0500"},"s":"I", "c":"NETWORK", "id":22943, "ctx":"listener", "svc": "R", "msg":"connection accepted from {session_remote} #{session_id} ({connectionCount}{word} now open)", "attr":{"session_remote":"127.0.0.1:61298", "session_id":164,"connectionCount":11,"word":" connections"}}
{"t":{"$date":"2020-06-01T13:07:03.490-0500"},"s":"I", "c":"NETWORK", "id":22944, "ctx":"conn157", "svc": "R", "msg":"end connection {remote} ({connectionCount}{word} now open)", "attr":{"remote":"127.0.0.1:61298","connectionCount":10,"word":" connections"}}
The log IDs for these two entries are 22943 and 22944 respectively. You could then filter your log output to show only these log IDs, effectively showing only client connection activity, using the following jq syntax:
jq 'select( .id as $id | [22943, 22944] | index($id) )' /var/log/mongodb/mongod.log
Consult the jq documentation for more information on the jq filters shown in this example.
Filtering by Date Range
Log output can be further refined by filtering on the timestamp field, limiting log entries returned to a specific date range. For example, the following returns all log entries that occurred on April 15th, 2020:
jq 'select(.t["$date"] >= "2020-04-15T00:00:00.000" and .t["$date"] <= "2020-04-15T23:59:59.999")' /var/log/mongodb/mongod.log
Note that this syntax includes the full timestamp, including milliseconds but excluding the timezone offset.
Filtering by date range can be combined with any of the examples above, creating weekly reports or yearly summaries for example. The following syntax expands the "Monitoring Connections" example from earlier to limit results to the month of May, 2020:
jq 'select(.t["$date"] >= "2020-05-01T00:00:00.000" and .t["$date"] <= "2020-05-31T23:59:59.999" and .attr.remote)' /var/log/mongodb/mongod.log
Consult the jq documentation for more information on the jq filters shown in this example.
Log Ingestion Services
Log ingestion services are third-party products that intake and aggregate log files, usually from a distributed cluster of systems, and provide ongoing analysis of that data in a central location.
The JSON log format allows for more flexibility when working with log ingestion and analysis services. Whereas plaintext logs generally require some manner of transformation before being eligible for use with these products, JSON files can often be consumed out of the box, depending on the service. Further, JSON-formatted logs offer more control when performing filtering for these services, as the key-value structure offers the ability to specifically import only the fields of interest, while omitting the rest.
Consult the documentation for your chosen third-party log ingestion service for more information.
Log Message Examples
The following examples show log messages in JSON output format.
These log messages are presented in pretty-printed format for convenience.
Startup Warning
This example shows a startup warning:
{
"t": {
"$date": "2020-05-20T19:17:06.188+00:00"
},
"s": "W",
"c": "CONTROL",
"id": 22120,
"ctx": "initandlisten",
"svc": "R",
"msg": "Access control is not enabled for the database. Read and write access to data and configuration is unrestricted",
"tags": [
"startupWarnings"
]
}

Client Connection
This example shows a client connection that includes client data:
{
"t": {
"$date": "2020-05-20T19:18:40.604+00:00"
},
"s": "I",
"c": "NETWORK",
"id": 51800,
"ctx": "conn281",
"svc": "R",
"msg": "client metadata",
"attr": {
"remote": "192.168.14.15:37666",
"client": "conn281",
"doc": {
"application": {
"name": "MongoDB Shell"
},
"driver": {
"name": "MongoDB Internal Client",
"version": "4.4.0"
},
"os": {
"type": "Linux",
"name": "CentOS Linux release 8.0.1905 (Core) ",
"architecture": "x86_64",
"version": "Kernel 4.18.0-80.11.2.el8_0.x86_64"
}
}
}
}

Slow Operation
Starting in MongoDB 8.0, slow operations are logged based on the time that MongoDB spends working on that operation, rather than the total latency for the operation.
You can use the metrics in the slow operation log to identify where an operation spends time in its lifecycle, which helps identify possible performance improvements.
In the following example log message:
The amount of time spent waiting for resources while executing the query is shown in these metrics:
queues.execution.totalTimeQueuedMicrostimeAcquiringMicros
workingMillisis the amount of time that MongoDB spends working on the operation.durationMillisis the operation's total latency.
{
"t":{
"$date":"2024-06-01T13:24:10.034+00:00"
},
"s":"I",
"c":"COMMAND",
"id":51803,
"ctx":"conn3",
"msg":"Slow query",
"attr":{
"type":"command",
"isFromUserConnection":true,
"ns":"db.coll",
"collectionType":"normal",
"appName":"MongoDB Shell",
"command":{
"find":"coll",
"filter":{
"b":-1
},
"sort":{
"splitPoint":1
},
"readConcern":{ },
"$db":"db"
},
"planSummary":"COLLSCAN",
"planningTimeMicros":87,
"keysExamined":0,
"docsExamined":20889,
"hasSortStage":true,
"nBatches":1,
"cursorExhausted":true,
"numYields":164,
"nreturned":99,
"planCacheShapeHash":"9C05019A",
"planCacheKey":"C41063D6",
"queryFramework":"classic",
"reslen":96,
"locks":{
"ReplicationStateTransition":{
"acquireCount":{
"w":3
}
},
"Global":{
"acquireCount":{
"r":327,
"w":1
}
},
"Database":{
"acquireCount":{
"r":1
},
"acquireWaitCount":{
"r":1
},
"timeAcquiringMicros":{
"r":2814
}
},
"Collection":{
"acquireCount":{
"w":1
}
}
},
"flowControl":{
"acquireCount":1,
"acquireWaitCount":1,
"timeAcquiringMicros":8387
},
"readConcern":{
"level":"local",
"provenance":"implicitDefault"
},
"storage":{ },
"cpuNanos":20987385,
"remote":"127.0.0.1:47150",
"protocol":"op_msg",
"queues":{
"ingress":{
"admissions":7,
"totalTimeQueuedMicros":0
},
"execution":{
"admissions":328,
"totalTimeQueuedMicros":2109
}
},
"workingMillis":89,
"durationMillis":101
}
}
Starting in MongoDB 8.0, the existing queryHash field is duplicated in a new field named planCacheShapeHash. If you're using an earlier MongoDB version, you'll only see the queryHash field. Future MongoDB versions will remove the deprecated queryHash field, and you'll need to use the planCacheShapeHash field instead.
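To surface the gap between total latency and working time across a log file, you can compute the difference with jq. A sketch, assuming the default log path:

jq -c 'select(.msg == "Slow query") | {ns: .attr.ns, working: .attr.workingMillis, total: .attr.durationMillis, waiting: (.attr.durationMillis - .attr.workingMillis)}' /var/log/mongodb/mongod.log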
Escaping
This example demonstrates character escaping, as shown in the setName field of the attribute object:
{
"t": {
"$date": "2020-05-20T19:11:09.268+00:00"
},
"s": "I",
"c": "REPL",
"id": 21752,
"ctx": "ReplCoord-0",
"svc": "R",
"msg": "Scheduling remote command request",
"attr": {
"context": "vote request",
"request": "RemoteCommand 229 -- target:localhost:27003 db:admin cmd:{ replSetRequestVotes: 1, setName: \"my-replica-name\", dryRun: true, term: 3, candidateIndex: 0, configVersion: 2, configTerm: 3, lastAppliedOpTime: { ts: Timestamp(1589915409, 1), t: 3 } }"
}
}

View
Starting in MongoDB 5.0, log messages for slow queries on views include a resolvedViews field that contains the view details:
"resolvedViews": [ {
"viewNamespace": <String>, // namespace and view name
"dependencyChain": <Array of strings>, // view name and collection
"resolvedPipeline": <Array of documents> // aggregation pipeline for view
} ]
The following example uses the test database and creates a view named myView that sorts the documents in myCollection by the firstName field:
use test
db.createView( "myView", "myCollection", [ { $sort: { "firstName" : 1 } } ] )
Assume a slow query is run on myView. The following example log message contains a resolvedViews field for myView:
{
"t": {
"$date": "2021-09-30T17:53:54.646+00:00"
},
"s": "I",
"c": "COMMAND",
"id": 51803,
"ctx": "conn249",
"svc": "R",
"msg": "Slow query",
"attr": {
"type": "command",
"ns": "test.myView",
"appName": "MongoDB Shell",
"command": {
"find": "myView",
"filter": {},
"lsid": {
"id": { "$uuid": "ad176471-60e5-4e82-b977-156a9970d30f" }
},
"$db": "test"
},
"planSummary":"COLLSCAN",
"resolvedViews": [ {
"viewNamespace": "test.myView",
"dependencyChain": [ "myView", "myCollection" ],
"resolvedPipeline": [ { "$sort": { "firstName": 1 } } ]
} ],
"keysExamined": 0,
"docsExamined": 1,
"hasSortStage": true,
"cursorExhausted": true,
"numYields": 0,
"nreturned": 1,
"planCacheShapeHash": "3344645B",
"planCacheKey": "1D3DE690",
"queryFramework": "classic",
"reslen": 134,
"locks": { "ParallelBatchWriterMode": { "acquireCount": { "r": 1 } },
"ReplicationStateTransition": { "acquireCount": { "w": 1 } },
"Global": { "acquireCount": { "r": 4 } },
"Database": { "acquireCount": {"r": 1 } },
"Collection": { "acquireCount": { "r": 1 } },
"Mutex": { "acquireCount": { "r": 4 } } },
"storage": {},
"remote": "127.0.0.1:34868",
"protocol": "op_msg",
"workingMillis": 0,
"durationMillis": 0
}
}
Starting in MongoDB 8.0, the existing queryHash field is duplicated in a new field named planCacheShapeHash. If you're using an earlier MongoDB version, you'll only see the queryHash field. Future MongoDB versions will remove the deprecated queryHash field, and you'll need to use the planCacheShapeHash field instead.
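To list the slow queries that involved view resolution, you can filter on the resolvedViews field. A jq sketch, assuming the default log path:

jq -c 'select(.attr.resolvedViews != null) | {ns: .attr.ns, views: [.attr.resolvedViews[].viewNamespace]}' /var/log/mongodb/mongod.log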
Authorization
Starting in MongoDB 5.0, log messages for slow queries include a system.profile.authorization section. These metrics help determine if a request is delayed because of contention for the user authorization cache.
"authorization": {
"startedUserCacheAcquisitionAttempts": 1,
"completedUserCacheAcquisitionAttempts": 1,
"userCacheWaitTimeMicros": 508
},

Session Workflow Log Message
Starting in MongoDB 6.3, a message is added to the log if the time to send an operation response exceeds the slowms threshold option.
The message is known as a session workflow log message and contains various times to perform an operation in a database session.
Example session workflow log message:
{
"t": {
"$date": "2022-12-14T17:22:44.233+00:00"
},
"s": "I",
"c": "EXECUTOR",
"id": 6983000,
"ctx": "conn1",
"svc": "R",
"msg": "Slow network response send time",
"attr": {
"elapsed": {
"totalMillis": 109,
"activeMillis": 30,
"receiveWorkMillis": 2,
"processWorkMillis": 10,
"sendResponseMillis": 22,
"yieldMillis": 15,
"finalizeMillis": 30
}
}
}
The times are in milliseconds.
A session workflow message is added to the log if sendResponseMillis exceeds the slowms threshold option.
| Field | Description |
|---|---|
| totalMillis | Total time to perform the operation in the session, which includes the time spent waiting for a message to be received. |
| activeMillis | Time between receiving a message and completing the operation associated with that message. Time includes sending a response and performing any clean up. |
| receiveWorkMillis | Time to receive the operation information over the network. |
| processWorkMillis | Time to process the operation and create the response. |
| sendResponseMillis | Time to send the response. |
| yieldMillis | Time between releasing the worker thread and the thread being used again. |
| finalizeMillis | Time to end and close the session workflow. |
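Session workflow messages can be isolated by their component and message body, as shown in the example above. A jq sketch, assuming the default log path:

jq -c 'select(.c == "EXECUTOR" and .msg == "Slow network response send time") | .attr.elapsed' /var/log/mongodb/mongod.log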
Connection Acquisition To Wire Log Message
Starting in MongoDB 6.3, a message is added to the log if the time that an operation waited between acquisition of a server connection and writing the bytes to send to the server over the network exceeds 1 millisecond.
By default, the message is logged at the "I" information level, and at most once every second to avoid too many log messages. If you must obtain every log message, change your log level to debug.
If the operation wait time exceeds 1 millisecond and the message is logged at the information level within the last second, then the next message is logged at the debug level. Otherwise, the next message is logged at the information level.
Example log message:
{
"t": {
"$date":"2023-01-31T15:22:29.473+00:00"
},
"s": "I",
"c": "NETWORK",
"id": 6496702,
"ctx": "ReplicaSetMonitor-TaskExecutor",
"svc": "R",
"msg": "Acquired connection for remote operation and completed writing to wire",
"attr": {
"durationMicros": 1683
}
}
The following table describes the durationMicros field in attr.
| Field | Description |
|---|---|
| durationMicros | Time in microseconds that the operation waited between acquisition of a server connection and writing the bytes to send to the server over the network. |
Cache Refresh Times
Note
Cache refresh log fields are specific to sharded clusters and only appear in logs generated by the mongos router. They are not available in unsharded replica sets or standalone deployments.
Starting in MongoDB 6.1, log messages for slow queries include the following cache refresh time fields:
- catalogCacheDatabaseLookupDurationMillis
- catalogCacheCollectionLookupDurationMillis
- databaseVersionRefreshDurationMillis
- shardVersionRefreshMillis
Starting in MongoDB 7.0, log messages for slow queries also include the catalogCacheIndexLookupDurationMillis field that indicates the time that the operation spent fetching information from the index cache. This release also renames the shardVersionRefreshMillis field to placementVersionRefreshMillis.
The following example includes:
- catalogCacheDatabaseLookupDurationMillis
- catalogCacheCollectionLookupDurationMillis
- catalogCacheIndexLookupDurationMillis
{
"t": {
"$date": "2023-03-17T09:47:55.929+00:00"
},
"s": "I",
"c": "COMMAND",
"id": 51803,
"ctx": "conn14",
"svc": "R",
"msg": "Slow query",
"attr": {
"type": "command",
"ns": "db.coll",
"appName": "MongoDB Shell",
"command": {
"insert": "coll",
"ordered": true,
"lsid": {
"id": {
"$uuid": "5d50b19c-8559-420a-a122-8834e012274a"
}
},
"$clusterTime": {
"clusterTime": {
"$timestamp": {
"t": 1679046398,
"i": 8
}
},
"signature": {
"hash": {
"$binary": {
"base64": "AAAAAAAAAAAAAAAAAAAAAAAAAAA=",
"subType": "0"
}
},
"keyId": 0
}
},
"$db": "db"
},
"catalogCacheDatabaseLookupDurationMillis": 19,
"catalogCacheCollectionLookupDurationMillis": 68,
"catalogCacheIndexLookupDurationMillis": 16026,
"nShards": 1,
"ninserted": 1,
"numYields": 232,
"reslen": 96,
"readConcern": {
"level": "local",
"provenance": "implicitDefault"
},
"cpuNanos": 29640339,
"remote": "127.0.0.1:48510",
"protocol": "op_msg",
"remoteOpWaitMillis": 4078,
"workingMillis": 20334,
"durationMillis": 20334
}
}

Linux Syslog Limitations
In a Linux system, messages are subject to the rules defined in the Linux configuration file /etc/systemd/journald.conf. By default, log message bursts are limited to 1000 messages within a 30 second period. To see more messages, increase the RateLimitBurst parameter in /etc/systemd/journald.conf.
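For example, you could raise the burst limit and restart the journal daemon; the value 10000 below is illustrative, and you should choose a limit suited to your workload:

# Raise the journald burst limit, then restart journald to apply it.
sudo sed -i 's/^#\?RateLimitBurst=.*/RateLimitBurst=10000/' /etc/systemd/journald.conf
sudo systemctl restart systemd-journald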
Download Your Logs
You can use MongoDB Atlas to download a zipped file containing the logs for a selected hostname or process in your database deployment. To learn more, see View and Download MongoDB Logs.