Release Notes for MongoDB 4.4
On this page
- Patch Releases
- Full Time Diagnostic Data Capture Requirements
- Aggregation
- Replica Sets
- Sharded Clusters
- Projection
- Transactions
- Sorting
- Security Improvements
- Structured Logging
- Platform Support
- Mongo Shell
- Tools
- Drivers
- Indexes
- Removed Commands
- Networking
- General Improvements
- Changes Affecting Compatibility
- Upgrade Procedures
- Downgrade Consideration
- Download
- Known Issues
- Report an Issue
Past Release Limitations
Some past releases have critical issues. These releases are not recommended for production use. Use the latest available patch release version instead.
Issue | Affected Versions |
---|---|
WT-7426 | 4.4.5 |
WT-7984 | 4.4.8 |
WT-7995 | 4.4.2 - 4.4.8 |
WT-10461 | 4.4.0 - 4.4.18 (ARM64 or POWER system architectures) |
WT-10551 | 4.4.8 - 4.4.21 (Incremental backups on Ops Manager or Cloud Manager clusters) |
SERVER-58936 | 4.4.7 |
Patch Releases
4.4.23 - Jul 13, 2023
- SERVER-73943
Pin code pages in memory in memory constrained systems
- SERVER-75922
Partial unique indexes created on MongoDB 4.0 can be missing index keys after upgrade to 4.2 and later, leading to uniqueness violations
- SERVER-78126
For specific kinds of input, mongo::Value() always hashes to the same result on big-endian platforms
- All JIRA issues closed in 4.4.23
- 4.4.23 Changelog
4.4.22 - May 18, 2023
- SERVER-48196
Upgrade the timelib to the latest to update the built-in timezone files to the latest
- SERVER-57056
Syslog severity set incorrectly for INFO messages
- WT-10551
Incremental backup may omit modified blocks
- All JIRA issues closed in 4.4.22
- 4.4.22 Changelog
4.4.21 - April 27, 2023
Issues fixed:
- SERVER-75261
"listCollections" command fails with BSONObjectTooLarge error
- SERVER-76098
Allow queries with $search and non-simple collations
- All JIRA issues closed in 4.4.21
- 4.4.21 Changelog
4.4.20 - Apr 10, 2023
Issues fixed:
- SERVER-51835
Mongos readPreferenceTags are not working as expected
- SERVER-74345
mongodb-org-server 4.4.19, 5.0.15, 6.0.5 not starting after upgrading from older version (Debian, RPM Packages)
- SERVER-75205
Deadlock between stepdown and restoring locks after yielding when all read tickets exhausted
- WT-9500
Fix RTS to use cell time window instead of key/value timestamps of HS update
- All JIRA issues closed in 4.4.20
- 4.4.20 Changelog
4.4.19 - Feb 27, 2023
Issues fixed:
- SERVER-68122
Investigate replicating the collection WiredTiger config string during initial sync
- SERVER-71759
dataSize command doesn't yield
- SERVER-72222
MapReduce with single reduce optimization fails when merging results in sharded cluster
- SERVER-72535
Sharded clusters allow creating the 'admin', 'local', and 'config' databases with alternative casings
- SERVER-70235
Don't create range deletion documents upon v4.2-v4.4 upgrade in case of collection uuid mismatch
- WT-9599
Acquire the hot backup lock to call fallocate in the block manager
- All JIRA issues closed in 4.4.19
- 4.4.19 Changelog
4.4.18 - Nov 21, 2022
Issues fixed:
- SERVER-66289
$out incorrectly throws BSONObj size error on v5.0.8
- SERVER-61185
Use prefix_search for unique index lookup
- SERVER-68115
Bug fix for "elemMatchRootLength > 0" invariant trigger
- SERVER-50454
Avoiding sending the "keyValue" field to drivers on duplicate key error
- SERVER-69443
[4.4] Allow speculative majority reads in multi-doc txns when --enableMajorityReadConcern=false
- All JIRA issues closed in 4.4.18
- 4.4.18 Changelog
4.4.17 - Sep 28, 2022
Issues fixed:
- SERVER-68925
Reintroduce check table logging settings at startup (revert SERVER-43664)
- SERVER-56127
Retryable update may execute more than once if chunk is migrated and shard key pattern uses nested fields
- SERVER-64142
Add new enforceUniqueness to refineCollectionShardKey command
- SERVER-65382
AutoSplitVector should not use clientReadable to reorder shard key fields
- SERVER-61275
Destruct the size storer after the session cache has shutdown
- WT-9870
Fix updating pinned timestamp whenever oldest timestamp is updated during recovery
- All JIRA issues closed in 4.4.17
- 4.4.17 Changelog
4.4.16 - Aug 19, 2022
Issues fixed:
- SERVER-67302
"Reading from replicated collection without read timestamp or PBWM lock" crash with clock changes
- SERVER-61321
Improve handling of large/NaN values for text index version
- SERVER-60607
Improve handling of large/NaN values for geo index version
- SERVER-66418
Bad projection created during dependency analysis due to string order assumption
- WT-9096
Fix search near returning wrong key/value sometimes when key doesn't exist
- All JIRA issues closed in 4.4.16
- 4.4.16 Changelog
4.4.15 - Jun 21, 2022
Issues fixed:
- SERVER-66433
Backport deadline waiting for overlapping range deletion to finish to pre-v5.1 versions
- SERVER-65821
Deadlock during setFCV when there are prepared transactions that have not persisted commit/abort decision
- SERVER-65131
Disable opportunistic read targeting (except for hedged reads)
- SERVER-62272
Adding schema validation to a collection can prevent chunk migrations of failing documents
- SERVER-54900
Blocking networking calls can delay sync-source resolution indefinitely
- All JIRA issues closed in 4.4.15
- 4.4.15 Changelog
4.4.14 - May 9, 2022
Issues fixed:
- SERVER-64983
Release Client lock before rolling back WT transaction in TransactionParticipant::_resetTransactionState
- SERVER-62229
Fix invariant when applying index build entries while recoverFromOplogAsStandalone=true
- SERVER-60412
Host memory limit check does not honor cgroups v2
- SERVER-55429
Abort migration earlier when receiver is not cleaning overlapping ranges
- WT-8924
Don't check against on disk time window if there is an insert list when checking for conflicts in row-store
- All JIRA issues closed in 4.4.14
- 4.4.14 Changelog
4.4.13 - Mar 7, 2022
Issues fixed:
- SERVER-63203
Chunk splitter never splits if more than 8192 split points are found
- SERVER-62065
Upgrade path from 3.6 to 4.0 can leave chunk entries without history on the shards
- SERVER-59754
Incorrect logging of queryHash/planCacheKey for operations that share the same $lookup shape
- SERVER-55483
Add a new startup parameter that skips verifying the table log settings
- SERVER-40691
$nin:[...] queries are not indexed
- All JIRA issues closed in 4.4.13
- 4.4.13 Changelog
4.4.12 - Jan 21, 2022
Issues fixed:
- SERVER-62203
Change the thread name "Health checks progress monitor" to "FaultManagerProgressMonitor"
- SERVER-61930
Individual health observers should return an error if a timeout period elapses when doing a single health check
- SERVER-61637
Review range deleter batching policy
- SERVER-59362
Setup Fault Manager State Machine
- All JIRA issues closed in 4.4.12
- 4.4.12 Changelog
4.4.11 - Dec 30, 2021
Issues fixed:
- WT-8395
Inconsistent data after upgrade from 4.4.3 and 4.4.4 to 4.4.8+ and 5.0.2+
- SERVER-60326
Windows Server fails to start when X509 certificate has empty subject name
- SERVER-60310
OCSP response validation should not consider statuses of irrelevant certificates
- SERVER-59226
Deadlock when stepping down with a profile session marked as uninterruptible
- SERVER-56226
[v4.4] Introduce 'permitMigrations' field on config.collections entry to prevent chunk migrations from committing
- SERVER-51329
Unexpected non-retryable error when shutting down a mongos server
- SERVER-45953
Exempt oplog readers from acquiring read tickets
- All JIRA issues closed in 4.4.11
- 4.4.11 Changelog
4.4.10 - Oct 15, 2021
Issues fixed:
- SERVER-59876
Large delays in returning from libcrypto.so while establishing egress connections
- SERVER-59867
Split horizon mappings in ReplSetConfig/MemberConfig should be serialized deterministically
- SERVER-59456
Start the LDAPReaper threadpool
- SERVER-59074
Do not acquire storage tickets just to set/wait on oplog visibility
- SERVER-54064
Sessions on arbiters accumulate and cannot be cleared out
- All JIRA issues closed in 4.4.10
- 4.4.10 Changelog
4.4.9 - Sep 21, 2021
Issues fixed:
- SERVER-57630
Enable SSL_OP_NO_RENEGOTIATION on Ubuntu 18.04 when running against OpenSSL 1.1.1
- SERVER-34938
Secondary slowdown or hang due to content pinned in cache by single oplog batch
- WT-8005
Fix a prepare commit bug that could leave the history store entry unresolved
- WT-7995
Fix the global visibility that it cannot go beyond checkpoint visibility
- WT-7984
Fix a bug that could cause a checkpoint to omit a page of data
- All JIRA issues closed in 4.4.9
- 4.4.9 Changelog
4.4.8 - Aug 4, 2021
Issues fixed:
- SERVER-58936
Unique index constraints may not be enforced
- SERVER-58258
Wait for initial sync to clear state before asserting 'replSetGetStatus' reply has no 'initialSync' field
- SERVER-52906
moveChunk after failed migration that rolled back cloning indexes can hang indefinitely due to missing shard key index
- WT-7837
Clear updates structure in wt_hs_insert_updates to avoid firing assert
- WT-6729
Quiesce eviction prior running rollback to stable's active transaction check
- All JIRA issues closed in 4.4.8
- 4.4.8 Changelog
4.4.7 - Jul 16, 2021
Issues fixed:
- SERVER-57476
Operation may block on prepare conflict while holding oplog slot, stalling replication indefinitely
- SERVER-56054
Change minThreads value for replication writer thread pool to 0
- SERVER-53760
$unwind + $sort pipeline produces large number of file handles when spilling to disk
- SERVER-47699
Change yield type used by range deleter from YIELD_MANUAL to YIELD_AUTO
- WT-7185
Avoid aborting a transaction if it is force evicting and oldest
- All JIRA issues closed in 4.4.7
- 4.4.7 Changelog
4.4.6 - May 10, 2021
Issues fixed:
- SERVER-53604
Include original aws iam arn in authenticate audit logs
- SERVER-52564
Deadlock between step down and MongoDOperationContextSession
- WT-7442
RTS to open dhandle only when the dhandle has unstable updates
- WT-7426
Set write generation number when the page image gets created
- WT-7373
Improve slow random cursor operations on oplog
- All JIRA issues closed in 4.4.6
- 4.4.6 Changelog
4.4.5 - Apr 8, 2021
Issues fixed:
- SERVER-55298
Reproduce and Investigate BSONObjectTooLarge error
- SERVER-53566
Investigate and reproduce "opCtx != nullptr && _opCtx == nullptr" invariant
- SERVER-51281
mongod live locked
- SERVER-46686
Explain does not respect maxTimeMS
- SERVER-45836
Provide more LDAP details (like server IP) at default log level
- All JIRA issues closed in 4.4.5
- 4.4.5 Changelog
4.4.4 - Feb 16, 2021
Issues fixed:
- SERVER-48471
Hashed indexes may be incorrectly marked multikey and be ineligible as a shard key
- SERVER-50769
server restarted after expr{"expr":"_currentApplyOps.getArrayLength() > 0","file":"src/mongo/db/pipeline/document_source_change_stream_transform.cpp","line":535}}
- SERVER-52919
Wire compression not enabled for initial sync
- WT-7109
Retain no longer supported configuration options for backward compatibility
- WT-7117
RTS to skip modifies that are more recent than on-disk base update while restoring an update
- All JIRA issues closed in 4.4.4
- 4.4.4 Changelog
4.4.3 - Jan 4, 2021
Issues fixed:
- SERVER-33966
redundant $sort in aggregation prevents best $limit $sort consolidation
- SERVER-40361
Reduce memory footprint of plan cache entries
- SERVER-52654
new signing keys not generated by the monitoring-keys-for-HMAC thread
- SERVER-52824
Support AWS roles with paths
- SERVER-52929
Correctly handle compound indexes with 32 keys
- All JIRA issues closed in 4.4.3
- 4.4.3 Changelog
4.4.2 - Nov 18, 2020
Issues fixed:
- SERVER-48067
Reduce memory consumption for unique index builds with large numbers of non-unique keys
- SERVER-48523
Unconditionally check the first entry in the oplog when attempting to resume a change stream
- SERVER-50365
Stuck with long-running transactions that can't be timed out
- SERVER-50394
mongod audit log attributes DDL operations to the __system user in a sharded environment
- SERVER-51041
Throttle starting transactions for secondary reads
- SERVER-51120
Find queries with MERGE_SORT incorrectly sort the results when the collation is specified
- All JIRA issues closed in 4.4.2
- 4.4.2 Changelog
4.4.1 - Sep 9, 2020
Issues fixed:
- SERVER-48531
3 way deadlock can happen between chunk splitter, prepared transactions and stepdown thread.
- SERVER-48641
Deadlock due to the MigrationDestinationManager waiting for write concern with the session checked-out
- SERVER-49546
setFCV to 4.4 should insert range deletion tasks in batches rather than one at a time
- SERVER-49694
On a sharded cluster, nearest or hedged reads may not be routed to a near shard replica.
- SERVER-50137
MongoDB crashes with Invariant failure due to oplog entries generated in 3.4
- SERVER-50140
Initial sync cannot survive unclean restart of the sync source
- SERVER-50170
Fix server selection failure on mongos
- WT-6623
Set the connection level file ID in recovery file scan
- All JIRA issues closed in 4.4.1
- 4.4.1 Changelog
Full Time Diagnostic Data Capture Requirements
Starting in version 4.4, if the Full Time Diagnostic Data Capture (FTDC) thread in mongod
or mongos
fails, it terminates the originating process. To avoid the most common failures, confirm that the user running mongod
/mongos
has permissions to create the FTDC diagnostic.data
directory within storage.dbPath
(for mongod
) or parallel to systemLog.path
(for mongos
).
Aggregation
Union All ($unionWith
Stage)
MongoDB 4.4 adds the $unionWith
aggregation stage, providing the ability to combine pipeline results from multiple collections into a single result set.
For details, see $unionWith
.
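For example, a minimal sketch of combining two collections (the sales2019 and sales2020 collection names and the region and amount fields are hypothetical):
db.sales2020.aggregate( [
   { $unionWith: { coll: "sales2019", pipeline: [ { $match: { region: "EMEA" } } ] } },
   { $group: { _id: "$region", total: { $sum: "$amount" } } }
] )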
Custom Aggregation Expressions
Starting in version 4.4, MongoDB provides two new operators, $accumulator and $function, that allow users to define custom aggregation expressions.
With the addition of these new operators, you can use aggregation to write custom JavaScript expressions instead of relying on mapReduce
and $where
.
Even before version 4.4, various map-reduce expressions could also be rewritten using other aggregation pipeline operators, such as $group
, $merge
, etc., without requiring custom functions.
For more information, see Map-Reduce to Aggregation Pipeline.
New Aggregation Operators
Operator | Description |
---|---|
$accumulator | Returns the result of a user-defined accumulator operator. |
$binarySize | Returns the size of a given string or binary data value's content in bytes. |
$bsonSize | Returns the size in bytes of a given document (i.e. bsontype Object ) when encoded as BSON. |
$function | Defines a custom aggregation expression. |
$isNumber | Returns boolean true if the specified expression resolves to an integer, decimal, double, or long. Returns boolean false if the expression resolves to any other BSON type, null, or a missing field. |
$replaceOne | Replaces the first instance of a matched string in a given input. |
$replaceAll | Replaces all instances of a matched string in a given input. |
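To illustrate the custom expression operators above, the following sketch uses $function to compute a field with custom JavaScript (the players collection and name field are hypothetical):
db.players.aggregate( [
   { $addFields: {
      greeting: {
         $function: {
            body: function(name) { return "Hello, " + name + "!"; },
            args: [ "$name" ],
            lang: "js"
         }
      }
   } }
] )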
General Aggregation Improvements
$out
Starting in MongoDB 4.4:
- $out can output to a collection in a different database (see the example below). In earlier versions, $out can only output to a collection in the same database where the aggregation is run.
- $out can run on replica set secondary nodes only if all the nodes in the cluster have featureCompatibilityVersion set to 4.4 or higher and the Read Preference allows secondary reads. Check your driver documentation to see when your driver added support.
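A minimal sketch of the cross-database form (the reporting database and the sales and sales2020 collection names are hypothetical):
db.sales.aggregate( [
   { $match: { year: 2020 } },
   { $out: { db: "reporting", coll: "sales2020" } }
] )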
$indexStats
Starting in MongoDB 4.4 (also available starting in 4.2.4), $indexStats
includes the following fields in its output:
Field | Description |
---|---|
shard | Name of the shard, if applicable. |
spec | Index specification document |
building | A boolean flag that indicates if the index is currently being built. |
$merge
Starting in MongoDB 4.4:
- $merge can output to a collection in a different database (see the example below). In earlier versions, $merge can only output to a collection in the same database where the aggregation is run.
- $merge can run on replica set secondary nodes only if all the nodes in the cluster have featureCompatibilityVersion set to 4.4 or higher and the Read Preference allows secondary reads. Check your driver documentation to see when your driver added support.
Starting in MongoDB 4.4, $merge
can output to the same collection that is being aggregated. You can also output to a collection which appears in other stages of the pipeline, such as $lookup
.
Versions of MongoDB prior to 4.4 did not allow $merge
to output to the same collection as the collection being aggregated.
When $merge
outputs to the same collection that is being aggregated, documents may get updated multiple times or the operation may result in an infinite loop. This behavior occurs when the update performed by $merge
changes the physical location of documents stored on disk. When the physical location of a document changes, $merge
may view it as an entirely new document, resulting in additional updates. For more information on this behavior, see Halloween Problem.
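A minimal sketch of $merge writing into another database (database, collection, and field names are hypothetical):
db.sales.aggregate( [
   { $group: { _id: "$region", total: { $sum: "$amount" } } },
   { $merge: { into: { db: "reporting", coll: "regionTotals" }, whenMatched: "replace", whenNotMatched: "insert" } }
] )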
$planCacheStats
Changes
Starting in version 4.4,
- The $planCacheStats stage can be run on mongos instances as well as on mongod instances. In 4.2, the $planCacheStats stage can only run on mongod instances.
- $planCacheStats includes new fields: the host field and, when run against a mongos, the shard field.
- The mongo shell provides the method PlanCache.list() as a wrapper for the $planCacheStats aggregation stage.
- MongoDB removes the following: the planCacheListPlans and planCacheListQueryShapes commands, and the PlanCache.getPlansByQuery() and PlanCache.listQueryShapes() methods. Use $planCacheStats or PlanCache.list() instead.
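For example, either of the following returns plan cache information for a collection in 4.4 (the orders collection is hypothetical):
db.orders.aggregate( [ { $planCacheStats: { } } ] )
db.orders.getPlanCache().list()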
$collStats
Changes
Starting in MongoDB 4.4, $collStats accepts the queryExecStats field as an argument document. Providing this field adds a queryExecStats embedded document, which contains the collectionScans field, to the output.
The collectionScans field contains an embedded document bearing the following fields:
Field Name | Description |
---|---|
total | A 64-bit integer giving the total number of queries that performed a collection scan. The total consists of queries that did and did not use a tailable cursor. |
nonTailable | A 64-bit integer giving the number of queries that performed a collection scan that did not use a tailable cursor. |
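A minimal sketch of requesting these statistics (the orders collection is hypothetical):
db.orders.aggregate( [ { $collStats: { queryExecStats: { } } } ] )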
explain
Changes
Starting in MongoDB 4.4, when you run the db.collection.explain().aggregate()
method in executionStats
and allPlansExecution
modes, each pipeline stage listed in the explain output includes nReturned
and executionTimeMillisEstimate
.
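For example, the following sketch reports nReturned and executionTimeMillisEstimate for each stage (the orders collection and pipeline are hypothetical):
db.orders.explain( "executionStats" ).aggregate( [
   { $match: { status: "A" } },
   { $group: { _id: "$cust_id", total: { $sum: "$amount" } } }
] )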
Replica Sets
Resumable Initial Sync
Starting in MongoDB 4.4, a secondary performing initial sync can attempt to resume the sync process if interrupted by a transient (i.e. temporary) network error, collection drop, or collection rename. The sync source must also run MongoDB 4.4 to support resumable initial sync. If the sync source runs MongoDB 4.2 or earlier, the secondary must restart the initial sync process as if it encountered a non-transient network error.
By default, the secondary tries to resume initial sync for 24 hours. MongoDB 4.4 adds the initialSyncTransientErrorRetryPeriodSeconds
server parameter for controlling the amount of time the secondary attempts to resume initial sync. If the secondary cannot successfully resume the initial sync process during the configured time period, it selects a new healthy source from the replica set and restarts the initial synchronization process from the beginning.
Prior to MongoDB 4.4, the secondary would restart the entire initial sync if it encountered an error during the process.
Streaming Replication
Starting in MongoDB 4.4, sync sources send a continuous stream of oplog entries to their syncing secondaries.
Prior to MongoDB 4.4, secondaries fetched batches of oplog entries by issuing a request to their sync source and waiting for a response. This required a network roundtrip for each batch of oplog entries. MongoDB 4.4 adds the oplogFetcherUsesExhaust
startup parameter for disabling streaming replication and using the older replication behavior.
For details, see Streaming Replication.
Rollback Directory
Starting in MongoDB 4.4, the rollback directory for a collection is named after the collection's UUID rather than the collection namespace; e.g.
<dbpath>/rollback/20f74796-d5ea-42f5-8c95-f79b39bad190/removed.2020-02-19T04-57-11.0.bson
For details, see Rollback Data.
Minimum Oplog Retention Period
Starting in MongoDB 4.4, you can specify the minimum number of hours to preserve an oplog entry. The mongod
only removes an oplog entry if:
- The oplog has reached the maximum configured size, and
- The oplog entry is older than the configured number of hours based on the host system clock.
By default MongoDB does not set a minimum oplog retention period and automatically truncates the oplog starting with the oldest entries to maintain the configured maximum oplog size.
To configure the minimum oplog retention period when starting the mongod
, either:
- Add the storage.oplogMinRetentionHours setting to the mongod configuration file.
-or-
- Add the --oplogMinRetentionHours command line option.
To configure the minimum oplog retention period on a running mongod
, use replSetResizeOplog
. Setting the minimum oplog retention period while the mongod
is running overrides any values set on startup. You must update the value of the corresponding configuration file setting or command line option to persist those changes through a server restart.
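For example, the following sketch sets a 16000 megabyte maximum oplog size and a two hour minimum retention period on a running mongod (the values are illustrative):
db.adminCommand( { replSetResizeOplog: 1, size: 16000, minRetentionHours: 2 } )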
The oplog can grow without constraint so as to retain oplog entries for the configured number of hours. This may result in reduction or exhaustion of system disk space due to a combination of high write volume and large retention period.
slowOpSampleRate
Affects Secondary Logs
Starting in MongoDB 4.4, slow oplog application logs on replica set secondaries are affected by the slowOpSampleRate
. In previous versions, MongoDB logged all slow oplog entries regardless of the sample rate.
slowOpSampleRate
specifies the fraction of slow operations that should be profiled or logged.
Indexes Build Simultaneously on Data-Bearing Replica Set Members
Requires featureCompatibilityVersion 4.4+
Each mongod
in the replica set or sharded cluster must have featureCompatibilityVersion set to at least 4.4
to start index builds simultaneously across replica set members.
MongoDB 4.4 running featureCompatibilityVersion: "4.2"
builds indexes on the primary before replicating the index build to secondaries.
Starting with MongoDB 4.4, index builds on a replica set or sharded cluster build simultaneously across all data-bearing replica set members. For sharded clusters, the index build occurs only on shards containing data for the collection being indexed. The primary requires a minimum number of data-bearing voting
members (i.e. commit quorum), including itself, that must complete the build before marking the index as ready for use.
By default, index builds use a commit quorum of all data-bearing voting members. To start an index build with a non-default commit quorum, MongoDB 4.4 adds the commitQuorum parameter to createIndexes
or its shell helpers db.collection.createIndex()
and db.collection.createIndexes()
.
To modify the quorum required for an in-progress index build, MongoDB 4.4 introduces the new setIndexCommitQuorum
command.
See Index Builds in Replicated Environments for more information.
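A sketch of starting an index build with a non-default commit quorum and then changing the quorum while the build is in progress (collection, field, and index names are hypothetical):
// The third argument to createIndex() is the commit quorum (new in 4.4).
db.orders.createIndex( { customer_id: 1, order_id: 1 }, { name: "cust_order_idx" }, "majority" )
// Change the commit quorum for the in-progress build.
db.runCommand( {
   setIndexCommitQuorum: "orders",
   indexNames: [ "cust_order_idx" ],
   commitQuorum: "votingMembers"
} )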
Replica Set Reconfiguration Changes
Changes to replSetReconfig
Starting in MongoDB 4.4, the replSetReconfig
command waits until a majority of voting members install the replica configuration before returning success. A voting member is any replica member where members[n].votes
is 1
, including arbiters. First, the operation waits until the current configuration is committed before installing the new configuration on the primary. The operation then waits until a majority of voting members install the new configuration before returning successfully. See Reconfiguration Waits Until a Majority of Members Install the Replica Configuration for more information.
replSetReconfig
waits indefinitely for a majority of voting members to install the configuration by default. MongoDB 4.4 also adds the optional maxTimeMS parameter to replSetReconfig
for specifying the maximum amount of time to wait for the operation to return successfully.
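For example, a sketch of a reconfiguration that gives up after five seconds (the priority change is illustrative):
cfg = rs.conf()
cfg.members[0].priority = 2
db.adminCommand( { replSetReconfig: cfg, maxTimeMS: 5000 } )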
Starting in MongoDB 4.4, the replSetReconfig
command allows adding or removing no more than 1
voting
member at a time. To add or remove multiple voting members, issue a series of replSetReconfig
or rs.reconfig()
operations to add or remove one member at a time. See Reconfiguration Can Add or Remove No More than One Voting Member at a Time for more information.
Changes to replSetGetConfig
The replSetGetConfig
command can specify a new option commitmentStatus: true when run on the primary. When run with the option, the command includes in the output a commitmentStatus field. This output field indicates whether the replica set's previous reconfig has been committed, so that the replica set is ready to be reconfigured again. For more information, see the replSetGetConfig command.
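For example, run against the primary:
db.adminCommand( { replSetGetConfig: 1, commitmentStatus: true } )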
Changes to Replica Configuration Document
MongoDB 4.4 adds the term
field to the replica set configuration document. Replica set members use term
and version
to achieve consensus on the "newest" replica configuration. Setting featureCompatibilityVersion (fCV) : "4.4" implicitly performs a replSetReconfig
to add the term
field to the configuration document and blocks until the new configuration propagates to a majority of replica set members. Similarly, downgrading to fCV : "4.2"
implicitly performs a reconfiguration to remove the term
field.
Preferred Initial Sync Source
Starting in MongoDB 4.4, you can specify the preferred initial sync source using the initialSyncSourceReadPreference
parameter. You can only set this parameter on mongod
startup, using either the setParameter
configuration file setting or the --setParameter
command line option.
initialSyncSourceReadPreference supports the following read preference modes:
- primary
- primaryPreferred (Default for voting replica set members)
- secondary
- secondaryPreferred
- nearest (Default for newly added or non-voting replica set members)
If the replica set has disabled chaining
, the default initialSyncSourceReadPreference
read preference mode is primary
.
You cannot specify a tag set or maxStalenessSeconds
to initialSyncSourceReadPreference
.
Mirrored Reads
Starting in version 4.4, MongoDB provides mirrored reads to pre-warm electable secondary members' cache with the most recently accessed data. With mirrored reads, the primary can mirror a subset of operations that it receives and send them to a subset of electable secondaries. Pre-warming the cache of a secondary can help restore performance more quickly after an election.
The primary's response to the client is not affected by the mirrored reads. The mirrored reads are "fire-and-forget" operations by the primary; i.e., the primary does not await the response for the mirrored reads.
Mirrored Reads Parameter
MongoDB 4.4 adds the following mirrored reads parameter. You can set the parameter at startup using the setParameter configuration file setting or the --setParameter command line option, or at runtime with the setParameter command:
Parameter | Description |
---|---|
mirrorReads | Specifies the samplingRate and maxTimeMS settings for mirrored reads: { samplingRate: <float>, maxTimeMS: <int> }. A samplingRate of 0 turns off mirrored reads. |
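For example, the following sketch changes the sampling rate at runtime (the value is illustrative):
db.adminCommand( { setParameter: 1, mirrorReads: { samplingRate: 0.10 } } )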
Mirrored Reads Metrics
The command serverStatus
and its corresponding mongo
shell method db.serverStatus()
return mirroredReads
if you specify the field's inclusion in the operation:
db.runCommand( { serverStatus: 1, mirroredReads: 1 } )
or
db.serverStatus( { mirroredReads: 1 } )
Sharded Clusters
Refinable Shard Keys
Starting in 4.4, MongoDB provides the refineCollectionShardKey
command. With the new command, you can refine a collection's shard key by adding a suffix field or fields to the existing key. Refining a collection's shard key allows for a more fine-grained data distribution and can address situations where the existing key has led to jumbo (i.e. indivisible)
chunks due to insufficient cardinality.
For example, you may have an existing orders
collection with the shard key { customer_id: 1 }
. You can change the shard key by adding a suffix order_id
field to the shard key so that {
customer_id: 1, order_id: 1 }
becomes the new shard key, allowing data distribution by both customer_id
and order_id
fields.
To use the refineCollectionShardKey
command, the sharded cluster must have feature compatibility version (fcv) of 4.4
. For more information, see the refineCollectionShardKey
command.
After you refine the shard key, it may be that not all documents in the collection have the suffix field(s). To populate the missing shard key field(s), see Missing Shard Key Fields.
Before refining the shard key, ensure that all or most documents in the collection have the suffix fields, if possible, to avoid having to populate the field afterwards.
In earlier versions, once you select a shard key, you cannot modify the shard key.
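Continuing the example above, a sketch of refining the shard key (the test.orders namespace is hypothetical; the new shard key must be supported by an index):
// Create an index that supports the new shard key, if one does not already exist.
db.getSiblingDB("test").orders.createIndex( { customer_id: 1, order_id: 1 } )
// Refine the shard key; run against a mongos.
db.adminCommand( {
   refineCollectionShardKey: "test.orders",
   key: { customer_id: 1, order_id: 1 }
} )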
Missing Shard Keys
With the ability to refine a shard key with a suffix, it may be that not all documents in the collection have the suffix fields. Starting in version 4.4, documents in a sharded collection can be missing the shard key fields. In earlier versions, shard key fields must exist in every document for a sharded collection. For details, see Missing Shard Key Fields.
Hedged Reads
To minimize latencies, mongos
instances, by default, can use hedged reads. With hedged reads, the mongos
instances can route read operations to multiple members for each queried shard and return results from the first respondent per shard. By default, mongos
instances support using hedged reads. To turn off a mongos
instance's support for hedged reads, set the readHedgingMode
parameter for the mongos
.
Hedged reads are specified per operation as part of the read preference. Non-primary
read preferences support hedged reads. Read preference nearest
specifies hedged read by default.
Hedged Read Parameters
Parameter | Description |
---|---|
readHedgingMode | Enables or disables mongos instance's support for hedged reads. |
maxTimeMSForHedgedReads | Specifies the maximum time limit (in milliseconds) for the additional read sent to hedge a read operation. |
Hedged Read Option for Read Preference
To specify hedged read for a read preference, MongoDB 4.4 introduces the Hedged Read Option. To set using a MongoDB driver, refer to the driver read preference API documentation.
The cursor.readPref() and Mongo.setReadPref() mongo shell methods can accept hedge options to enable hedged reads for the specified read preference.
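For example, a sketch of requesting a hedged read from the mongo shell, assuming cursor.readPref() accepts a hedge options document as its third argument as described above (the orders collection and filter are hypothetical):
// Route the read with a non-primary read preference and enable hedging for this operation.
db.orders.find( { status: "A" } ).readPref( "secondaryPreferred", null, { enabled: true } )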
Hedged Read Metrics
The command serverStatus
and its corresponding mongo
shell method db.serverStatus()
return hedgingMetrics
.
balancerCollectionStatus
Command
MongoDB 4.4 provides the command balancerCollectionStatus
and the mongo
shell helper method sh.balancerCollectionStatus()
that return information about whether, as of the time the command is run, the chunks of a sharded collection are balanced (i.e. do not need to be moved) or still need to be moved. With the command, users can verify that initial chunk creation and migration has finished.
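For example (the test.orders namespace is hypothetical):
sh.balancerCollectionStatus( "test.orders" )
// or, equivalently, against the admin database of a mongos:
db.adminCommand( { balancerCollectionStatus: "test.orders" } )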
Improved mongos
Startup Procedure
Starting with MongoDB 4.4, mongos
adds the following new default startup behavior:
mongos
preloads a sharded cluster's routing table on startup, rather than doing so on-demand for the first incoming client connection.mongos
prewarms its connection pool to shard hosts on startup, rather than doing so on-demand for incoming client connections.
This behavior results in faster servicing of initial client connections after a mongos
instance is started or restarted. In particular, this allows sites that employ multiple mongos
instances to restart them as necessary, or add new ones, without initial client requests to those instances needing to wait on connection establishment.
Both routing table preloading and connection pool prewarming are enabled by default.
MongoDB 4.4 adds the following parameters for controlling this behavior:
loadRoutingTableOnStartup
- Default: true (Enabled)
- Enables or disables support for preloading the routing table on startup for the mongos.
warmMinConnectionsInShardingTaskExecutorPoolOnStartup
- Default: true (Enabled)
- Enables or disables support for prewarming the connection pool on startup for the mongos.
warmMinConnectionsInShardingTaskExecutorPoolOnStartupWaitMS
- Default: 2000 (2 seconds)
- Sets the timeout in milliseconds before client connections to the mongos are allowed regardless of established connection pool size.
Improved Routing Table Updates
Running flushRouterConfig
is no longer required after executing the movePrimary
or dropDatabase
commands. These two commands now automatically refresh a sharded cluster's routing table as needed when run. Manually issuing the flushRouterConfig
command is still recommended in the cases described under flushRouterConfig Considerations.
Compound Hashed Shard Keys
Starting in MongoDB 4.4, you can shard a collection using a compound shard key with a single hashed field. Prior to 4.4, MongoDB did not support compound shard keys with a hashed field.
Compound hashed sharding supports features like zone sharding, where the prefix (i.e. first) non-hashed field or fields support zone ranges while the hashed field supports more even distribution of the sharded data. For example, the following operation shards a collection on a compound hashed shard key that supports zoned sharding:
sh.shardCollection(
"examples.compoundHashedCollection",
{ "fieldA" : 1, "fieldB" : 1, "fieldC" : "hashed" }
)
Compound hashed sharding also supports shard keys with a hashed prefix for resolving data distribution issues related to monotonically increasing fields. For example, the following operation shards a collection on a compound hashed shard key where the hashed field is the shard key prefix:
sh.shardCollection(
"examples.compoundHashedCollection",
{ "_id" : "hashed", "fieldA" : 1}
)
Chunk Migration Failover Resiliency Improvements
Starting in MongoDB 4.4, the following changes improve chunk migrations and orphaned document cleanup resiliency during failover:
- Chunk ranges awaiting cleanup after a chunk migration are now persisted in the
config.rangeDeletions
collection and replicated throughout the shard. In the event of a failover, the shard's new primary reads the documents in theconfig.rangeDeletions
collection and resumes deleting the corresponding ranges. The document that describes a range awaiting cleanup is deleted from theconfig.rangeDeletions
collection after the range is deleted. - The
cleanupOrphaned
command no longer deletes orphaned documents from a shard. Instead,cleanupOrphaned
waits for orphaned documents that are scheduled for deletion from a shard to be deleted.
Set the disableResumableRangeDeleter
parameter to true
on a shard's primary to pause range deletion on the shard.
General Sharded Clusters Improvements
Index Consistency Checks
Starting in MongoDB 4.4, the config server primary, by default, checks for index inconsistencies across the shards for sharded collections. The command serverStatus
returns the field shardedIndexConsistency
to report on index inconsistencies when run on the config server primary.
To configure the index consistency checks, MongoDB provides the following parameters:
Parameter | Description |
---|---|
enableShardedIndexConsistencyCheck | Enable or disable the index consistency checks. |
shardedIndexConsistencyCheckIntervalMS | The interval at which the config server's primary checks the index consistency of sharded collections. |
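To read the reported metric, a minimal check run on the config server primary might look like:
// Returns the index consistency summary reported by serverStatus.
db.adminCommand( { serverStatus: 1 } ).shardedIndexConsistency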
Concurrent removeShard
Operations
Starting in MongoDB 4.4, you can have more than one removeShard
operation in progress.
In earlier versions, removeShard
returns an error if another removeShard
operation is in progress.
Shard Key Limit
Starting in version 4.4, MongoDB removes the 512-byte limit on the shard key size. For MongoDB 4.2 and earlier, a shard key cannot exceed 512 bytes.
Partial Results
Starting in 4.4, if the find
or subsequent getMore
commands return partial results due to the unavailability of the queried shard(s), the output includes a boolean flag partialResultsReturned
.
Jumbo Chunk Migration
For chunks that are too large to migrate, starting in MongoDB 4.4:
- A new balancer setting
attemptToBalanceJumboChunks
allows the balancer to migrate chunks too large to move as long as the chunks are not labeled jumbo. See Balance Ranges that Exceed Size Limit for details. - The
moveChunk
command can specify a new option forceJumbo to allow for the migration of chunks that are too large to move. The chunks may or may not be labeled jumbo.
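A sketch of applying the settings described in this list (the namespace, shard name, and shard key value are hypothetical):
// Allow the balancer to migrate oversized chunks that are not labeled jumbo.
db.getSiblingDB("config").settings.updateOne(
   { _id: "balancer" },
   { $set: { attemptToBalanceJumboChunks: true } },
   { upsert: true }
)
// Force migration of a specific chunk even if it exceeds the size limit.
db.adminCommand( {
   moveChunk: "test.orders",
   find: { customer_id: 12345 },
   to: "shard0001",
   forceJumbo: true
} )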
Improved Catalog Cache Refresh
Starting in 4.4, if there is a stale chunk, the catalog cache is only refreshed when routers access a shard that previously had or currently has that chunk.
Prior to MongoDB 4.4, any stale chunk caused the entire chunk distribution for a collection to be marked as stale and forced all routers who contact the shard to refresh their shard catalog cache. MongoDB 4.4 adds the enableFinerGrainedCatalogCacheRefresh
startup parameter for disabling catalog cache refresh for only a targeted shard and using the older catalog cache refresh behavior. The enableFinerGrainedCatalogCacheRefresh
parameter defaults to true
.
Automatically Split system.sessions
Collection
Starting in version 4.4 (and 4.2.7), MongoDB automatically splits the system.sessions
collection into at least 1024 chunks and distributes the chunks uniformly across shards in the cluster.
Projection
Starting in MongoDB 4.4, as part of making find()
and findAndModify()
projection consistent with aggregation's $project
stage,
- The find() and findAndModify() projection can accept aggregation expressions and aggregation syntax, including the use of literals and aggregation variables. With the use of aggregation expressions and syntax, you can project new fields or project existing fields with new values (see the example after this list).
- The find() and findAndModify() projection can specify embedded fields using the nested form, e.g. { field: { nestedfield: 1 } }, as well as dot notation. In earlier versions, you can only use the dot notation.
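A sketch of a projection that uses both the nested form and aggregation expressions (the inventory collection and its fields are hypothetical):
db.inventory.find(
   { },
   {
      item: 1,
      stock: { warehouse: 1 },                             // nested form; dot notation "stock.warehouse" also works
      discountedPrice: { $multiply: [ "$price", 0.9 ] },   // aggregation expression projecting a computed field
      currency: { $literal: "USD" }                        // literal value
   }
)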
For more information, see:
db.collection.find()
db.collection.findOneAndDelete()
db.collection.findOneAndReplace()
db.collection.findOneAndUpdate()
db.collection.findAndModify()
$meta
Operator
$meta
Keyword Support
Starting in MongoDB 4.4, the $meta
operator adds support for retrieving the indexKey
metadata. The indexKey
metadata is for debugging purposes only and not for application logic. See $meta
for more information.
{ $meta: "textScore" }
Usage with find()
Starting in version 4.4, MongoDB makes the following { $meta: "textScore" }
changes when used with db.collection.find()
:
- You must specify the $text operator in the query predicate to use { $meta: "textScore" }.
- You can sort the resulting documents by their search relevance, i.e. { $meta: "textScore" }, without also having to project the textScore. In earlier versions, to include the { $meta: "textScore" } expression in the sort(), you must also include the same expression in the projection.
- If you include the { $meta: "textScore" } expression in both the projection and sort, i.e. db.collection.find(<$text search>, <projection>).sort(<sort>), the projection and sort documents can have different field names for the expression (see the example after this list). In previous versions of MongoDB, if you include the { $meta: "textScore" } in both the projection and sort, you must specify the same field name in both places.
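For example, a sketch that sorts by relevance using different field names in the projection and the sort (the articles collection is hypothetical and assumes a text index on its content):
db.articles.find(
   { $text: { $search: "coffee" } },
   { title: 1, score: { $meta: "textScore" } }
).sort( { relevance: { $meta: "textScore" } } )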
For more information, see Text Score Metadata $meta: "textScore"
. For examples of "textScore"
projections and sorts, see Text Search Score Examples.
See: Text Search Metadata { $meta: "textScore" } Query Requirement
Transactions
Starting in MongoDB 4.4 with feature compatibility version (fcv) "4.4"
, you can create collections and indexes inside a multi-document transaction unless the transaction is a cross-shard write transaction.
When creating a collection inside a transaction:
- You can implicitly create a collection, such as with:
- an insert operation against a non-existing collection;
- an update/findAndModify operation with
upsert: true
against a non-existing collection.
- You can explicitly create a collection using the
create
command or its helper db.createCollection()
. - The
db.createCollection()
method fails if executed against a system collection.
When creating an index inside a transaction:
- You can create an index on a non-existing collection. The collection is created as part of the operation.
- You can create an index on a new empty collection created earlier in the same transaction.
- The
db.collection.createIndex()
method fails if executed against a system collection.
For more details, see Create Collections and Indexes In a Transaction.
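A sketch of creating a collection and an index inside a transaction on a replica set (the database, collection, and field names are hypothetical):
const session = db.getMongo().startSession();
session.startTransaction( { readConcern: { level: "local" }, writeConcern: { w: "majority" } } );
const sessionDb = session.getDatabase("test");
sessionDb.newOrders.insertOne( { _id: 1, status: "A" } );   // implicitly creates the collection
sessionDb.createCollection( "auditTrail" );                 // explicitly creates a collection
sessionDb.auditTrail.createIndex( { createdAt: 1 } );       // index on a collection created in this transaction
session.commitTransaction();
session.endSession();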
MongoDB 4.4 adds a new parameter shouldMultiDocTxnCreateCollectionAndIndexes
which can enable (default) or disable collection and index creation inside a transaction. When setting the parameter for a sharded cluster, set the parameter on all shards.
For explicit creation of a collection or an index inside a transaction, the transaction read concern level must be "local"
. Explicit creation is through:
Command | Method |
---|---|
create | db.createCollection() |
createIndexes | db.collection.createIndex() |
Sorting
$sort
Changes
Starting in MongoDB 4.4, the sort()
method now uses the same sort algorithm as the $sort
aggregation stage. With this change, queries which perform a sort()
on fields that contain duplicate values are much more likely to result in inconsistent sort orders for those values.
To guarantee sort consistency when using sort()
on duplicate values, include an additional field in your sort that contains exclusively unique values.
This can be accomplished easily by adding the _id
field to your sort.
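For example (the restaurants collection and rating field are hypothetical):
db.restaurants.find().sort( { rating: -1, _id: 1 } )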
See Sort Consistency for more information.
Security Improvements
New Kerberos Validation Tool mongokerberos
MongoDB Enterprise 4.4 provides a new mongokerberos
tool for validating your platform's Kerberos configuration for use with MongoDB, and for testing end-to-end client authentication through Kerberos. When run, mongokerberos
returns a report indicating any issues encountered, and provides potential advice for resolving them. mongokerberos
is available in MongoDB Enterprise only.
OCSP
Starting in version 4.4, MongoDB enables, by default, the use of OCSP (Online Certificate Status Protocol) to check for certificate revocation. The use of OCSP eliminates the need to periodically download a Certificate Revocation List (CRL)
and restart the mongod
/ mongos
with the updated CRL.
In versions 4.0 and 4.2, the use of OCSP is available only through the use of the system certificate store on Windows or macOS.
OCSP Stapling/Must Staple
As part of its OCSP support, MongoDB 4.4 supports the following on Linux:
- OCSP stapling
. With OCSP stapling,
mongod
andmongos
instances attach or "staple" the OCSP status response to their certificates when providing these certificates to clients during the TLS/SSL handshake. By including the OCSP status response with the certificates, OCSP stapling obviates the need for clients to make a separate request to retrieve the OCSP status of the provided certificates. - OCSP must-staple extension
. OCSP must-staple is an extension that can be added to the server certificate that tells the client to expect an OCSP staple when it receives a certificate during the TLS/SSL handshake.
OCSP Parameters
MongoDB 4.4 adds the following OCSP parameters. You can set these parameters at startup using the setParameter
configuration file setting or the --setParameter
command line option:
Parameter | Description |
---|---|
ocspEnabled | Enables or disables the OCSP support. |
ocspValidationRefreshPeriodSecs | Specifies the number of seconds to wait before refreshing the stapled OCSP status response. |
tlsOCSPStaplingTimeoutSecs | Specifies the maximum number of seconds the mongod / mongos instance should wait to receive the OCSP status response for its certificates. |
tlsOCSPVerifyTimeoutSecs | Specifies the maximum number of seconds that the mongod / mongos should wait for the OCSP response when verifying client certificates. |
x.509 Certificates Nearing Expiry Trigger Warnings
Starting in MongoDB 4.4, mongod
/ mongos
logs a warning on connection if the presented x.509 certificate expires within 30
days of the mongod/mongos
system clock. Specifically, the following connections to a mongod
or mongos
can trigger x.509 certificate expiry warnings:
- A mongo shell or an application using a MongoDB driver establishing a TLS connection or performing x.509 client authentication with a certificate expiring in less than 30 days (i.e. the certificate specified to --tlsCertificateKeyFile or tlsCertificateKeyFile).
- A mongod cluster member performing x.509 membership authentication with a certificate expiring in less than 30 days (i.e. the certificate specified to net.tls.clusterFile, net.tls.clusterCertificateSelector, mongod --tlsClusterFile or mongod --tlsClusterCertificateSelector).
- A mongos cluster member performing x.509 membership authentication with a certificate expiring within 30 days (i.e. the certificate specified to net.tls.clusterFile, net.tls.clusterCertificateSelector, mongos --tlsClusterFile or mongos --tlsClusterCertificateSelector).
The warning log message resembles the following:
<Timestamp> W NETWORK [connection] Peer certificate <Certificate Subject Name> expires...
Consider proactively renewing client x.509 certificates nearing expiration to ensure continued connectivity to the cluster.
MongoDB 4.4 adds the tlsX509ExpirationWarningThresholdDays
parameter for controlling certificate expiration warning threshold. Set the parameter to 0
to disable the warning. For complete documentation, see tlsX509ExpirationWarningThresholdDays
.
TLS 1.3 Support
On CentOS 8 and RHEL 8, MongoDB 4.4 (as well as versions 4.2, 4.0, and 3.6) supports TLS 1.3.
User to DN Mapping Exits on Network or Authentication Failures
A mongod
, mongos
, or mongoldap
returns an error if one of the user to Distinguished Name (DN) mappings cannot be evaluated due to networking or authentication failures to the LDAP server.
The mongod
, mongos
, or mongoldap
rejects the connection request and does not check the remaining mappings, if any.
To specify the user to DN mapping, see:
Structured Logging
Starting in MongoDB 4.4, mongod
/ mongos
instances now output all log messages in structured JSON format. Log entries are written as a series of key-value pairs, where each key indicates a log message field type, such as "severity", and each corresponding value records the associated logging information for that field type, such as "informational".
This includes log output sent to the file, syslog, and stdout (standard out) log destinations, as well as the output of the getLog
command.
Previously, log entries were output as plaintext.
The following log messages in JSON format indicate that a mongod
is listening and ready for connections:
{"t":{"$date":"2020-05-18T20:18:13.533+00:00"},"s":"I", "c":"NETWORK", "id":23015, "ctx":"listener","msg":"Listening on","attr":{"address":"127.0.0.1"}}
{"t":{"$date":"2020-05-18T20:18:13.533+00:00"},"s":"I", "c":"NETWORK", "id":23016, "ctx":"listener","msg":"Waiting for connections","attr":{"port":27001,"ssl":"off"}}
Structured logging with key-value pairs allows for efficient log analysis by automated tools or log ingestion services, and makes programmatic log parsing easier and more powerful.
When working with MongoDB structured logging, the third-party jq command-line utility is a useful tool that allows for easy pretty-printing of log entries, and powerful key-based matching and filtering.
jq
is an open-source JSON parser, and is available for Linux, Windows, and macOS.
For more information on structured logging, including a detailed examination of log entry components as well as command-line parsing examples, see Log Messages.
Multiple LDAP Password Support
Starting in MongoDB 4.4, the ldapQueryPassword
setParameter
command accepts either a string or an array of strings. If set to an array, each password is tried until one succeeds. This can be used to perform a rollover of the LDAP account password without downtime for MongoDB.
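For example, a sketch of supplying both the current and the new password during a rollover (placeholder values shown):
db.adminCommand( {
   setParameter: 1,
   ldapQueryPassword: [ "<current password>", "<new password>" ]
} )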
Platform Support
Added Platforms
MongoDB 4.4 adds support for several new platforms; see Platform Support for the complete list.
Removed Platforms
MongoDB 4.4 removes support for the following platforms:
- Amazon Linux 2013.03
- RHEL 6 / CentOS 6 / Oracle 6 on the s390x architecture
- Windows 7 / Server 2008 R2
- Windows 8 / Server 2012
- Windows 8.1 / Server 2012 R2
- macOS 10.12
See Platform Support for the full list of platforms and architectures supported in MongoDB 4.4.
Mongo Shell
Mongo Shell Supports AWS IAM Credentials for Atlas Clusters
Starting in MongoDB 4.4, the mongo
shell supports using AWS IAM credentials to authenticate to a MongoDB Atlas
cluster that has been configured for AWS IAM authentication.
Authenticating in this manner uses the new MONGODB-AWS
authentication mechanism
, and requires that you provide an AWS access key ID and a secret access key, which may be specified in the connection string or on the command-line via the --username
and --password
options.
Additionally, if you are using an AWS session token for authentication with temporary credentials when using an AssumeRole
request, or when working with AWS resources that specify this value such as Lambda, you may provide that session token in the connection string using the
AWS_SESSION_TOKEN
authMechanismProperties
value, or on the command-line via the --awsIamSessionToken
option.
Alternatively, if the AWS access key ID, secret access key, or session token are defined on your platform using their respective AWS IAM environment variables the
mongo
shell uses these environment variable values to authenticate; you do not need to specify them in the connection string.
See Connection String Authentication Options for usage, and Connecting to an Atlas Cluster using MONGODB-AWS for examples.
Tools
Migration to MongoDB Database Tools Project
Starting in MongoDB 4.4, the documentation for the following tools has been migrated to the MongoDB Database Tools project:
The MongoDB Database Tools use the Apache License, Version 2.0. See mongodb/mongo-tools for the source code.
For documentation on previous versions of the listed tools, reference that version of the MongoDB server manual.
New mongokerberos
Kerberos Validation Tool
MongoDB Enterprise 4.4 provides a new mongokerberos
tool for validating your platform's Kerberos configuration for use with MongoDB, and for testing end-to-end client authentication through Kerberos. When run, mongokerberos
returns a report indicating any issues encountered, and provides potential advice for resolving them. mongokerberos
is available in MongoDB Enterprise only.
See the mongokerberos
reference page for more information.
mongoreplay
Removed from MongoDB Packaging
Starting in MongoDB 4.4, mongoreplay
is removed from MongoDB packaging. mongoreplay
and its related documentation are migrated to the mongodb-labs github project. Projects in
mongodb-labs
are experimental and not officially supported by MongoDB.
MongoDB Database Tools Not Packaged with Windows MSI
Starting in version 4.4, the Windows MSI installer for both Community and Enterprise editions does not include the MongoDB Database Tools (mongoimport
, mongoexport
, etc). To download and install the MongoDB Database Tools on Windows, see Installing the MongoDB Database Tools.
If you were relying on the MongoDB 4.2 or previous MSI installer to install the Database Tools along with the MongoDB Server, you must now download the Database Tools separately.
Drivers
New Drivers
- The official MongoDB Rust driver is now available.
- The official MongoDB Swift driver is now available.
Indexes
Compound Hashed Indexes
MongoDB 4.4 adds support for creating compound indexes with a single hashed field. MongoDB 4.2 and earlier only supported single field hashed indexes.
The following operation creates a compound hashed index on country
and _id
:
db.examples.createIndex( { "country" : 1, "_id" : "hashed" } )
Compound hashed indexes require featureCompatibilityVersion set to 4.4
.
Hidden Indexes
Starting in version 4.4, MongoDB adds the ability to hide or unhide indexes from the query planner. An index hidden from the query planner is not evaluated as part of query plan selection.
By hiding an index from the planner, users can evaluate the potential impact of dropping an index without having to drop the index. If the impact is negative, the user can unhide the index instead of having to recreate a dropped index. And because indexes are fully maintained while hidden, hidden indexes are immediately available for use once unhidden.
For details, see Hidden Indexes.
To support hidden indexes, MongoDB introduces:
- A new index option hidden. The option can be specified in the following operations:
  - the createIndexes command
  - the db.collection.createIndex() and db.collection.createIndexes() methods
  - the collMod command
- New mongo shell helper methods, db.collection.hideIndex() and db.collection.unhideIndex(), to hide or unhide existing indexes (see the example after this list).
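For example (the orders collection and region field are hypothetical):
db.orders.createIndex( { region: 1 }, { hidden: true } )   // create an index that starts out hidden
db.orders.hideIndex( "region_1" )                          // hide an existing index by name
db.orders.unhideIndex( { region: 1 } )                     // unhide it again, by name or key pattern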
dropIndexes
Can Abort In-Progress Index Builds
If an index specified to dropIndexes
is still building, dropIndexes
attempts to abort the in-progress build. Aborting an index build has the same effect as dropping the built index. Prior to MongoDB 4.4, dropIndexes
would return an error if the collection had any in-progress index builds. This behavior also applies to the shell helpers db.collection.dropIndex()
and db.collection.dropIndexes()
.
- The indexes specified to
dropIndexes
/dropIndexes()
must be the entire set of in-progress builds associated to a given index builder, i.e. the indexes built by a singlecreateIndexes
ordb.collection.createIndexes()
operation. - The index specified to
dropIndex()
must be the only index associated to the index builder, i.e. the indexes built by a singlecreateIndexes
ordb.collection.createIndexes()
operation.
To drop a specific index out of a set of related in-progress builds, wait until the index builds complete and specify that index to dropIndexes
or its shell helpers.
For more complete documentation, see:
- Stop In-Progress Index Builds for the
dropIndexes
command. - Stop In-Progress Index Builds for the
dropIndexes()
method. - Stop In-Progress Index Builds for the
dropIndex()
method.
drop()
Can Abort In-Progress Index Builds
Starting in MongoDB 4.4, the db.collection.drop()
method and drop
command abort any in-progress index builds on the target collection before dropping the collection. Prior to MongoDB 4.4, attempting to drop a collection with in-progress index builds results in an error, and the collection is not dropped.
For replica sets or shard replica sets, aborting an index on the primary does not simultaneously abort secondary index builds. MongoDB attempts to abort the in-progress builds for the specified indexes on the primary and if successful creates an associated abort
oplog entry. Secondary members with replicated in-progress builds wait for a commit or abort oplog entry from the primary before either committing or aborting the index build.
dropDatabase
Can Abort In-Progress Index Builds
Starting in MongoDB 4.4, the db.dropDatabase()
method and dropDatabase
command abort any in-progress index builds on collections in the target database before dropping the database. Aborting an index build has the same effect as dropping the built index. Prior to MongoDB 4.4, attempting to drop a database that contains a collection with an in-progress index build results in an error, and the database is not dropped.
Deprecation of geoHaystack
Index and geoSearch
Command
MongoDB 4.4 deprecates the geoHaystack index and the geoSearch
command. Use a 2d index with $geoNear
or $geoWithin
instead.
Removed Commands
MongoDB removes the following command(s) and mongo
shell helper(s):
Removed Command | Removed Helper | Alternatives |
---|---|---|
cloneCollection | db.cloneCollection() | |
planCacheListPlans | PlanCache.getPlansByQuery() | See also $planCacheStats Changes. |
planCacheListQueryShapes | PlanCache.listQueryShapes() | See also $planCacheStats Changes. |
Networking
Support for TCP Fast Open
Starting with MongoDB 4.4, mongod and mongos support TCP Fast Open (TFO) connections by default. TFO requires that both the client and the mongod/mongos host machines support and enable TFO:
- Windows
  The following Windows operating systems support TFO:
  - Microsoft Windows Server 2016 or later.
  - Microsoft Windows 10 Update 1607 or later.
- macOS
  macOS 10.11 (El Capitan) and later support TFO.
- Linux
  Linux operating systems running Linux Kernel 3.7 or later can support inbound TFO connections. Linux operating systems running Linux Kernel 4.11 or later can support both inbound and outbound TFO connections.
  Set the value of /proc/sys/net/ipv4/tcp_fastopen to enable support for inbound and/or outbound TFO connections:
  - Set to 1 to enable only outbound TFO connections.
  - Set to 2 to enable only inbound TFO connections.
  - Set to 3 to enable both inbound and outbound TFO connections.
MongoDB 4.4 adds the following parameters for controlling TFO:
Parameter | Description |
---|---|
tcpFastOpenServer | Default: true (enabled). Enables or disables support for inbound TFO connections to the mongod/mongos. |
tcpFastOpenClient | Default: true (enabled). Linux operating systems only. Enables or disables support for outbound TFO connections from the mongod/mongos. |
tcpFastOpenQueueSize | Default: 1024. Controls the size of the queue of pending TFO connections. |
MongoDB 4.4 adds the following counters to the output of serverStatus and db.serverStatus():
Counter | Description |
---|---|
network.tcpFastOpen.kernelSetting | Linux only. Indicates kernel support for TFO. |
network.tcpFastOpen.serverSupported | Indicates operating system support for incoming TFO connections. |
network.tcpFastOpen.clientSupported | Indicates operating system support for outgoing TFO connections. |
network.tcpFastOpen.accepted | Indicates the total number of accepted incoming TFO connections to the mongod / mongos since the mongod/mongos last started. |
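For example, the following mongo shell sketch reads these counters from the serverStatus output (the kernelSetting field only appears on Linux hosts):

// Returns the TFO counters reported by serverStatus.
db.serverStatus().network.tcpFastOpen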
A complete discussion of TFO is outside the scope of this documentation. For more information on TFO, start with external resources such as RFC 7413 (TCP Fast Open).
General Improvements
Blocking Sort Limit Increased
If MongoDB cannot use an index or indexes to obtain the sort order for a given cursor.sort() operation, MongoDB must perform a blocking sort on the data. A blocking sort indicates that MongoDB must consume and process all input documents to the sort before returning results. Blocking sorts do not block concurrent operations on the collection or database.

Prior to MongoDB 4.4, MongoDB returned an error if a blocking sort operation required more than 32 megabytes of system memory. Starting in MongoDB 4.4, the memory limit for blocking sort operations increases to 100 megabytes. For blocking sort operations which require more than 100 megabytes of system memory, MongoDB returns an error unless the query specifies cursor.allowDiskUse() (new in MongoDB 4.4).
For more information on sorting and index use, see Sort and Index Use.
find Can Use Temporary Files To Support Large Non-Indexed Sorts
MongoDB 4.4 adds a new option, allowDiskUse, to the find command. With allowDiskUse: true, the operation can use temporary files on disk when processing a non-indexed ("blocking") sort operation that exceeds the 100 megabyte memory limit. Prior to MongoDB 4.4, a find operation with a blocking sort failed if it exceeded the memory limit while processing the sort.

For the db.collection.find() shell method with cursor.sort(), MongoDB 4.4 adds the cursor.allowDiskUse() cursor modifier.
allowDiskUse and cursor.allowDiskUse() have no effect if MongoDB can satisfy the sort using an index, or if the blocking sort requires less than 100 megabytes of memory.
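For example, a minimal mongo shell sketch (the movies collection and the unindexed runtime field are hypothetical) that lets a blocking sort spill to temporary files on disk:

// Hypothetical collection and field names. Without an index on "runtime",
// this is a blocking sort; allowDiskUse() permits temporary files on disk
// if the sort exceeds the 100 megabyte memory limit.
db.movies.find().sort( { runtime: -1 } ).allowDiskUse()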
For instructions on enabling allowDiskUse for queries issued through a MongoDB driver, refer to the documentation for your preferred MongoDB 4.4-compatible driver.
Collection Namespace Limit
Starting in MongoDB 4.4:

- For featureCompatibilityVersion set to "4.4" or greater, MongoDB raises the namespace limit to 255 bytes for unsharded collections and views, and to 235 bytes for sharded collections. For a collection or a view, the namespace includes the database name, the dot (.) separator, and the collection/view name (e.g. <database>.<collection>).
- For featureCompatibilityVersion set to "4.2" or earlier, the maximum namespace length remains 120 bytes for unsharded collections and views, and 100 bytes for sharded collections.
Validation Data Throughput Information
Starting in MongoDB 4.4:

- The $currentOp aggregation stage and the currentOp command include dataThroughputAverage and dataThroughputLastSecond information for validate operations in progress.
- The log messages for validate operations include dataThroughputAverage and dataThroughputLastSecond information.
compact Behavior Change

Blocking

Starting in MongoDB 4.4, compact only blocks the following metadata operations:

- db.collection.drop()
- db.collection.createIndex() and db.collection.createIndexes()
- db.collection.dropIndex() and db.collection.dropIndexes()

compact does not block MongoDB CRUD Operations for the database it is currently operating on.

Previously, compact blocked all operations for the database it was operating on, including MongoDB CRUD Operations, and was therefore only appropriate for use during scheduled maintenance periods.
force Option

Starting in MongoDB 4.4, the force flag forces compact to run on the primary in a replica set.

Previously, the force option, when set to true, enabled compact to run on the primary in a replica set; when set to false, compact returned an error if run on a primary.
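For example, a minimal sketch (the orders collection name is a placeholder) that runs compact on a replica set primary:

// Hypothetical collection name; force: true permits compact to run on a primary.
db.runCommand( { compact: "orders", force: true } )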
mongod --repair Behavior Change

Starting in MongoDB 4.4, the mongod --repair option rebuilds indexes only for the following:
- Collections with inconsistencies between the collection data and one or more indexes.
- Salvaged and modified collections.
In earlier versions of MongoDB, the mongod --repair option rebuilt all indexes for all collections.
serverStatus Output Change

Field Name Change

serverStatus returns flowControl.locksPerKiloOp instead of flowControl.locksPerOp.
New Fields

serverStatus includes the following new fields in its output:

- Aggregation Metrics
  - metrics.aggStageCounters (also available in 4.2.6+ and 4.0.19+)
- Connections Metrics
- Default Read Concern / Write Concern Metrics
  - defaultRWConcern
  - defaultRWConcern.defaultReadConcern
  - defaultRWConcern.defaultReadConcern.level
  - defaultRWConcern.defaultWriteConcern
  - defaultRWConcern.defaultWriteConcern.w
  - defaultRWConcern.defaultWriteConcern.wtimeout
  - defaultRWConcern.updateOpTime
  - defaultRWConcern.updateWallClockTime
  - defaultRWConcern.localUpdateWallClockTime
  - metrics.getLastError.default
  - metrics.getLastError.default.unsatisfiable
  - metrics.getLastError.default.wtimeouts
- Latch Metrics
- Mirrored Reads Metrics
- Query Execution Metrics
- Replication Metrics
  - metrics.repl.network.replSetUpdatePosition.num
  - metrics.repl.network.getmores.numEmptyBatches
  - metrics.repl.network.oplogGetMoresProcessed
  - metrics.repl.network.oplogGetMoresProcessed.num
  - metrics.repl.network.oplogGetMoresProcessed.totalMillis
  - metrics.repl.syncSource.numSelections
  - metrics.repl.syncSource.numTimesChoseSame
  - metrics.repl.syncSource.numTimesChoseDifferent
  - metrics.repl.syncSource.numTimesCouldNotFind
- Network Metrics
- Security Metrics
- Sharding Metrics
serverStatus Sharding Statistics Output Change

shardingStatistics.numHostsTargeted reports the number of shards targeted by CRUD operations and aggregation commands. Each operation on a cluster increments the relevant find, insert, update, delete, or aggregate metric.
replSetGetStatus Output Change

replSetGetStatus returns the following new fields:

- tooStale
- votingMembersCount
- writableVotingMembersCount
- initialSyncStatus.syncSourceUnreachableSince
- initialSyncStatus.currentOutageDurationMillis
- initialSyncStatus.totalTimeUnreachableMillis
- initialSyncAttempts.rollBackId
- initialSyncAttempts.operationsRetried
- initialSyncAttempts.totalTimeUnreachableMillis
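For example, a mongo shell sketch that reads the new voting-member counters (run against a replica set member):

// replSetGetStatus output; the new counters appear at the top level of the result.
var status = db.adminCommand( { replSetGetStatus: 1 } )
printjson( { voting: status.votingMembersCount, writableVoting: status.writableVotingMembersCount } )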
db.auth() Can Prompt for Password
Starting in MongoDB 4.4, the mongo shell method db.auth(<username>, <password>) prompts for the password if you either omit the password or pass in the passwordPrompt() method for the <password>.
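For example, both of the following forms (the username is a placeholder) cause the shell to prompt for the password rather than accepting it in plain text:

// "appUser" is a hypothetical username; both forms prompt for the password.
db.auth( "appUser" )
db.auth( "appUser", passwordPrompt() )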
Support for $natural Sort on Views

Starting in MongoDB 4.4, you can specify a $natural sort when running a find operation against a view.
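For example, a minimal sketch (salesView is a hypothetical view name):

// Returns documents from the view in reverse natural order.
db.salesView.find().sort( { $natural: -1 } )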
Support for Diagnostic Backtrace Generation
Starting in MongoDB 4.4 running on Linux:
- When the mongod and mongos processes receive a SIGUSR2 signal, backtrace details are added to the logs for each process thread.
- Backtrace details show the function calls for the process, which can be used for diagnostics and provided to MongoDB Support if required.

The backtrace functionality is available for these architectures:

- x86_64
- arm64 (starting in MongoDB 4.4.15, 5.0.10, and 6.0)
For more information, see Generate a Backtrace.
Container-aware FTDC Reporting
Starting in MongoDB 4.4, FTDC now reports utilization data for a mongod running in a container from the perspective of the container, as opposed to the host operating system. See Full Time Diagnostic Data Capture for more information.
Updated ulimit Startup Warning

Starting in MongoDB 4.4, mongod logs a startup warning if a platform's configured ulimit value for the number of open files is under 64000. Previously, a warning was only logged if this value was under 1000. See Recommended ulimit Settings for more information.
New replanReason Database Profiler Output Field

MongoDB 4.4 adds the replanReason field to database profiler output and diagnostic log messages. The replanReason field contains the reason the query system evicted a cached plan.
dbStats and collStats Output

The dbStats command and its mongo shell helper db.stats() return:

- totalSize, which is the sum of storageSize and indexSize.

The collStats command, its mongo shell helper db.collection.stats(), and the $collStats aggregation stage return:

- totalSize, which is the sum of storageSize and totalIndexSize.
- freeStorageSize, which is the amount of storage available for reuse.
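For example, the new fields can be read directly from the shell helpers (the inventory collection name is a placeholder):

// Database-level total size (storageSize + indexSize).
db.stats().totalSize
// Collection-level storage available for reuse; "inventory" is hypothetical.
db.inventory.stats().freeStorageSize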
Hint Available for Additional Database Commands
Starting in MongoDB 4.4, the following database commands can accept a hint argument to specify the index to use:

- The delete command and the associated mongo shell methods db.collection.deleteOne() and db.collection.deleteMany().
- The findAndModify command and the associated mongo shell methods db.collection.findAndModify(), db.collection.findOneAndDelete(), db.collection.findOneAndReplace(), and db.collection.findOneAndUpdate().
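For example, a sketch (collection, filter, and index names are hypothetical) of a delete that names the index it should use:

// Hypothetical names: the "orders" collection has a { status: 1 } index.
db.orders.deleteMany(
   { status: "D" },
   { hint: { status: 1 } }
)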
JavaScript Execution on mongos

Starting in MongoDB 4.4, MongoDB allows JavaScript execution on mongos instances. To disable JavaScript execution on a mongos instance:

- Set the security.javascriptEnabled configuration option to false, or
- Specify the --noscripting command-line option.

Earlier versions of MongoDB do not allow JavaScript execution on mongos instances.
Global Default Read and Write Concern
Requires featureCompatibilityVersion 4.4+

Each mongod in the replica set or sharded cluster must have featureCompatibilityVersion set to at least 4.4 to configure global default read and write concern settings.

Starting in MongoDB 4.4, replica sets and sharded clusters support configuring global default read and write concern settings. Clients which do not explicitly specify a read or write concern setting inherit the corresponding global default setting.

To configure the global default read or write concern, MongoDB adds the setDefaultRWConcern administrative command. For replica sets, issue the command against the primary member. For sharded clusters, issue the command from a mongos.

To retrieve the global default read or write concern settings, MongoDB adds the getDefaultRWConcern administrative command.
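For example, a mongo shell sketch that sets and then reads a cluster-wide default write concern (issue against the primary, or against a mongos for sharded clusters):

// Set a global default write concern of "majority".
db.adminCommand( {
   setDefaultRWConcern: 1,
   defaultWriteConcern: { w: "majority" }
} )

// Retrieve the current global default read and write concern settings.
db.adminCommand( { getDefaultRWConcern: 1 } )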
Read Concern Provenance
Starting in MongoDB 4.4, read concern objects may include a provenance field, indicating where the read concern originated.

The following table shows the possible read concern provenance values and their significance:

Provenance | Description |
---|---|
clientSupplied | The read concern was specified in the application. |
customDefault | The read concern originated from a custom defined default value. See setDefaultRWConcern. |
implicitDefault | The read concern originated from the server in absence of all other read concern specifications. |

If a read operation is logged or profiled, the operation entry contains the read concern object, including the provenance field.

MongoDB does not recommend specifying the provenance field in requests to the server. This field should only be used for diagnostic purposes.
Write Concern Provenance
Starting in MongoDB 4.4, write concern objects may include a provenance field, indicating where the write concern originated.

The following table shows the possible write concern provenance values and their significance:

Provenance | Description |
---|---|
clientSupplied | The write concern was specified in the application. |
customDefault | The write concern originated from a custom defined default value. See setDefaultRWConcern. |
getLastErrorDefaults | The write concern originated from the replica set's settings.getLastErrorDefaults field. |
implicitDefault | The write concern originated from the server in absence of all other write concern specifications. |

If a write operation is logged or profiled, the operation entry contains the write concern object, including the provenance field.

MongoDB does not recommend specifying the provenance field in requests to the server. This field should only be used for diagnostic purposes.
currentOp Output

- The $currentOp aggregation stage includes dataThroughputAverage and dataThroughputLastSecond information when reporting on validate operations in progress.
- The currentOp command includes dataThroughputAverage and dataThroughputLastSecond information when reporting on validate operations in progress.
New KMIP Connection Parameters for mongod

MongoDB 4.4 Enterprise introduces two new configuration settings that control the initial connection from mongod to a KMIP server used for encryption key management:

Connection Retries

To control the number of times the mongod retries a failed initial connection to the KMIP server:

- Set the security.kmip.connectRetries configuration option, or
- Specify the mongod --kmipConnectRetries command-line option.

Connection Timeout

To control the timeout, in milliseconds, to wait for the initial response from the KMIP server before giving up or retrying:

- Set the security.kmip.connectTimeoutMS configuration option, or
- Specify the mongod --kmipConnectTimeoutMS command-line option.

These settings are available in MongoDB Enterprise only.
New Startup Option for mongod

The new processUmask startup option for mongod allows you to set permissions through umask for groups and other users when honorSystemUmask is set to false.
mapReduce Ignores verbose Option

Starting with MongoDB 4.4, the mapReduce command and the db.collection.mapReduce() shell method ignore the verbose option.
explain Support for mapReduce

Starting with MongoDB 4.4, you can use the explain command or the db.collection.explain() shell method to return execution information for mapReduce or db.collection.mapReduce().
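For example, a sketch (the orders collection and the map/reduce functions are hypothetical):

// Hypothetical map and reduce functions over an "orders" collection.
var mapFn = function() { emit( this.status, 1 ); };
var reduceFn = function( key, values ) { return Array.sum( values ); };

// Returns execution information instead of running the map-reduce.
db.orders.explain().mapReduce( mapFn, reduceFn, { out: { inline: 1 } } )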
Improvements to explain Results

Starting in version 4.4:

- Explain results for commands run on sharded clusters include a top-level serverInfo object for the mongos in addition to the serverInfo objects returned for each shard. This is also available in versions 4.2.2, 4.0.14, and 3.6.16.
- Explain results include the serverInfo object when optimizedPipeline is true. In previous versions of MongoDB, explain results would occasionally not include the serverInfo object when optimizedPipeline was true. This is also available in versions 4.2.2, 4.0.14, and 3.6.16.
- Explain results for aggregation include the nReturned and executionTimeMillisEstimate fields for each pipeline stage when you run the db.collection.explain().aggregate() method in executionStats and allPlansExecution modes.
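For example, running an aggregation explain in executionStats mode (the orders collection and pipeline are hypothetical) returns the per-stage nReturned and executionTimeMillisEstimate fields:

// Hypothetical collection and pipeline.
db.orders.explain( "executionStats" ).aggregate( [
   { $match: { status: "A" } },
   { $group: { _id: "$cust_id", total: { $sum: "$amount" } } }
] )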
comment Option Available to All Database Commands

Starting in MongoDB 4.4, all database commands support specifying a comment field, in the following fashion:
Example
db.runCommand( { <command> , "comment" : <any BSON type> })
Once set, the comment appears alongside records of the command in the following locations:

- mongod log messages, in the attr.command.cursor.comment field.
- Database profiler output, in the command.comment field.
- currentOp output, in the command.comment field.

A comment can be any valid BSON type (string, integer, object, array, etc.).
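For example, a concrete sketch (the collection name and comment string are arbitrary):

// The comment appears in the log, profiler, and currentOp output for this operation.
db.runCommand( { find: "orders", filter: { status: "A" }, comment: "inventory-audit" } )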
Improvements to Positional $ Operator

Starting in MongoDB 4.4, when using the positional $ operator, you can specify different array fields between the query document and the projection document.
For example, if you insert the following document into a collection:
db.foo.insertOne( { a: [ "one", "two", "three" ], b: [ 1, 2, 3 ] } )
Starting in MongoDB 4.4, you can use the following query to project only the first element from field b for a document that matches the query specified on field a:
db.foo.findOne( { a: "one" }, { "b.$": 1 } )
To ensure expected behavior, the arrays used in the query document and the projection document must be the same length. If the arrays are different lengths, the operation may error in certain scenarios.
In previous versions of MongoDB, this operation fails because the array field being limited must appear in the query document.
Changes Affecting Compatibility
Some changes can affect compatibility and may require user actions. For a detailed list of compatibility changes, see Compatibility Changes in MongoDB 4.4.
Upgrade Procedures
Feature Compatibility Version
To upgrade from a 4.2 deployment, the 4.2 deployment must have featureCompatibilityVersion set to 4.2. To check the version:
db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
For specific details on verifying and setting the featureCompatibilityVersion, as well as information on other prerequisites and considerations for upgrades, refer to the individual upgrade instructions.
If you need guidance on upgrading to 4.4, MongoDB professional services offer major version upgrade support to help ensure a smooth transition without interruption to your MongoDB application.
Downgrade Consideration
MongoDB only supports single-version downgrades. You cannot downgrade to a release that is multiple versions behind your current release.
For example, you may downgrade a 4.4-series to a 4.2-series deployment. However, further downgrading that 4.2-series deployment to a 4.0-series deployment is not supported.
Download
To download MongoDB 4.4, go to the MongoDB Download Center.
Known Issues
In Version | Issues | Status |
---|---|---|
4.4.0 | SERVER-45042 | Unresolved |
4.4.0 | SERVER-49694: nearest reads or hedged reads may not be routed to a near shard replica. | Fixed in 4.4.1 |
4.4.0 | WT-6623 | Fixed in 4.4.1 |
Report an Issue
To report an issue, see https://github.com/mongodb/mongo/wiki/Submit-Bug-Reports for instructions on how to file a JIRA ticket for the MongoDB server or one of the related projects.