Before you attempt any downgrade, familiarize yourself with the content of this document.
Before you upgrade or downgrade a replica set, ensure all replica set members are running. If you do not, the upgrade or downgrade will not complete until all members are started.
Once upgraded to 4.2, if you need to downgrade, we recommend downgrading to the latest patch release of 4.0.
Starting in MongoDB 4.2, change streams are available regardless of "majority" read concern support; that is, read concern majority support can be either enabled (default) or disabled. In MongoDB 4.0 and earlier, change streams are available only if "majority" read concern support is enabled (default). If you downgrade to the 4.0 series, change streams will be disabled if you have disabled read concern "majority".
Optional but Recommended. Create a backup of your database.
If your sharded cluster has access control enabled, your downgrade user privileges must include additional privileges to manage indexes on the config database.

db.getSiblingDB("admin").createRole(
   {
     role: "configIndexRole",
     privileges: [
       {
         resource: { db: "config", collection: "" },
         actions: [ "find", "dropIndex", "createIndex", "listIndexes" ]
       }
     ],
     roles: []
   }
);
Add the newly created role to your downgrade user. For example, if you have a user myDowngradeUser in the admin database that already has the root role, use db.grantRolesToUser() to grant the additional role:

db.getSiblingDB("admin").grantRolesToUser(
   "myDowngradeUser",
   [ { role: "configIndexRole", db: "admin" } ],
   { w: "majority", wtimeout: 4000 }
);
To downgrade from 4.2 to 4.0, you must remove incompatible features that are persisted and/or update incompatible configuration settings. These include:
To downgrade the featureCompatibilityVersion of your sharded cluster:

1. Connect a mongo shell to the mongos instance.

2. Downgrade the featureCompatibilityVersion to "4.0":

   db.adminCommand({setFeatureCompatibilityVersion: "4.0"})

   The setFeatureCompatibilityVersion command performs writes to an internal system collection and is idempotent. If for any reason the command does not complete successfully, retry the command on the mongos instance.

To ensure that all members of the sharded cluster reflect the updated featureCompatibilityVersion, connect to each shard replica set member and each config server replica set member and check the featureCompatibilityVersion:
For a sharded cluster that has access control enabled, to run the following command against a shard replica set member, you must connect to the member as a shard local user.
db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
All members should return a result that includes:
"featureCompatibilityVersion" : { "version" : "4.0" }
If any member returns a featureCompatibilityVersion of "4.2", wait for the member to reflect version "4.0" before proceeding.

For more information on the returned featureCompatibilityVersion value, see View FeatureCompatibilityVersion.
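As a rough illustration only (not part of the official procedure), the per-member results could be collected and checked with a small helper like the following; the shape of the results map and the host names are assumptions for the sketch:

```javascript
// Sketch: given a map of member host -> the document returned by
// db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1}),
// list the members that have not yet reached the wanted fCV.
function membersNotAtFcv(results, wanted) {
  return Object.keys(results).filter(function (host) {
    const doc = results[host].featureCompatibilityVersion;
    return !doc || doc.version !== wanted;
  });
}

// Hypothetical results gathered from three members:
const results = {
  "shard0-a:27018": { featureCompatibilityVersion: { version: "4.0" } },
  "shard0-b:27018": { featureCompatibilityVersion: { version: "4.2" } },
  "cfg-a:27019":    { featureCompatibilityVersion: { version: "4.0" } }
};

console.log(membersNotAtFcv(results, "4.0")); // members still to wait for
```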
The following steps are necessary only if fCV has ever been set to "4.2".
Remove all persisted 4.2 features that are incompatible with 4.0. These include:
Starting in MongoDB 4.2, for featureCompatibilityVersion (fCV) set to "4.2" or greater, MongoDB removes the Index Key Limit. For fCV set to "4.0", the limit still applies.

If you have an index with keys that exceed the Index Key Limit once fCV is set to "4.0", consider changing the index to a hashed index or to indexing a computed value. You can also temporarily set failIndexKeyTooLong to false before resolving the problem. However, with failIndexKeyTooLong set to false, queries that use these indexes can return incomplete results.
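In MongoDB 4.0, the total size of an index entry must be less than 1024 bytes. As a rough, unofficial pre-check for string-valued keys (real BSON index entries include per-field overhead, so this is only an approximation):

```javascript
// Sketch: estimate whether a string index key value would exceed the
// MongoDB 4.0 index entry limit of 1024 bytes. BSON entries carry extra
// per-field overhead, so treat this as a rough approximation only.
const LIMIT_BYTES = 1024;

function stringKeyTooLongFor40(value) {
  return Buffer.byteLength(value, "utf8") >= LIMIT_BYTES;
}

stringKeyTooLongFor40("short");           // false
stringKeyTooLongFor40("x".repeat(2000));  // true
```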
Starting in MongoDB 4.2, for featureCompatibilityVersion (fCV) set to "4.2" or greater, MongoDB removes the Index Name Length limit. For fCV set to "4.0", the limit still applies.

If you have an index with a name that exceeds the Index Name Length limit once fCV is set to "4.0", drop and recreate the index with a shorter name.
db.collection.dropIndex( <name | index specification> )
db.collection.createIndex( { <index specification> }, { name: <shorter name> } )
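In MongoDB 4.0 and earlier, the fully qualified index name (<database>.<collection>.$<index name>) cannot exceed 127 bytes. A small helper like the following sketch (illustrative, with hypothetical names) can pre-check candidates:

```javascript
// Sketch: check whether a fully qualified index name would exceed the
// 127-byte limit enforced by MongoDB 4.0 and earlier.
function indexNameTooLongFor40(dbName, collName, indexName) {
  const fullName = dbName + "." + collName + ".$" + indexName;
  return Buffer.byteLength(fullName, "utf8") > 127;
}

indexNameTooLongFor40("app", "orders", "status_1");      // false
indexNameTooLongFor40("app", "orders", "f".repeat(130)); // true
```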
With featureCompatibilityVersion (fCV) "4.2", MongoDB uses a new internal format for unique indexes that is incompatible with MongoDB 4.0. The new internal format applies to both existing unique indexes as well as newly created/rebuilt unique indexes.

If fCV has ever been set to "4.2", run the following script against the mongos instance to drop and recreate all unique indexes:
// A script to rebuild unique indexes after downgrading fcv 4.2 to 4.0.
// Run this script to drop and recreate unique indexes
// for backwards compatibility with 4.0.
db.adminCommand("listDatabases").databases.forEach(function(d){
   let mdb = db.getSiblingDB(d.name);
   mdb.getCollectionInfos( { type: "collection" } ).forEach(function(c){
      let currentCollection = mdb.getCollection(c.name);
      currentCollection.getIndexes().forEach(function(idx){
         if (idx.unique){
            print("Dropping and recreating the following index: " + tojson(idx));
            assert.commandWorked(mdb.runCommand({ dropIndexes: c.name, index: idx.name }));
            let res = mdb.runCommand({ createIndexes: c.name, indexes: [idx] });
            if (res.ok !== 1)
               assert.commandWorked(res);
         }
      });
   });
});
In addition to running the script on mongos, you need to check individual shards if you have created shard local users. That is, if you created maintenance users directly on the shards instead of through mongos, run the following script on the primary member of each such shard:

// A script to rebuild unique indexes after downgrading fcv 4.2 to 4.0.
// Run this script on shards to drop and recreate unique indexes
// for backwards compatibility with 4.0.
let mdb = db.getSiblingDB('admin');
mdb.getCollectionInfos( { type: "collection" } ).forEach(function(c){
   let currentCollection = mdb.getCollection(c.name);
   currentCollection.getIndexes().forEach(function(idx){
      if (idx.unique){
         print("Dropping and recreating the following index: " + tojson(idx));
         assert.commandWorked(mdb.runCommand({ dropIndexes: c.name, index: idx.name }));
         let res = mdb.runCommand({ createIndexes: c.name, indexes: [idx] });
         if (res.ok !== 1)
            assert.commandWorked(res);
      }
   });
});
user_1_db_1 System Unique Index
In addition, if you have enabled access control, you must also remove the system unique index user_1_db_1 on the admin.system.users collection.

If fCV has ever been set to "4.2", use the following command to drop the user_1_db_1 system unique index:
db.getSiblingDB("admin").getCollection("system.users").dropIndex("user_1_db_1")
The user_1_db_1 index will automatically be rebuilt when starting the server with the 4.0 binary in the procedure below.
For featureCompatibilityVersion (fCV) set to "4.2", MongoDB supports creating Wildcard Indexes. You must drop all wildcard indexes before downgrading to fCV "4.0".

Use the following script to drop all wildcard indexes:
// A script to drop wildcard indexes before downgrading fcv 4.2 to 4.0.
// Run this script to drop wildcard indexes
// for backwards compatibility with 4.0.
db.adminCommand("listDatabases").databases.forEach(function(d){
   let mdb = db.getSiblingDB(d.name);
   mdb.getCollectionInfos({ type: "collection" }).forEach(function(c){
      let currentCollection = mdb.getCollection(c.name);
      currentCollection.getIndexes().forEach(function(idx){
         var key = Object.keys(idx.key);
         if (key[0].includes("$**")) {
            print("Dropping index: " + idx.name + " from " + idx.ns);
            // dropIndexes takes the collection name, not the collection object
            let res = mdb.runCommand({ dropIndexes: c.name, index: idx.name });
            assert.commandWorked(res);
         }
      });
   });
});
Downgrading to fCV "4.0" during an in-progress wildcard index build does not automatically drop or kill the index build. The index build can complete after downgrading to fCV "4.0", resulting in a valid wildcard index on the collection. Starting the 4.0 binary against that data directory will result in startup failures.

Use db.currentOp() to check for any in-progress wildcard index builds. Once any in-progress wildcard index builds complete, run the script to drop them before downgrading to fCV "4.0".
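As an unofficial sketch of what that check looks for, the following helper filters an array of operation documents for wildcard index builds; the exact db.currentOp() output shape can vary by version, so the field names here are assumptions to verify against your own output:

```javascript
// Sketch: given an array of operation documents shaped like db.currentOp()
// entries for index builds (command.createIndexes with an indexes array),
// return the in-progress wildcard index builds.
function inProgressWildcardBuilds(ops) {
  return ops.filter(function (op) {
    const cmd = op.command || {};
    return cmd.createIndexes &&
      (cmd.indexes || []).some(function (idx) {
        return Object.keys(idx.key || {}).some(function (k) {
          return k.includes("$**");
        });
      });
  });
}

// Hypothetical currentOp entries:
const ops = [
  { command: { createIndexes: "events",
               indexes: [ { key: { "payload.$**": 1 }, name: "payload_wild" } ] } },
  { command: { find: "events" } }
];
console.log(inProgressWildcardBuilds(ops).length); // 1
```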
Before downgrading the binaries, modify read-only view definitions and collection validation definitions that include the 4.2 operators, such as $set, $unset, and $replaceWith:

- For a $set stage, use the $addFields stage instead.
- For a $replaceWith stage, use the $replaceRoot stage instead.
- For a $unset stage, use the $project stage instead.

You can modify a view either by:

- dropping the view (db.myview.drop() method) and recreating the view (db.createView() method), or
- using the collMod command.

You can modify the collection validation expressions by using the collMod command.

tls-Prefixed Configuration
Starting in MongoDB 4.2, MongoDB adds "tls"-prefixed options as aliases for the "ssl"-prefixed options.
If your deployments or clients use the "tls"-prefixed options, replace them with the corresponding "ssl"-prefixed options for the mongod, the mongos, and the mongo shell and drivers.
zstd Compression
zstd Journal Compression
The zstd compression library is available for journal data compression starting in version 4.2.
For any shard or config server member that uses the zstd library for its journal compressor:

- If the member uses zstd for journal compression and zstd Data Compression, update storage.wiredTiger.engineConfig.journalCompressor to use the default compressor (snappy) or set it to another 4.0-supported compressor as part of the zstd Data Compression steps.
- If the member uses zstd for journal compression only, follow the procedure below.

The following procedure involves restarting the replica member as a standalone without the journal.
Perform a clean shutdown of the mongod
instance:
db.getSiblingDB('admin').shutdownServer()
Update the configuration file to prepare to restart as a standalone:

- Set storage.journal.enabled to false.
- Set skipShardingConfigurationChecks to true and disableLogicalSessionCacheRefresh to true in the setParameter section.
- Comment out the replication.replSetName and sharding.clusterRole settings.
- Set net.port to the member's current port, if it is not explicitly set.

For example:

storage:
   journal:
      enabled: false
setParameter:
   skipShardingConfigurationChecks: true
   disableLogicalSessionCacheRefresh: true
#replication:
#   replSetName: shardA
#sharding:
#   clusterRole: shardsvr
net:
   port: 27218
If you use command-line options instead of a configuration file, you will have to update the command-line option during the restart.
Restart the mongod
instance:
If you are using a configuration file:
mongod -f <path/to/myconfig.conf>
If you are using command-line options instead of a configuration file:

- Include the --nojournal option.
- Set skipShardingConfigurationChecks to true and disableLogicalSessionCacheRefresh to true in the --setParameter option.
- Remove the replica set option (--replSet).
- Remove the --shardsvr/--configsvr option.
- Include --port set to the instance's current port.

mongod --nojournal --setParameter skipShardingConfigurationChecks=true --setParameter disableLogicalSessionCacheRefresh=true --port <samePort> ...
Perform a clean shutdown of the mongod
instance:
db.getSiblingDB('admin').shutdownServer()
Confirm that the process is no longer running.
Update the configuration file to prepare to restart with the new journal compressor:

- Remove the storage.journal.enabled setting.
- Remove the skipShardingConfigurationChecks parameter setting.
- Remove the disableLogicalSessionCacheRefresh parameter setting.
- Uncomment the replication.replSetName and sharding.clusterRole settings.
- Update the storage.wiredTiger.engineConfig.journalCompressor setting to use the default journal compressor or specify a new value.

For example:

storage:
   wiredTiger:
      engineConfig:
         journalCompressor: <newValue>
replication:
   replSetName: shardA
sharding:
   clusterRole: shardsvr
net:
   port: 27218
If you use command-line options instead of a configuration file, you will have to update the command-line options during the restart below.
Restart the mongod
instance as a replica set member:
If you are using a configuration file:
mongod -f <path/to/myconfig.conf>
If you are using command-line options instead of a configuration file:
- Remove the --nojournal option.
- Remove the skipShardingConfigurationChecks parameter setting.
- Remove the disableLogicalSessionCacheRefresh parameter setting.
- Update the --wiredTigerJournalCompressor command-line option to use the default journal compressor or update to a new value.
- Include the appropriate --shardsvr/--configsvr option.

mongod --shardsvr --wiredTigerJournalCompressor <differentCompressor|none> --replSet ...
zstd Data Compression
If you also use zstd Journal Compression, perform these steps after you perform the prerequisite steps for the journal compressor.
The zstd compression library is available starting in version 4.2. For any config server member or shard member that has data stored using zstd compression, the downgrade procedure will require an initial sync for that member. To prepare:
Create a new empty data directory for the mongod instance. This directory will be used in the downgrade procedure below.

Ensure that the user account running mongod has read and write permissions for the new directory.
If you use a configuration file, update the file to prepare for the downgrade procedure:

- Update storage.wiredTiger.collectionConfig.blockCompressor to use the default compressor (snappy) or set to another 4.0 supported compressor.
- Update storage.dbPath to the new data directory.

Repeat for any other members that used zstd compression.
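As an illustration of this step, the updated settings in a member's configuration file might look like the following sketch; the dbPath value is hypothetical:

```yaml
storage:
   dbPath: /data/db-new            # hypothetical new, empty data directory
   wiredTiger:
      collectionConfig:
         blockCompressor: snappy   # default 4.0-supported compressor
```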
zstd Network Compression
The zstd compression library is available for network message compression starting in version 4.2.

To prepare for the downgrade:

- For any mongod/mongos instance that uses zstd for network message compression and uses a configuration file, update the net.compression.compressors setting to prepare for the restart during the downgrade procedure.
- For any driver connection that specifies zstd in its URI connection string, update to remove zstd from the list.
- For any mongo shell that specifies zstd in its --networkMessageCompressors, update to remove zstd from the list.

Messages are compressed when both parties enable network compression. Otherwise, messages between the parties are uncompressed.
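As an unofficial sketch of the URI update, the following helper strips zstd from the compressors option of a connection string; the URI shown is hypothetical:

```javascript
// Sketch: remove zstd from the compressors option of a MongoDB URI
// connection string before downgrading to 4.0.
function dropZstdCompressor(uri) {
  return uri.replace(/compressors=([^&]*)/, function (_, list) {
    const kept = list.split(",").filter(function (c) { return c !== "zstd"; });
    return "compressors=" + kept.join(",");
  });
}

const uri = "mongodb://db0.example.net:27017/?compressors=zstd,snappy,zlib";
console.log(dropZstdCompressor(uri));
// mongodb://db0.example.net:27017/?compressors=snappy,zlib
```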
Remove client-side field level encryption code in applications prior to downgrading the server.
MongoDB 4.2 adds support for enforcing client-side field level encryption as part of a collection's JSON Schema document validation. Specifically, the $jsonSchema object supports the encrypt and encryptMetadata keywords. MongoDB 4.0 does not support these keywords and fails to start if any collection specifies those keywords as part of its validation $jsonSchema.
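A rough, unofficial helper like the following can scan a validator's $jsonSchema object for these keywords; the sample schema and field names are hypothetical:

```javascript
// Sketch: recursively check whether a $jsonSchema object uses the 4.2-only
// "encrypt" or "encryptMetadata" keywords, which MongoDB 4.0 rejects.
function usesEncryptionKeywords(schema) {
  if (schema === null || typeof schema !== "object") return false;
  return Object.keys(schema).some(function (k) {
    if (k === "encrypt" || k === "encryptMetadata") return true;
    return usesEncryptionKeywords(schema[k]);
  });
}

// Hypothetical schema with an encrypted field:
const encryptedSchema = {
  bsonType: "object",
  properties: {
    ssn: { encrypt: { bsonType: "string",
                      algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic" } }
  }
};
console.log(usesEncryptionKeywords(encryptedSchema)); // true
```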
Use db.getCollectionInfos()
on each database to identify collections specifying automatic field level encryption rules as part of the $jsonSchema
validator. To prepare for downgrade, connect to a cluster mongos
and perform either of the following operations for each collection using the 4.0-incompatible keywords:
Use collMod
to modify the collection's validator
and replace the $jsonSchema
with a schema that contains only 4.0-compatible document validation syntax:
db.runCommand({ "collMod" : "<collection>", "validator" : { "$jsonSchema" : { <4.0-compatible schema object> } } })
-or-
Use collMod
to remove the validator
object entirely:
db.runCommand({ "collMod" : "<collection>", "validator" : {} })
Before proceeding with the downgrade procedure, ensure that all members, including delayed replica set members in the sharded cluster, reflect the prerequisite changes. That is, check the featureCompatibilityVersion
and the removal of incompatible features for each node before downgrading.
Using either a package manager or a manual download, get the latest release in the 4.0 series. If using a package manager, add a new repository for the 4.0 binaries, then perform the actual downgrade process.
Connect a mongo
shell to a mongos
instance in the sharded cluster, and run sh.stopBalancer()
to disable the balancer:
sh.stopBalancer()
If a migration is in progress, the system will complete the in-progress migration before stopping the balancer. You can run sh.isBalancerRunning()
to check the balancer's current state.
To verify that the balancer is disabled, run sh.getBalancerState()
, which returns false if the balancer is disabled:
sh.getBalancerState()
Starting in MongoDB 4.2, sh.stopBalancer()
also disables auto-splitting for the sharded cluster.
For more information on disabling the balancer, see Disable the Balancer.
Downgrade the mongos instances.

Downgrade the binaries and restart.

If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.

- If the mongos command-line options include "tls"-prefixed options, update to "ssl"-prefixed options.
- If the mongos instance included zstd network message compression, remove --networkMessageCompressors to use the default snappy,zlib compressors. Alternatively, specify the list of compressors to use.

Downgrade the shards one at a time.
Downgrade the shard's secondary members one at a time:
Shut down the mongod
instance.
db.adminCommand( { shutdown: 1 } )
Replace the 4.2 binary with the 4.0 binary and restart.
If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.
If the mongod instance used zstd data compression:

- Update --dbpath to the new directory (created during the prerequisites).
- Update --wiredTigerCollectionBlockCompressor to use the default snappy compressor (or, alternatively, explicitly set to a 4.0 supported compressor).

If the mongod instance used zstd journal compression:

- Update --wiredTigerJournalCompressor to use the default snappy compressor (or, alternatively, explicitly set to a 4.0 supported compressor).

If the mongod instance included zstd network message compression:

- Update --networkMessageCompressors to enable message compression using the default snappy,zlib compressors. Alternatively, explicitly specify the compressor(s).

Wait for the member to recover to SECONDARY state before downgrading the next secondary member. To check the member's state, connect a mongo shell to the shard and run the rs.status() method.
Repeat to downgrade for each secondary member.
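As a rough illustration of the state check (not official tooling), a helper like this can inspect the document returned by rs.status(); the member names below are hypothetical:

```javascript
// Sketch: given the document returned by rs.status(), report whether a
// given member (by host:port name) has reached SECONDARY state.
function memberIsSecondary(status, hostAndPort) {
  return (status.members || []).some(function (m) {
    return m.name === hostAndPort && m.stateStr === "SECONDARY";
  });
}

// Hypothetical rs.status() excerpt:
const status = {
  members: [
    { name: "shard0-a:27018", stateStr: "PRIMARY" },
    { name: "shard0-b:27018", stateStr: "SECONDARY" }
  ]
};
console.log(memberIsSecondary(status, "shard0-b:27018")); // true
```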
Downgrade the shard arbiter, if any.
Skip this step if the replica set does not include an arbiter.
Shut down the mongod. See Stop mongod Processes for additional ways to safely terminate mongod processes.
db.adminCommand( { shutdown: 1 } )
Delete the arbiter data directory contents. The storage.dbPath configuration setting or --dbpath command-line option specifies the data directory of the arbiter mongod.
rm -rf /path/to/mongodb/datafiles/*
Replace the 4.2 binary with the 4.0 binary and restart.
If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.
If the mongod instance used zstd data compression:

- Update --dbpath to the new directory (created during the prerequisites).
- Update --wiredTigerCollectionBlockCompressor to use the default snappy compressor (or, alternatively, explicitly set to a 4.0 supported compressor).

If the mongod instance used zstd journal compression:

- Update --wiredTigerJournalCompressor to use the default snappy compressor (or, alternatively, explicitly set to a 4.0 supported compressor).

If the mongod instance included zstd network message compression:

- Update --networkMessageCompressors to enable message compression using the default snappy,zlib compressors. Alternatively, explicitly specify the compressor(s).

Wait for the member to recover to ARBITER state. To check the member's state, connect a mongo shell to the member and run the rs.status() method.

Downgrade the shard's primary.
Step down the shard's primary. Connect a mongo
shell to the primary and use rs.stepDown()
to step down the primary and force an election of a new primary:
rs.stepDown()
When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, downgrade the stepped-down primary:

Shut down the stepped-down primary.
db.adminCommand( { shutdown: 1 } )
Replace the 4.2 binary with the 4.0 binary and restart.
If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.
If the mongod instance used zstd data compression:

- Update --dbpath to the new directory (created during the prerequisites).
- Update --wiredTigerCollectionBlockCompressor to use the default snappy compressor (or, alternatively, explicitly set to a 4.0 supported compressor).

If the mongod instance used zstd journal compression:

- Update --wiredTigerJournalCompressor to use the default snappy compressor (or, alternatively, explicitly set to a 4.0 supported compressor).

If the mongod instance included zstd network message compression:

- Update --networkMessageCompressors to enable message compression using the default snappy,zlib compressors. Alternatively, explicitly specify the compressor(s).

Repeat for the remaining shards.
Downgrade the secondary members of the config servers replica set (CSRS) one at a time:
Shut down the mongod
instance.
db.adminCommand( { shutdown: 1 } )
Replace the 4.2 binary with the 4.0 binary and restart.
If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.
If the mongod instance used zstd data compression:

- Update --dbpath to the new directory (created during the prerequisites).
- Update --wiredTigerCollectionBlockCompressor to use the default snappy compressor (or, alternatively, explicitly set to a 4.0 supported compressor).

If the mongod instance used zstd journal compression:

- Update --wiredTigerJournalCompressor to use the default snappy compressor (or, alternatively, explicitly set to a 4.0 supported compressor).

If the mongod instance included zstd network message compression:

- Update --networkMessageCompressors to enable message compression using the default snappy,zlib compressors. Alternatively, explicitly specify the compressor(s).

Wait for the member to recover to SECONDARY state before downgrading the next secondary member. To check the member's state, connect a mongo shell to the member and run the rs.status() method.
Repeat to downgrade for each secondary member.
Step down the config server primary.
Connect a mongo
shell to the primary and use rs.stepDown()
to step down the primary and force an election of a new primary:
rs.stepDown()
When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, downgrade the stepped-down primary:

Shut down the stepped-down primary.
db.adminCommand( { shutdown: 1 } )
Replace the 4.2 binary with the 4.0 binary and restart.
If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.
If the mongod instance used zstd data compression:

- Update --dbpath to the new directory (created during the prerequisites).
- Update --wiredTigerCollectionBlockCompressor to use the default snappy compressor (or, alternatively, explicitly set to a 4.0 supported compressor).

If the mongod instance used zstd journal compression:

- Update --wiredTigerJournalCompressor to use the default snappy compressor (or, alternatively, explicitly set to a 4.0 supported compressor).

If the mongod instance included zstd network message compression:

- Update --networkMessageCompressors to enable message compression using the default snappy,zlib compressors. Alternatively, explicitly specify the compressor(s).

Once the downgrade of sharded cluster components is complete, connect to the mongos and restart the balancer.
sh.startBalancer();
To verify that the balancer is enabled, run sh.getBalancerState()
:
sh.getBalancerState()
If the balancer is enabled, the method returns true.
When you stopped the balancer as part of the downgrade process, the sh.stopBalancer() method also disabled auto-splitting.

Once downgraded to MongoDB 4.0, sh.startBalancer() does not re-enable auto-splitting. If you wish to re-enable auto-splitting, run sh.enableAutoSplit():
sh.enableAutoSplit()