Downgrade 4.2 Sharded Cluster to 4.0
Before you attempt any downgrade, familiarize yourself with the content of this document.
Downgrade Path
Before you upgrade or downgrade a replica set, ensure all replica set members are running. If you do not, the upgrade or downgrade will not complete until all members are started.
If you need to downgrade from 4.2, downgrade to the latest patch release of 4.0.
If you downgrade,
- On Windows, downgrade to version 4.0.12 or later. You cannot downgrade to version 4.0.11 or earlier.
- On Linux/macOS, if you are running change streams and want to resume them seamlessly, downgrade to version 4.0.7 or later.
Considerations
Starting in MongoDB 4.2, change streams are available regardless of "majority" read concern support; that is, read concern majority support can be either enabled (the default) or disabled when using change streams.
In MongoDB 4.0 and earlier, change streams are available only if "majority" read concern support is enabled (the default).
Once you downgrade to the 4.0 series, change streams are disabled if you have disabled read concern "majority".
Create Backup
Optional but Recommended. Create a backup of your database.
Access Control
If your sharded cluster has access control enabled, your downgrade user privileges must include additional privileges to manage indexes on the config database.
db.getSiblingDB("admin").createRole({
   role: "configIndexRole",
   privileges: [
      {
         resource: { db: "config", collection: "" },
         actions: [ "find", "dropIndex", "createIndex", "listIndexes" ]
      }
   ],
   roles: [ ]
});
Add the newly created role to your downgrade user. For example, if you have a user myDowngradeUser in the admin database that already has the root role, use db.grantRolesToUser() to grant the additional role:
db.getSiblingDB("admin").grantRolesToUser( "myDowngradeUser",
[ { role: "configIndexRole", db: "admin" } ],
{ w: "majority", wtimeout: 4000 }
);
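To confirm that the grant took effect, you can inspect the user document; myDowngradeUser is the example user from above:

```
db.getSiblingDB("admin").getUser("myDowngradeUser")
```

The roles array in the returned document should list both root and configIndexRole.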
Prerequisites
To downgrade from 4.2 to 4.0, you must remove incompatible features that are persisted and/or update incompatible configuration settings. These include:
1. Downgrade Feature Compatibility Version (fCV)
To downgrade the featureCompatibilityVersion of your sharded cluster:
- Connect a mongo shell to the mongos instance.
- Downgrade the featureCompatibilityVersion to "4.0":

  db.adminCommand({setFeatureCompatibilityVersion: "4.0"})

  The setFeatureCompatibilityVersion command performs writes to an internal system collection and is idempotent. If for any reason the command does not complete successfully, retry the command on the mongos instance.
- To ensure that all members of the sharded cluster reflect the updated featureCompatibilityVersion, connect to each shard replica set member and each config server replica set member and check the featureCompatibilityVersion:

  Tip: For a sharded cluster that has access control enabled, to run the following command against a shard replica set member, you must connect to the member as a shard local user.

  db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )

  All members should return a result that includes:

  "featureCompatibilityVersion" : { "version" : "4.0" }

  If any member returns a featureCompatibilityVersion of "4.2", wait for the member to reflect version "4.0" before proceeding.
Arbiters do not replicate the admin.system.version collection. Because of this, arbiters always have a feature compatibility version equal to the downgrade version of the binary, regardless of the fCV value of the replica set.
For example, an arbiter in a MongoDB 4.2 cluster has an fCV value of 4.0.
For more information on the returned featureCompatibilityVersion value, see Get FeatureCompatibilityVersion.
2. Remove fCV 4.2 Persisted Features
The following steps are necessary only if fCV has ever been set to "4.2".
Remove all persisted 4.2 features that are incompatible with 4.0. These include:
2a. Index Key Size
Starting in MongoDB 4.2, for featureCompatibilityVersion (fCV)
set to "4.2" or greater, MongoDB removes the Index Key Limit. For fCV set to "4.0", the limit still applies.
If you have an index with keys that exceed the Index Key Limit once fCV is set to "4.0", consider changing the index to a hashed index or to indexing a computed value. You can also temporarily use failIndexKeyTooLong set to false before resolving the problem. However, with failIndexKeyTooLong set to false, queries that use these indexes can return incomplete results.
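For example, suppose a collection stores long URLs and an index on that field exceeds the 4.0 key limit. One option is to replace it with a hashed index, which stores a hash of the value rather than the full key (the collection and field names here are hypothetical):

```
// Replace the oversized index with a hashed index on the same field.
db.links.dropIndex( { longUrl: 1 } )
db.links.createIndex( { longUrl: "hashed" } )
```

Note that a hashed index supports equality matches but not range queries on the field.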
2b. Index Name Length
Starting in MongoDB 4.2, for featureCompatibilityVersion (fCV)
set to "4.2" or greater, MongoDB removes the Index Name Length limit. For fCV set to "4.0", the limit still applies.
If you have an index with a name that exceeds the Index Name Length once fCV is set to "4.0", drop and recreate the index with a shorter name.
db.collection.dropIndex( <name | index specification> )
db.collection.createIndex(
   { <index specification> },
   { name: <shorter name> }
)
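As a rough sketch of the constraint involved: in 4.0, the fully qualified index name, <database>.<collection>.$<index name>, cannot exceed 127 bytes. The helper below is hypothetical, for illustration only (it uses Node.js's Buffer to count bytes):

```javascript
// Hypothetical helper: returns true if an index's fully qualified name
// (<database>.<collection>.$<index name>) exceeds the 127-byte limit
// that MongoDB 4.0 enforces.
function exceedsIndexNameLimit(dbName, collName, indexName) {
  const fullName = dbName + "." + collName + ".$" + indexName;
  return Buffer.byteLength(fullName, "utf8") > 127;
}
```

Indexes for which this returns true need a shorter name before the downgrade.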
2c. Unique Index Version
With featureCompatibilityVersion (fCV) "4.2", MongoDB uses a new internal format for unique indexes that is incompatible with MongoDB 4.0. The new internal format applies to both existing unique indexes as well as newly created/rebuilt unique indexes.
If fCV has ever been set to "4.2", use the following script to drop and recreate all unique indexes.
Script to run on mongos:

// A script to rebuild unique indexes after downgrading fCV 4.2 to 4.0.
// Run this script on mongos to drop and recreate unique indexes
// for backwards compatibility with 4.0.
db.adminCommand("listDatabases").databases.forEach(function(d){
let mdb = db.getSiblingDB(d.name);
mdb.getCollectionInfos( { type: "collection" } ).forEach(function(c){
let currentCollection = mdb.getCollection(c.name);
currentCollection.getIndexes().forEach(function(idx){
if (idx.unique){
print("Dropping and recreating the following index:" + tojson(idx))
assert.commandWorked(mdb.runCommand({dropIndexes: c.name, index: idx.name}));
let res = mdb.runCommand({ createIndexes: c.name, indexes: [idx] });
if (res.ok !== 1)
assert.commandWorked(res);
}
});
});
});

Script to run on shards:

After you have run the script on mongos, you need to check individual shards if you have created shard local users. That is, if you created maintenance users directly on the shards instead of through mongos, run the script on the primary member of each such shard.

// A script to rebuild unique indexes after downgrading fCV 4.2 to 4.0.
// Run this script on shards to drop and recreate unique indexes
// for backwards compatibility with 4.0.
let mdb = db.getSiblingDB('admin');
mdb.getCollectionInfos( { type: "collection" } ).forEach(function(c){
let currentCollection = mdb.getCollection(c.name);
currentCollection.getIndexes().forEach(function(idx){
if (idx.unique){
print("Dropping and recreating the following index:" + tojson(idx))
assert.commandWorked(mdb.runCommand({dropIndexes: c.name, index: idx.name}));
let res = mdb.runCommand({ createIndexes: c.name, indexes: [idx] });
if (res.ok !== 1)
assert.commandWorked(res);
}
});
});
2d. Remove user_1_db_1 System Unique Index
In addition, if you have enabled access control, you must also remove the system unique index user_1_db_1 on the admin.system.users collection.
If fCV has ever been set to "4.2", use the following command to drop the user_1_db_1 system unique index:
db.getSiblingDB("admin").getCollection("system.users").dropIndex("user_1_db_1")
The user_1_db_1 index will automatically be rebuilt when starting the server with the 4.0 binary in the procedure below.
2e. Remove Wildcard Indexes
For featureCompatibilityVersion (fCV) set to "4.2", MongoDB supports creating Wildcard Indexes. You must drop all wildcard indexes before downgrading to fCV "4.0".
Use the following script to drop all wildcard indexes:
// A script to drop wildcard indexes before downgrading fcv 4.2 to 4.0.
// Run this script to drop wildcard indexes
// for backwards compatibility with 4.0.
db.adminCommand("listDatabases").databases.forEach(function(d){
let mdb = db.getSiblingDB(d.name);
mdb.getCollectionInfos({ type: "collection" }).forEach(function(c){
let currentCollection = mdb.getCollection(c.name);
currentCollection.getIndexes().forEach(function(idx){
var key = Object.keys(idx.key);
if (key[0].includes("$**")) {
print("Dropping index: " + idx.name + " from " + idx.ns);
let res = mdb.runCommand({dropIndexes: c.name, index: idx.name});
assert.commandWorked(res);
}
});
});
});
Downgrading to fCV "4.0" during an in-progress wildcard index build does not automatically drop or kill the index build. The index build can complete after downgrading to fCV "4.0", resulting in a valid wildcard index on the collection. Starting the 4.0 binary against that data directory will result in startup failures.
Use db.currentOp() to check for any in-progress wildcard index builds. Once any in-progress wildcard index builds complete, run the script to drop them before downgrading to fCV "4.0".
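For example, one way to filter db.currentOp() output for active index builds is to match on createIndexes commands (the exact shape of the operation document can vary by version and by how the build was started, so treat this filter as a starting point):

```
db.currentOp({
   $or: [
      { "command.createIndexes": { $exists: true } },
      { msg: /^Index Build/ }
   ]
})
```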
2f. View Definitions/Collection Validation Definitions that Include 4.2 Operators
Before downgrading the binaries, modify read-only view definitions and collection validation definitions that include the 4.2 operators, such as $set, $unset, $replaceWith.
- For the $set stage, use the $addFields stage instead.
- For the $replaceWith stage, use the $replaceRoot stage instead.
- For the $unset stage, use the $project stage instead.

You can modify a view either by:

- dropping the view (db.myview.drop() method) and recreating the view (db.createView() method), or
- using the collMod command.

You can modify the collection validation expressions by:

- using the collMod command.
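The substitutions above are mechanical: $set maps directly to $addFields, $replaceWith: <expr> maps to $replaceRoot: { newRoot: <expr> }, and $unset maps to an exclusion $project. The helper below is a hypothetical sketch of that rewrite for a pipeline array, assuming each stage uses at most one of these operators:

```javascript
// Hypothetical sketch: rewrite 4.2-only aggregation stages into their
// 4.0-compatible equivalents.
function downgradePipeline(pipeline) {
  return pipeline.map(function (stage) {
    if ("$set" in stage) {
      // $set is an alias for $addFields
      return { $addFields: stage.$set };
    }
    if ("$replaceWith" in stage) {
      // $replaceWith: <expr> is shorthand for $replaceRoot: { newRoot: <expr> }
      return { $replaceRoot: { newRoot: stage.$replaceWith } };
    }
    if ("$unset" in stage) {
      // $unset takes a field or an array of fields; $project excludes with 0
      const fields = Array.isArray(stage.$unset) ? stage.$unset : [stage.$unset];
      const projection = {};
      fields.forEach(function (f) { projection[f] = 0; });
      return { $project: projection };
    }
    return stage;
  });
}
```

Pass the rewritten pipeline to db.createView() or the collMod command when updating each view definition.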
3. Update tls-Prefixed Configuration
Starting in MongoDB 4.2, MongoDB adds "tls"-prefixed options as aliases for the "ssl"-prefixed options.
If your deployments or clients use the "tls"-prefixed options, replace with the corresponding "ssl"-prefixed options for the mongod, the mongos, and the mongo shell and drivers.
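For example, in a mongod configuration file, a 4.2 "tls" block maps to its 4.0 "ssl" equivalents as follows (file paths are placeholders):

```
# 4.2 configuration
net:
  tls:
    mode: requireTLS
    certificateKeyFile: /path/to/mongodb.pem

# 4.0 equivalent
net:
  ssl:
    mode: requireSSL
    PEMKeyFile: /path/to/mongodb.pem
```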
4. Prepare Downgrade from zstd Compression
zstd Journal Compression
The zstd compression library is available for journal data compression starting in version 4.2.
For any shard or config server member that uses zstd library for its journal compressor:
- If the member uses zstd for both journal compression and data compression:
  - If using a configuration file, delete storage.wiredTiger.engineConfig.journalCompressor to use the default compressor (snappy) or set it to another 4.0-supported compressor.
  - If using command-line options instead, you will have to update the options in the procedure below.
- If the member uses zstd for journal compression only:
The following procedure involves restarting the replica member as a standalone without the journal.
- Perform a clean shutdown of the mongod instance:

  db.getSiblingDB('admin').shutdownServer()

- Update the configuration file to prepare to restart as a standalone:
  - Set storage.journal.enabled to false.
  - Set parameter skipShardingConfigurationChecks to true in the setParameter section.
  - Set parameter disableLogicalSessionCacheRefresh to true in the setParameter section.
  - Comment out the replication settings for your deployment.
  - Comment out the sharding.clusterRole setting.
  - Set net.port to the member's current port, if it is not explicitly set.
For example:
storage:
journal:
enabled: false
setParameter:
skipShardingConfigurationChecks: true
disableLogicalSessionCacheRefresh: true
#replication:
# replSetName: shardA
#sharding:
# clusterRole: shardsvr
net:
  port: 27218

If you use command-line options instead of a configuration file, you will have to update the command-line options during the restart.
- Set
- Restart the mongod instance:
  - If you are using a configuration file:

    mongod -f <path/to/myconfig.conf>

  - If you are using command-line options instead of a configuration file:
    - Include the --nojournal option.
    - Set parameter skipShardingConfigurationChecks to true.
    - Set parameter disableLogicalSessionCacheRefresh to true in the --setParameter option.
    - Remove any replication command-line options (such as --replSet).
    - Remove the --shardsvr/--configsvr option.
    - Explicitly include --port set to the instance's current port.

    mongod --nojournal --setParameter skipShardingConfigurationChecks=true --setParameter disableLogicalSessionCacheRefresh=true --port <samePort> ...
- Perform a clean shutdown of the mongod instance:

  db.getSiblingDB('admin').shutdownServer()

  Confirm that the process is no longer running.
- Update the configuration file to prepare to restart with the new journal compressor:
  - Remove the storage.journal.enabled setting.
  - Remove the skipShardingConfigurationChecks parameter setting.
  - Remove the disableLogicalSessionCacheRefresh parameter setting.
  - Uncomment the replication settings for your deployment.
  - Uncomment the sharding.clusterRole setting.
  - Remove the storage.wiredTiger.engineConfig.journalCompressor setting to use the default journal compressor, or specify a new value.
For example:
storage:
wiredTiger:
engineConfig:
journalCompressor: <newValue>
replication:
replSetName: shardA
sharding:
clusterRole: shardsvr
net:
  port: 27218

If you use command-line options instead of a configuration file, you will have to update the command-line options during the restart below.
- Remove the
- Restart the mongod instance as a replica set member:
  - If you are using a configuration file:

    mongod -f <path/to/myconfig.conf>

  - If you are using command-line options instead of a configuration file:
    - Remove the --nojournal option.
    - Remove the skipShardingConfigurationChecks parameter setting.
    - Remove the disableLogicalSessionCacheRefresh parameter setting.
    - Remove the --wiredTigerJournalCompressor command-line option to use the default journal compressor, or update it to a new value.
    - Include the --shardsvr/--configsvr option.
    - Include your replication command-line options as well as any additional options for your replica set member.

    mongod --shardsvr --wiredTigerJournalCompressor <differentCompressor|none> --replSet ...
zstd Data Compression
If you also use zstd Journal Compression, perform these steps after you perform the prerequisite steps for the journal compressor.
The zstd compression library is available starting in version 4.2. For any config server member or shard member that has data stored using zstd compression, the downgrade procedure will require an initial sync for that member. To prepare:
- Create a new empty data directory for the mongod instance. This directory will be used in the downgrade procedure below.

  Important: Ensure that the user account running mongod has read and write permissions for the new directory.
- If you use a configuration file, update the file to prepare for the downgrade procedure:
  - Delete storage.wiredTiger.collectionConfig.blockCompressor to use the default compressor (snappy) or set it to another 4.0-supported compressor.
  - Update storage.dbPath to the new data directory.

  If you use command-line options instead, you will have to update the options in the procedure below.
Repeat for any other members that used zstd compression.
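For example, a member's storage settings might change as follows before the downgrade restart (the paths are placeholders); deleting blockCompressor falls back to the default snappy:

```
# Before (4.2, zstd data compression)
storage:
  dbPath: /data/db
  wiredTiger:
    collectionConfig:
      blockCompressor: zstd

# After (prepared for 4.0; the member performs an initial sync into the new directory)
storage:
  dbPath: /data/db-40
```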
zstd Network Compression
The zstd compression library is available for network message compression starting in version 4.2.
To prepare for the downgrade:
- For any mongod/mongos instance that uses zstd for network message compression and uses a configuration file, update the net.compression.compressors setting to prepare for the restart during the downgrade procedure. If you use command-line options instead, you will have to update the options in the procedure below.
- For any client that specifies zstd in its URI connection string, update the string to remove zstd from the list.
- For any mongo shell that specifies zstd in its --networkMessageCompressors option, update the option to remove zstd from the list.
Messages are compressed when both parties enable network compression. Otherwise, messages between the parties are uncompressed.
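For example, a driver connection string that lists zstd would drop it from the compressors list, and a mongo shell invocation would do the same (the host name is a placeholder):

```
# Before
mongodb://mongos0.example.net:27017/?compressors=zstd,snappy
# After
mongodb://mongos0.example.net:27017/?compressors=snappy

mongo --networkMessageCompressors "snappy" --host mongos0.example.net ...
```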
5. Remove Client-Side Field Level Encryption Document Validation Keywords
Remove client-side field level encryption code in applications prior to downgrading the server.
MongoDB 4.2 adds support for enforcing client-side field level encryption as part of a collection's Specify JSON Schema Validation document validation. Specifically, the $jsonSchema object supports the encrypt and encryptMetadata keywords. MongoDB 4.0 does not support these keywords and fails to start if any collection specifies those keywords as part of its validation $jsonSchema.
Use db.getCollectionInfos() on each database to identify collections specifying automatic field level encryption rules as part of the $jsonSchema validator. To prepare for downgrade, connect to a cluster mongos and perform either of the following operations for each collection using the 4.0-incompatible keywords:
- Use collMod to modify the collection's validator, replacing the $jsonSchema with a schema that contains only 4.0-compatible document validation syntax:

  db.runCommand({
     "collMod" : "<collection>",
     "validator" : {
        "$jsonSchema" : { <4.0-compatible schema object> }
     }
  })

  -or-
- Use collMod to remove the validator object entirely:

  db.runCommand({ "collMod" : "<collection>", "validator" : {} })
Procedure
Downgrade a Sharded Cluster
Before proceeding with the downgrade procedure, ensure that all members, including delayed replica set members in the sharded cluster, reflect the prerequisite changes. That is, check the featureCompatibilityVersion and the removal of incompatible features for each node before downgrading.
Download the latest 4.0 binaries.
Using either a package manager or a manual download, get the latest release in the 4.0 series. If using a package manager, add a new repository for the 4.0 binaries, then perform the actual downgrade process.
Before you upgrade or downgrade a replica set, ensure all replica set members are running. If you do not, the upgrade or downgrade will not complete until all members are started.
If you need to downgrade from 4.2, downgrade to the latest patch release of 4.0.
Disable the Balancer.
Connect a mongo shell to a mongos instance in the sharded cluster, and run sh.stopBalancer() to disable the balancer:
sh.stopBalancer()
If a migration is in progress, the system will complete the in-progress migration before stopping the balancer. You can run sh.isBalancerRunning() to check the balancer's current state.
To verify that the balancer is disabled, run sh.getBalancerState(), which returns false if the balancer is disabled:
sh.getBalancerState()
Starting in MongoDB 6.1, automatic chunk splitting is not performed. This is because of balancing policy improvements. Auto-splitting commands still exist, but do not perform an operation. For details, see Balancing Policy Changes.
In MongoDB versions earlier than 6.1, sh.stopBalancer() also disables auto-splitting for the sharded cluster.
For more information on disabling the balancer, see Disable the Balancer.
Downgrade the mongos instances.
Downgrade the binaries and restart.
If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.
- If your mongos command-line options include "tls"-prefixed options, update them to "ssl"-prefixed options.
- If the mongos instance included zstd network message compression, remove --networkMessageCompressors to use the default snappy,zlib compressors. Alternatively, specify the list of compressors to use.
Downgrade each shard, one at a time.
Downgrade the shards one at a time.
- Downgrade the shard's secondary members one at a time:
  - Shut down the mongod instance:

    db.adminCommand( { shutdown: 1 } )

  - Replace the 4.2 binary with the 4.0 binary and restart.

    Note: If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.
    - If your command-line options include "tls"-prefixed options, update them to "ssl"-prefixed options.
    - If the mongod instance used zstd data compression:
      - Update --dbpath to the new directory (created during the prerequisites).
      - Remove --wiredTigerCollectionBlockCompressor to use the default snappy compressor (or, alternatively, explicitly set it to a 4.0-supported compressor).
    - If the mongod instance used zstd journal compression:
      - Remove --wiredTigerJournalCompressor to use the default snappy compressor (or, alternatively, explicitly set it to a 4.0-supported compressor).
    - If the mongod instance included zstd network message compression:
      - Remove --networkMessageCompressors to enable message compression using the default snappy,zlib compressors. Alternatively, explicitly specify the compressor(s).
  - Wait for the member to recover to SECONDARY state before downgrading the next secondary member. To check the member's state, connect a mongo shell to the shard and run the rs.status() method.

    Repeat to downgrade each secondary member.
- Shut down the
- Downgrade the shard arbiter, if any.

  Skip this step if the replica set does not include an arbiter.
  - Shut down the mongod. See Stop mongod Processes for additional ways to safely terminate mongod processes.

    db.adminCommand( { shutdown: 1 } )

  - Delete the arbiter data directory contents. The storage.dbPath configuration setting or --dbpath command-line option specifies the data directory of the arbiter mongod.

    rm -rf /path/to/mongodb/datafiles/*

  - Replace the 4.2 binary with the 4.0 binary and restart.

    Note: If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.
    - If your command-line options include "tls"-prefixed options, update them to "ssl"-prefixed options.
    - If the mongod instance used zstd data compression:
      - Update --dbpath to the new directory (created during the prerequisites).
      - Remove --wiredTigerCollectionBlockCompressor to use the default snappy compressor (or, alternatively, explicitly set it to a 4.0-supported compressor).
    - If the mongod instance used zstd journal compression:
      - Remove --wiredTigerJournalCompressor to use the default snappy compressor (or, alternatively, explicitly set it to a 4.0-supported compressor).
    - If the mongod instance included zstd network message compression:
      - Remove --networkMessageCompressors to enable message compression using the default snappy,zlib compressors. Alternatively, explicitly specify the compressor(s).
  - Wait for the member to recover to ARBITER state. To check the member's state, connect a mongo shell to the member and run the rs.status() method.
- Shut down the
- Downgrade the shard's primary.
  - Step down the shard's primary. Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

    rs.stepDown()

  - When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, shut down the stepped-down primary:

    db.adminCommand( { shutdown: 1 } )

  - Replace the 4.2 binary with the 4.0 binary and restart.

    Note: If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.
    - If your command-line options include "tls"-prefixed options, update them to "ssl"-prefixed options.
    - If the mongod instance used zstd data compression:
      - Update --dbpath to the new directory (created during the prerequisites).
      - Remove --wiredTigerCollectionBlockCompressor to use the default snappy compressor (or, alternatively, explicitly set it to a 4.0-supported compressor).
    - If the mongod instance used zstd journal compression:
      - Remove --wiredTigerJournalCompressor to use the default snappy compressor (or, alternatively, explicitly set it to a 4.0-supported compressor).
    - If the mongod instance included zstd network message compression:
      - Remove --networkMessageCompressors to enable message compression using the default snappy,zlib compressors. Alternatively, explicitly specify the compressor(s).
Repeat for the remaining shards.
Downgrade the config servers.
- Downgrade the secondary members of the config server replica set (CSRS) one at a time:
  - Shut down the mongod instance:

    db.adminCommand( { shutdown: 1 } )

  - Replace the 4.2 binary with the 4.0 binary and restart.

    Note: If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.
    - If your command-line options include "tls"-prefixed options, update them to "ssl"-prefixed options.
    - If the mongod instance used zstd data compression:
      - Update --dbpath to the new directory (created during the prerequisites).
      - Remove --wiredTigerCollectionBlockCompressor to use the default snappy compressor (or, alternatively, explicitly set it to a 4.0-supported compressor).
    - If the mongod instance used zstd journal compression:
      - Remove --wiredTigerJournalCompressor to use the default snappy compressor (or, alternatively, explicitly set it to a 4.0-supported compressor).
    - If the mongod instance included zstd network message compression:
      - Remove --networkMessageCompressors to enable message compression using the default snappy,zlib compressors. Alternatively, explicitly specify the compressor(s).
  - Wait for the member to recover to SECONDARY state before downgrading the next secondary member. To check the member's state, connect a mongo shell to the member and run the rs.status() method.

    Repeat to downgrade each secondary member.
- Step down the config server primary.
  - Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

    rs.stepDown()

  - When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, shut down the stepped-down primary:

    db.adminCommand( { shutdown: 1 } )

  - Replace the 4.2 binary with the 4.0 binary and restart.

    Note: If you use command-line options instead of a configuration file, update the command-line options as appropriate during the restart.
    - If your command-line options include "tls"-prefixed options, update them to "ssl"-prefixed options.
    - If the mongod instance used zstd data compression:
      - Update --dbpath to the new directory (created during the prerequisites).
      - Remove --wiredTigerCollectionBlockCompressor to use the default snappy compressor (or, alternatively, explicitly set it to a 4.0-supported compressor).
    - If the mongod instance used zstd journal compression:
      - Remove --wiredTigerJournalCompressor to use the default snappy compressor (or, alternatively, explicitly set it to a 4.0-supported compressor).
    - If the mongod instance included zstd network message compression:
      - Remove --networkMessageCompressors to enable message compression using the default snappy,zlib compressors. Alternatively, explicitly specify the compressor(s).
Re-enable the balancer.
Once the downgrade of sharded cluster components is complete, connect to the mongos and restart the balancer.
sh.startBalancer();
To verify that the balancer is enabled, run sh.getBalancerState():
sh.getBalancerState()
If the balancer is enabled, the method returns true.
Re-enable autosplit.
When you stopped the balancer as part of the downgrade process, the sh.stopBalancer() method also disabled auto-splitting.
Once downgraded to MongoDB 4.0, sh.startBalancer() does not re-enable auto-splitting. To re-enable auto-splitting, run sh.enableAutoSplit():
sh.enableAutoSplit()