If you run MongoDB on Ubuntu 16.04 for POWER, you must upgrade the glibc package to at least glibc 2.23-0ubuntu5 before running MongoDB. Systems with older versions of the glibc package will experience database server crashes and misbehavior due to random memory corruption, and are unsuitable for production deployments of MongoDB.
Before you attempt any upgrade, please familiarize yourself with the content of this document.
If you need guidance on upgrading to 3.4, MongoDB offers major version upgrade services to help ensure a smooth transition without interruption to your MongoDB application.
When upgrading, consider the following:
To upgrade an existing MongoDB deployment to 3.4, you must be running a 3.2-series release.
To upgrade from a version earlier than the 3.2-series, you must successively upgrade major releases until you have upgraded to a 3.2-series release. For example, if you are running a 3.0-series release, you must upgrade to 3.2 before you can upgrade to 3.4.
Before you upgrade MongoDB, check that you're using a MongoDB 3.4-compatible driver. Consult the driver documentation for your specific driver to verify compatibility with MongoDB 3.4.
Upgraded deployments that run on incompatible drivers might encounter unexpected or undefined behavior.
Before beginning your upgrade, see the Compatibility Changes in MongoDB 3.4 document to ensure that your applications and deployments are compatible with MongoDB 3.4. Resolve the incompatibilities in your deployment before starting the upgrade.
Always test your application in a staging environment before deploying the upgrade to your production environment.
Once upgraded to 3.4, you cannot downgrade to a 3.2.7 or earlier version. You can only downgrade to a 3.2.8 or later version.
Avoid reconfiguring replica sets that contain members of different MongoDB versions as validation rules may differ across MongoDB versions.
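As a quick preflight check, you can confirm from the mongo shell that each deployment member is already on a 3.2-series release before you begin; `db.version()` reports the server version of the connected instance:

```javascript
// Connect a mongo shell to each member and confirm it runs a 3.2-series release.
// The value must be "3.2.x" before you upgrade that member to 3.4.
db.version()
```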
Version 3.4 mongos instances cannot connect to earlier versions of mongod instances.
The 3.2 and earlier mongo shell is not compatible with 3.4 clusters.
Starting in 3.4, the use of the deprecated mirrored mongod instances as config servers (SCCC) is no longer supported.
Before you can upgrade your sharded clusters to 3.4, you must convert your config servers from SCCC to a replica set (CSRS). To convert your config servers from SCCC to CSRS, see Upgrade Config Servers to Replica Set.
As a precaution, take a backup of the config database before upgrading the sharded cluster.
If you installed MongoDB from the MongoDB apt, yum, dnf, or zypper repositories, you should upgrade to 3.4 using your package manager.
Follow the appropriate 3.4 installation instructions for your Linux system. This will involve adding a repository for the new release, then performing the actual upgrade process.
If you have not installed MongoDB using a package manager, you can manually download the MongoDB binaries from the MongoDB Download Center.
See 3.4 installation instructions for more information.
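As a sketch, on an Ubuntu 16.04 system using apt, adding the 3.4 repository and upgrading might look like the following; the repository line is illustrative, so follow the 3.4 installation instructions for your specific distribution and release:

```shell
# Add the MongoDB 3.4 repository (example for Ubuntu 16.04 "xenial")
echo "deb [ arch=amd64,arm64 ] http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.4 multiverse" | \
    sudo tee /etc/apt/sources.list.d/mongodb-org-3.4.list

# Refresh the package index and upgrade the MongoDB packages
sudo apt-get update
sudo apt-get install -y mongodb-org
```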
Disable the balancer as described in Disable the Balancer.
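From a mongo shell connected to a mongos, disabling the balancer and confirming its state can look like:

```javascript
sh.stopBalancer()        // disable the balancer and wait for any in-progress migration to finish
sh.getBalancerState()    // should return false once the balancer is disabled
```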
If the config servers are replica sets:
Upgrade the secondary members of the replica set one at a time:
Shut down the secondary mongod instance and replace the 3.2 binary with the 3.4 binary. Start the 3.4 binary with both the --configsvr and --port options:
mongod --configsvr --port <port> --dbpath <path>
If using a configuration file, update the file to specify sharding.clusterRole: configsvr and net.port, and start the 3.4 binary:
sharding:
   clusterRole: configsvr
net:
   port: <port>
storage:
   dbPath: <path>
Include any other configuration as appropriate for your deployment.
Wait for the member to recover to SECONDARY state before upgrading the next secondary member. To check the member's state, issue rs.status() in the mongo shell.
Repeat for each secondary member.
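The shut-down-and-replace step depends on how the member is managed. As a sketch, for a systemd-managed mongod whose configuration file has already been updated, with the 3.4 tarball unpacked in /opt/mongodb-3.4 (an assumed path), it might look like:

```shell
sudo systemctl stop mongod                           # shut down the member cleanly
sudo cp /opt/mongodb-3.4/bin/mongod /usr/bin/mongod  # replace the 3.2 binary with the 3.4 binary
sudo systemctl start mongod                          # start the 3.4 binary with the updated config file
```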
Step down the replica set primary.
Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:
rs.stepDown()
When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, shut down the stepped-down primary and replace the mongod binary with the 3.4 binary. Start the 3.4 binary with both the --configsvr and --port options:
mongod --configsvr --port <port> --dbpath <path>
If using a configuration file, update the file to specify sharding.clusterRole: configsvr and net.port, and start the 3.4 binary:
sharding:
   clusterRole: configsvr
net:
   port: <port>
storage:
   dbPath: <path>
Include any other configuration as appropriate for your deployment.
Upgrade the shards one at a time. If the shards are replica sets, for each shard:
Upgrade the secondary members of the replica set one at a time:
Shut down the secondary mongod instance and replace the 3.2 binary with the 3.4 binary. Start the 3.4 binary with the --shardsvr and --port command line options:
mongod --shardsvr --port <port> --dbpath <path>
Or, if using a configuration file, update the file to include sharding.clusterRole: shardsvr and net.port, and start the 3.4 binary:
sharding:
   clusterRole: shardsvr
net:
   port: <port>
storage:
   dbPath: <path>
Include any other configuration as appropriate for your deployment.
Wait for the member to recover to SECONDARY state before upgrading the next secondary member. To check the member's state, issue rs.status() in the mongo shell.
Repeat for each secondary member.
Step down the replica set primary.
Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:
rs.stepDown()
When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, upgrade the stepped-down primary:
Shut down the mongod instance and replace the 3.2 binary with the 3.4 binary. Start the 3.4 binary with the --shardsvr and --port command line options:
mongod --shardsvr --port <port> --dbpath <path>
Or, if using a configuration file, update the file to specify sharding.clusterRole: shardsvr and net.port, and start the 3.4 binary:
sharding:
   clusterRole: shardsvr
net:
   port: <port>
storage:
   dbPath: <path>
Include any other configuration as appropriate for your deployment.
Upgrade the mongos instances. Replace each mongos instance with the 3.4 binary and restart it. Include any other configuration as appropriate for your deployment.
mongos --configdb csReplSet/<rsconfigsver1:port1>,<rsconfigsver2:port2>,<rsconfigsver3:port3>
Using a 3.4 mongo shell, re-enable the balancer as described in Enable the Balancer.
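From a 3.4 mongo shell connected to a mongos, re-enabling the balancer and confirming its state can look like:

```javascript
sh.setBalancerState(true)   // re-enable the balancer
sh.getBalancerState()       // should return true once the balancer is enabled
```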
The 3.2 and earlier mongo shell is not compatible with 3.4 clusters.
At this point, you can run the 3.4 binaries without the 3.4 features that are incompatible with 3.2.
To enable these 3.4 features, set the feature compatibility version to 3.4.
Enabling these backwards-incompatible features can complicate the downgrade process. For details, see Remove 3.4 Incompatible Features.
It is recommended that after upgrading, you allow your deployment to run without these features enabled for a burn-in period, so that you can confirm the upgrade is stable before committing to it. When you are confident that a downgrade is unlikely, enable these features.
On a mongos instance, run the setFeatureCompatibilityVersion command in the admin database:
db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )
This command must perform writes to an internal system collection. If for any reason the command does not complete successfully, you can safely retry the command on the mongos as the operation is idempotent.
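You can verify the result by reading the featureCompatibilityVersion parameter back on the admin database:

```javascript
// Returns the current feature compatibility version; after a successful
// setFeatureCompatibilityVersion: "3.4", this reports "3.4".
db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
```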