The architecture of a replica set affects the set's capacity and capability. This document provides strategies for replica set deployments and describes common architectures.
The standard replica set deployment for a production system is a three-member replica set. These sets provide redundancy and fault tolerance. Avoid complexity when possible, but let your application requirements dictate the architecture.
Note
Outside of a rolling upgrade, all mongod members of a replica set should use the same major version of MongoDB.
Strategies
Determine the Number of Members
Add members in a replica set according to these strategies.
Maximum Number of Voting Members
A replica set can have up to 50 members, but only 7 voting members. If the replica set already has 7 voting members, additional members must be non-voting members.
Deploy an Odd Number of Members
Ensure that the replica set has an odd number of voting members. A replica set can have up to 7 voting members. If you have an even number of voting members, deploy another data-bearing voting member or, if constraints prohibit adding another data-bearing voting member, an arbiter.
An arbiter does not store a copy of the data and requires fewer resources. As a result, you may run an arbiter on an application server or other shared resource. Because it holds no copy of the data, you may be able to place an arbiter in environments where you would not place other members of the replica set. Consult your security policies.
Warning
Avoid deploying more than one arbiter in a replica set. See Concerns with Multiple Arbiters.
To add an arbiter to an existing replica set:
- Typically, if there are two or fewer data-bearing members in the replica set, you need to first set the cluster wide write concern for the replica set, as shown in the sketch below.
- See cluster wide write concern for more information on why this step is necessary.
You do not need to change the cluster wide write concern before starting a new replica set with an arbiter.
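As a rough sketch of that sequence (the hostname, port, and the `{ w: 1 }` value are placeholders, not prescribed settings), the cluster wide write concern can be set with the `setDefaultRWConcern` command before the arbiter is added with `rs.addArb()`:

```javascript
// Run in mongosh while connected to the primary.
// If the set has two or fewer data-bearing members, set the cluster wide
// write concern first. The value { w: 1 } is an illustrative assumption.
db.adminCommand({
  setDefaultRWConcern: 1,
  defaultWriteConcern: { w: 1 }
})

// Add the arbiter. The hostname and port are placeholders.
rs.addArb("arbiter.example.net:27017")
```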
Consider Fault Tolerance
Fault tolerance for a replica set is the number of members that can become unavailable and still leave enough members in the set to elect a primary. In other words, it is the difference between the number of members in the set and the majority of voting members needed to elect a primary. Without a primary, a replica set cannot accept write operations. Fault tolerance is an effect of replica set size, but the relationship is not direct. See the following table:
| Number of Members | Majority Required to Elect a New Primary | Fault Tolerance |
|---|---|---|
| 3 | 2 | 1 |
| 4 | 3 | 1 |
| 5 | 3 | 2 |
| 6 | 4 | 2 |
Adding a member to the replica set does not always increase the fault tolerance. However, in these cases, additional members can provide support for dedicated functions, such as backups or reporting.
rs.status() returns majorityVoteCount for the replica set.
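For example, the following mongosh snippet (a minimal sketch; the field names match the replSetGetStatus output) compares the member count against the majority needed to elect a primary:

```javascript
// Run in mongosh against any replica set member.
const status = rs.status()
printjson({
  totalMembers: status.members.length,
  majorityVoteCount: status.majorityVoteCount
})
```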
Use Hidden and Delayed Members for Dedicated Functions
Add hidden or delayed members to support dedicated functions, such as backup or reporting.
Read-Heavy Applications
A replica set is designed for high availability and redundancy. In most cases, secondary members operate under loads similar to those of the primary. You should not direct reads to secondaries.
If you have a read-heavy application, consider using Mongosync to replicate data to another cluster for reading.
For more information on secondary read modes, see: secondary and secondaryPreferred.
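If your workload does call for one of these read modes, a query-level read preference can be expressed in mongosh as in the following sketch (the collection name and filter are placeholders):

```javascript
// Route this query to a secondary when one is available,
// falling back to the primary otherwise.
db.orders.find({ status: "pending" }).readPref("secondaryPreferred")
```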
Add Capacity Ahead of Demand
The existing members of a replica set must have spare capacity to support adding a new member. Always add new members before the current demand saturates the capacity of the set.
Distribute Members Geographically
To protect your data in case of a data center failure, keep at least one member in an alternate data center. If possible, use an odd number of data centers, and choose a distribution of members that maximizes the likelihood that, even with the loss of a data center, the remaining replica set members can form a majority or, at minimum, provide a copy of your data.
Note
For production deployments, we recommend deploying config server and shard replica sets on at least three data centers. This configuration provides high availability in case a single data center goes down.
To ensure that the members in your main data center are elected primary before the members in the alternate data center, set the members[n].priority of the members in the alternate data center to be lower than that of the members in the main data center.
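As a sketch (assuming a three-member set in which members[2] is the member in the alternate data center), you can lower that member's priority with rs.reconfig():

```javascript
// Run in mongosh while connected to the primary.
cfg = rs.conf()
// Lower the priority of the alternate data center member so that
// members in the main data center are preferred during elections.
cfg.members[2].priority = 0.5
rs.reconfig(cfg)
```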
For more information, see Replica Sets Distributed Across Two or More Data Centers.
Target Operations with Tag Sets
Use replica set tag sets to target read operations to specific members or to customize write concern to request acknowledgment from specific members.
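The following mongosh sketch illustrates the idea; the tag names and values (dc: "east" / dc: "west") are illustrative assumptions, not required conventions:

```javascript
// Tag each member with its data center, then reconfigure the set.
cfg = rs.conf()
cfg.members[0].tags = { dc: "east" }
cfg.members[1].tags = { dc: "east" }
cfg.members[2].tags = { dc: "west" }
rs.reconfig(cfg)

// Target reads at members whose tags match the given tag set.
db.orders.find().readPref("nearest", [ { dc: "east" } ])
```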
Use Journaling to Protect Against Power Failures
MongoDB enables journaling by default. Journaling protects against data loss in the event of service interruptions, such as power failures and unexpected reboots.
Hostnames
Important
To avoid configuration updates due to IP address changes, use DNS hostnames instead of IP addresses. It is particularly important to use a DNS hostname instead of an IP address when configuring replica set members or sharded cluster members.
Use hostnames instead of IP addresses to configure clusters across a split network horizon. Starting in MongoDB 5.0, nodes that are only configured with an IP address fail startup validation and do not start.
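A minimal initiation sketch using DNS hostnames (the set name rs0 and the example.net hostnames are placeholders):

```javascript
// Run in mongosh against the member that will initiate the set.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongodb0.example.net:27017" },
    { _id: 1, host: "mongodb1.example.net:27017" },
    { _id: 2, host: "mongodb2.example.net:27017" }
  ]
})
```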
Replica Set Naming
If your application connects to more than one replica set, each set must have a distinct name. Some drivers group replica set connections by replica set name.
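For example, a driver connection string typically carries the set name in the replicaSet option (the name rs0 and the hostnames below are placeholders):

```javascript
// Connections made with this URI are grouped under the replica set named "rs0".
const uri =
  "mongodb://mongodb0.example.net:27017,mongodb1.example.net:27017/?replicaSet=rs0"
```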
Deployment Patterns
The following documents describe common replica set deployment patterns. Other patterns are possible and effective depending on the application's requirements. If needed, combine features of each architecture in your own deployment:
- Three Member Replica Sets: Three-member replica sets provide the minimum recommended architecture for a replica set.
- Replica Sets Distributed Across Two or More Data Centers: Geographically distributed sets include members in multiple locations to protect against facility-specific failures, such as power outages.