
Read Isolation, Consistency, and Recency

Isolation Guarantees

Read Uncommitted

Depending on the read concern, clients can see the results of writes before the writes are durable:

  • Regardless of a write's write concern, other clients using "local" or "available" read concern can see the result of a write operation before the write operation is acknowledged to the issuing client.
  • Clients using "local" or "available" read concern can read data which may be subsequently rolled back during replica set failovers.

For operations in a multi-document transaction, when a transaction commits, all data changes made in the transaction are saved and visible outside the transaction. That is, a transaction will not commit some of its changes while rolling back others.

Until a transaction commits, the data changes made in the transaction are not visible outside the transaction.

However, when a transaction writes to multiple shards, not all outside read operations need to wait for the result of the committed transaction to be visible across the shards. For example, if a transaction is committed and write 1 is visible on shard A but write 2 is not yet visible on shard B, an outside read at read concern "local" can read the results of write 1 without seeing write 2.
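This partial-visibility window can be sketched with a toy in-memory model (illustrative Python only, not MongoDB driver code):

```python
# Toy model (not MongoDB code): a transaction commits write 1 on shard A and
# write 2 on shard B, but shard A makes its write visible first.
shard_a = {"visible": []}
shard_b = {"visible": []}

def commit_on_shard(shard, write):
    shard["visible"].append(write)

def local_read(shards):
    # Read concern "local" returns whatever is currently visible on each
    # shard, without waiting for the commit to be visible on every shard.
    seen = []
    for shard in shards:
        seen.extend(shard["visible"])
    return seen

# The transaction has committed, but only shard A reflects it so far.
commit_on_shard(shard_a, "write 1")
assert local_read([shard_a, shard_b]) == ["write 1"]  # write 2 not yet seen

# Once shard B catches up, both writes are visible.
commit_on_shard(shard_b, "write 2")
assert local_read([shard_a, shard_b]) == ["write 1", "write 2"]
```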

Read uncommitted is the default isolation level and applies to mongod standalone instances as well as to replica sets and sharded clusters.

Read Uncommitted And Single Document Atomicity

Write operations are atomic with respect to a single document; i.e. if a write is updating multiple fields in the document, a read operation will never see the document with only some of the fields updated. However, although a client may not see a partially updated document, read uncommitted means that concurrent read operations may still see the updated document before the changes are made durable.

With a standalone mongod instance, a set of read and write operations to a single document is serializable. With a replica set, a set of read and write operations to a single document is serializable only in the absence of a rollback.

Read Uncommitted And Multiple Document Write

When a single write operation (e.g. db.collection.updateMany()) modifies multiple documents, the modification of each document is atomic, but the operation as a whole is not atomic.

When performing multi-document write operations, whether through a single write operation or multiple write operations, other operations may interleave.

For situations that require atomicity of reads and writes to multiple documents (in a single or multiple collections), MongoDB supports distributed transactions, including transactions on replica sets and sharded clusters.

For more information, see transactions.

Important

In most cases, a distributed transaction incurs a greater performance cost than single-document writes, and the availability of distributed transactions should not be a replacement for effective schema design. For many scenarios, the denormalized data model (embedded documents and arrays) will continue to be optimal for your data and use cases. That is, for many scenarios, modeling your data appropriately will minimize the need for distributed transactions.

For additional transaction usage considerations (such as the runtime limit and oplog size limit), see also Production Considerations.

Without isolating the multi-document write operations, MongoDB exhibits the following behavior:

  1. Non-point-in-time read operations. Suppose a read operation begins at time t1 and starts reading documents. A write operation then commits an update to one of the documents at some later time t2. The reader may see the updated version of the document, and therefore does not see a point-in-time snapshot of the data.
  2. Non-serializable operations. Suppose a read operation reads a document d1 at time t1 and a write operation updates d1 at some later time t3. This introduces a read-write dependency such that, if the operations were to be serialized, the read operation must precede the write operation. But also suppose that the write operation updates document d2 at time t2 and the read operation subsequently reads d2 at some later time t4. This introduces a write-read dependency which would instead require the read operation to come after the write operation in a serializable schedule. There is a dependency cycle which makes serializability impossible.
  3. Reads may miss matching documents that are updated during the course of the read operation.
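The dependency cycle in the second behavior can be sketched as a small graph check (illustrative Python, not MongoDB code):

```python
# Illustrative only: the read-write dependency (read must precede write) and
# the write-read dependency (write must precede read) form a cycle, so no
# serial order of the two operations exists.
edges = [
    ("read", "write"),  # the read saw d1 before the write updated it
    ("write", "read"),  # the read saw the write's update to d2
]

def has_cycle(edges):
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)

    def visit(node, path):
        if node in path:
            return True
        return any(visit(nxt, path | {node}) for nxt in graph.get(node, []))

    return any(visit(node, set()) for node in graph)

assert has_cycle(edges)  # the schedule is not serializable
```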

Cursor Snapshot

MongoDB cursors can return the same document more than once in some situations. As a cursor returns documents, other operations may interleave with the query. If one of these operations changes the indexed field on the index used by the query, then the cursor could return the same document more than once.

Queries that use unique indexes can, in some cases, return duplicate values. If a cursor using a unique index interleaves with a delete and insert of documents sharing the same unique value, the cursor may return the same unique value twice from different documents.
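The duplicate-return scenario can be simulated with a toy index scan (illustrative Python; the real cursor behavior is server-side):

```python
# Toy model: a cursor scans an index in ascending key order while an
# interleaved update moves a document's indexed value ahead of the scan,
# so the cursor encounters the same document twice.
def scan(index, update_at_key=None, move_to=None):
    returned = []
    pos = float("-inf")
    while True:
        next_keys = sorted(k for k in index if k > pos)
        if not next_keys:
            break
        pos = next_keys[0]
        returned.append(index[pos])
        if pos == update_at_key:
            # Interleaved write: move the just-returned document to an index
            # key the cursor has not reached yet.
            del index[pos]
            index[move_to] = returned[-1]
    return returned

docs = {1: "doc_x", 5: "doc_y", 9: "doc_z"}
# doc_x is moved from key 1 to key 7 mid-scan, so it is returned twice.
assert scan(dict(docs), update_at_key=1, move_to=7) == ["doc_x", "doc_y", "doc_x", "doc_z"]
```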

Consider using read isolation. To learn more, see Read Concern "snapshot".

Monotonic Writes

MongoDB provides monotonic write guarantees, by default, for standalone mongod instances and replica sets.

For monotonic writes and sharded clusters, see Causal Consistency.

Writes that don't modify any documents are known as no-operation (no-op) writes. No-operation writes:

  • Occur if the filter for the write doesn't match any documents, or the matched documents are unchanged after applying the write.
  • Don't increase the optime value.
  • Return WriteResult.nModified equal to 0, which indicates that the write operation didn't modify any documents.

To ensure monotonicity with no-operation writes, use causal consistency guarantees. For all other writes, the monotonicity guarantees are described in the following sections.
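The matched-versus-modified distinction behind no-op writes can be modeled in a few lines (an illustrative in-memory sketch, not how the server implements it):

```python
# Illustrative in-memory model of matched vs. modified counts; the real
# counts come from the server, not driver-side logic like this.
def update_many(docs, filter_fields, set_fields):
    matched = modified = 0
    for doc in docs:
        if all(doc.get(k) == v for k, v in filter_fields.items()):
            matched += 1
            if any(doc.get(k) != v for k, v in set_fields.items()):
                doc.update(set_fields)
                modified += 1  # only a real change counts as modified
    return matched, modified

docs = [{"sku": "111", "qty": 5}]

# Filter matches nothing: a no-op write, nModified == 0.
assert update_many(docs, {"sku": "999"}, {"qty": 0}) == (0, 0)
# Filter matches, but the document is already in the target state: also a no-op.
assert update_many(docs, {"sku": "111"}, {"qty": 5}) == (1, 0)
# A real modification.
assert update_many(docs, {"sku": "111"}, {"qty": 7}) == (1, 1)
```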

Real Time Order

For read and write operations on the primary, issuing read operations with "linearizable" read concern and write operations with "majority" write concern enables multiple threads to perform reads and writes on a single document as if a single thread performed these operations in real time; that is, the corresponding schedule for these reads and writes is considered linearizable.

Causal Consistency

If an operation logically depends on a preceding operation, there is a causal relationship between the operations. For example, a write operation that deletes all documents based on a specified condition and a subsequent read operation that verifies the delete operation have a causal relationship.

With causally consistent sessions, MongoDB executes causal operations in an order that respects their causal relationships, and clients observe results that are consistent with the causal relationships.

Client Sessions and Causal Consistency Guarantees

To provide causal consistency, MongoDB enables causal consistency in client sessions. A causally consistent session denotes that the associated sequence of read operations with "majority" read concern and write operations with "majority" write concern have a causal relationship that is reflected by their ordering. Applications must ensure that only one thread at a time executes these operations in a client session.

For causally related operations:

  1. A client starts a client session.

    Important

    Client sessions only guarantee causal consistency for:

    • Read operations with "majority" read concern; i.e. the return data has been acknowledged by a majority of the replica set members and is durable.
    • Write operations with "majority" write concern; i.e. the write operations that request acknowledgment that the operation has been applied to a majority of the replica set's voting members.

    For more information on causal consistency and various read and write concerns, see Causal Consistency and Read and Write Concerns.

  2. As the client issues a sequence of read operations (with "majority" read concern) and write operations (with "majority" write concern), the client includes the session information with each operation.
  3. For each read operation with "majority" read concern and write operation with "majority" write concern associated with the session, MongoDB returns the operation time and the cluster time, even if the operation errors. The client session keeps track of the operation time and the cluster time.

    Note

    MongoDB does not return the operation time and the cluster time for unacknowledged (w: 0) write operations. Unacknowledged writes do not imply any causal relationship.

    Although MongoDB returns the operation time and the cluster time for read operations and acknowledged write operations in a client session, only the read operations with "majority" read concern and write operations with "majority" write concern can guarantee causal consistency. For details, see Causal Consistency and Read and Write Concerns.

  4. The associated client session tracks these two time fields.

    Note

    Operations can be causally consistent across different sessions. MongoDB drivers and mongosh provide methods to advance the operation time and the cluster time for a client session. So, a client can advance the cluster time and the operation time of one client session to be consistent with the operations of another client session.

Causal Consistency Guarantees

The following table lists the causal consistency guarantees provided by causally consistent sessions for read operations with "majority" read concern and write operations with "majority" write concern.

Read your writes

Read operations reflect the results of write operations that precede them.
Monotonic reads

Read operations do not return results that correspond to an earlier state of the data than a preceding read operation.

For example, if in a session:

  • write 1 precedes write 2,
  • read 1 precedes read 2, and
  • read 1 returns results that reflect write 2

then read 2 cannot return results of write 1.

Monotonic writes

Write operations that must precede other writes are executed before those other writes.

For example, if write 1 must precede write 2 in a session, the state of the data at the time of write 2 must reflect the state of the data post write 1. Other writes can interleave between write 1 and write 2, but write 2 cannot occur before write 1.

Writes follow reads

Write operations that must occur after read operations are executed after those read operations. That is, the state of the data at the time of the write must incorporate the state of the data of the preceding read operations.
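The monotonic reads guarantee can be expressed as a small checker over the data-state versions each read observes (illustrative Python, not driver code):

```python
# Illustrative checker: within a session, each read must observe a data
# state at least as new as the state observed by the preceding read.
def reads_are_monotonic(observed_versions):
    return all(a <= b for a, b in zip(observed_versions, observed_versions[1:]))

# Say write 1 produces state 1 and write 2 produces state 2. If read 1
# observed state 2, then read 2 returning only write 1's state violates
# the guarantee.
assert not reads_are_monotonic([2, 1])
assert reads_are_monotonic([2, 2])
```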

Read Preference

These guarantees hold across all members of the MongoDB deployment. For example, if, in a causally consistent session, you issue a write with "majority" write concern followed by a read that reads from a secondary (i.e. read preference secondary) with "majority" read concern, the read operation will reflect the state of the database after the write operation.

Isolation

Operations within a causally consistent session are not isolated from operations outside the session. If a concurrent write operation interleaves between the session's write and read operations, the session's read operation may return results that reflect a write operation that occurred after the session's write operation.

MongoDB Drivers

Tip

Applications must ensure that only one thread at a time executes these operations in a client session.

Clients require MongoDB drivers updated for MongoDB 3.6 or later:

  • Java 3.6+
  • Python 3.6+
  • C 1.9+
  • Go 1.8+
  • C# 2.5+
  • Node 3.0+
  • Ruby 2.5+
  • Rust 2.1+
  • Swift 1.2+
  • Perl 2.0+
  • PHPC 1.4+
  • Scala 2.2+
  • C++ 3.6.6+

Examples

Important

Causally consistent sessions can only guarantee causal consistency for reads with "majority" read concern and writes with "majority" write concern.

Consider a collection items that maintains the current and historical data for various items. Only the historical data has a non-null end date. If the sku value for an item changes, the document with the old sku value needs to be updated with the end date, after which the new document is inserted with the current sku value. The client can use a causally consistent session to ensure that the update occurs before the insert.




C
 /* Use a causally-consistent session to run some operations. */

wc = mongoc_write_concern_new ();
mongoc_write_concern_set_wmajority (wc, 1000);
mongoc_collection_set_write_concern (coll, wc);

rc = mongoc_read_concern_new ();
mongoc_read_concern_set_level (rc, MONGOC_READ_CONCERN_LEVEL_MAJORITY);
mongoc_collection_set_read_concern (coll, rc);

session_opts = mongoc_session_opts_new ();
mongoc_session_opts_set_causal_consistency (session_opts, true);

session1 = mongoc_client_start_session (client, session_opts, &error);
if (!session1) {
fprintf (stderr, "couldn't start session: %s\n", error.message);
goto cleanup;
}

/* Run an update_one with our causally-consistent session. */
update_opts = bson_new ();
res = mongoc_client_session_append (session1, update_opts, &error);
if (!res) {
fprintf (stderr, "couldn't add session to opts: %s\n", error.message);
goto cleanup;
}

query = BCON_NEW ("sku", "111");
update = BCON_NEW ("$set", "{", "end",
BCON_DATE_TIME (bson_get_monotonic_time ()), "}");
res = mongoc_collection_update_one (coll,
query,
update,
update_opts,
NULL, /* reply */
&error);

if (!res) {
fprintf (stderr, "update failed: %s\n", error.message);
goto cleanup;
}

/* Run an insert with our causally-consistent session */
insert_opts = bson_new ();
res = mongoc_client_session_append (session1, insert_opts, &error);
if (!res) {
fprintf (stderr, "couldn't add session to opts: %s\n", error.message);
goto cleanup;
}

insert = BCON_NEW ("sku", "nuts-111", "name", "Pecans",
"start", BCON_DATE_TIME (bson_get_monotonic_time ()));
res = mongoc_collection_insert_one (coll, insert, insert_opts, NULL, &error);
if (!res) {
fprintf (stderr, "insert failed: %s\n", error.message);
goto cleanup;
}
C#
using (var session1 = client.StartSession(new ClientSessionOptions { CausalConsistency = true }))
{
var currentDate = DateTime.UtcNow.Date;
var items = client.GetDatabase(
"test",
new MongoDatabaseSettings
{
ReadConcern = ReadConcern.Majority,
WriteConcern = new WriteConcern(
WriteConcern.WMode.Majority,
TimeSpan.FromMilliseconds(1000))
})
.GetCollection<BsonDocument>("items");

items.UpdateOne(session1,
Builders<BsonDocument>.Filter.And(
Builders<BsonDocument>.Filter.Eq("sku", "111"),
Builders<BsonDocument>.Filter.Eq("end", BsonNull.Value)),
Builders<BsonDocument>.Update.Set("end", currentDate));

items.InsertOne(session1, new BsonDocument
{
{"sku", "nuts-111"},
{"name", "Pecans"},
{"start", currentDate}
});
}
Java(Sync)
// Example 1: Use a causally consistent session to ensure that the update occurs before the insert.
ClientSession session1 = client.startSession(ClientSessionOptions.builder().causallyConsistent(true).build());
Date currentDate = new Date();
MongoCollection<Document> items = client.getDatabase("test")
.withReadConcern(ReadConcern.MAJORITY)
.withWriteConcern(WriteConcern.MAJORITY.withWTimeout(1000, TimeUnit.MILLISECONDS))
.getCollection("test");

items.updateOne(session1, eq("sku", "111"), set("end", currentDate));

Document document = new Document("sku", "nuts-111")
.append("name", "Pecans")
.append("start", currentDate);
items.insertOne(session1, document);
Motor
async with await client.start_session(causal_consistency=True) as s1:
    current_date = datetime.datetime.today()
    items = client.get_database(
        "test",
        read_concern=ReadConcern("majority"),
        write_concern=WriteConcern("majority", wtimeout=1000),
    ).items
    await items.update_one(
        {"sku": "111", "end": None}, {"$set": {"end": current_date}}, session=s1
    )
    await items.insert_one(
        {"sku": "nuts-111", "name": "Pecans", "start": current_date}, session=s1
    )
PHP
$items = $client->selectDatabase(
'test',
[
'readConcern' => new \MongoDB\Driver\ReadConcern(\MongoDB\Driver\ReadConcern::MAJORITY),
'writeConcern' => new \MongoDB\Driver\WriteConcern(\MongoDB\Driver\WriteConcern::MAJORITY, 1000),
],
)->items;

$s1 = $client->startSession(
['causalConsistency' => true],
);

$currentDate = new \MongoDB\BSON\UTCDateTime();

$items->updateOne(
['sku' => '111', 'end' => ['$exists' => false]],
['$set' => ['end' => $currentDate]],
['session' => $s1],
);
$items->insertOne(
['sku' => '111-nuts', 'name' => 'Pecans', 'start' => $currentDate],
['session' => $s1],
);
Python
with client.start_session(causal_consistency=True) as s1:
    current_date = datetime.datetime.today()
    items = client.get_database(
        "test",
        read_concern=ReadConcern("majority"),
        write_concern=WriteConcern("majority", wtimeout=1000),
    ).items
    items.update_one(
        {"sku": "111", "end": None}, {"$set": {"end": current_date}}, session=s1
    )
    items.insert_one(
        {"sku": "nuts-111", "name": "Pecans", "start": current_date}, session=s1
    )
Swift(Async)
let s1 = client1.startSession(options: ClientSessionOptions(causalConsistency: true))
let currentDate = Date()
var dbOptions = MongoDatabaseOptions(
readConcern: .majority,
writeConcern: try .majority(wtimeoutMS: 1000)
)
let items = client1.db("test", options: dbOptions).collection("items")
let result1 = items.updateOne(
filter: ["sku": "111", "end": .null],
update: ["$set": ["end": .datetime(currentDate)]],
session: s1
).flatMap { _ in
items.insertOne(["sku": "nuts-111", "name": "Pecans", "start": .datetime(currentDate)], session: s1)
}
Swift(Sync)
let s1 = client1.startSession(options: ClientSessionOptions(causalConsistency: true))
let currentDate = Date()
var dbOptions = MongoDatabaseOptions(
readConcern: .majority,
writeConcern: try .majority(wtimeoutMS: 1000)
)
let items = client1.db("test", options: dbOptions).collection("items")
try items.updateOne(
filter: ["sku": "111", "end": .null],
update: ["$set": ["end": .datetime(currentDate)]],
session: s1
)
try items.insertOne(["sku": "nuts-111", "name": "Pecans", "start": .datetime(currentDate)], session: s1)

If another client needs to read all current sku values, you can advance the cluster time and the operation time to that of the other session to ensure that this client is causally consistent with the other session and read after the two writes:

C
 /* Make a new session, session2, and make it causally-consistent
* with session1, so that session2 will read session1's writes. */
session2 = mongoc_client_start_session (client, session_opts, &error);
if (!session2) {
fprintf (stderr, "couldn't start session: %s\n", error.message);
goto cleanup;
}

/* Set the cluster time for session2 to session1's cluster time */
cluster_time = mongoc_client_session_get_cluster_time (session1);
mongoc_client_session_advance_cluster_time (session2, cluster_time);

/* Set the operation time for session2 to session1's operation time */
mongoc_client_session_get_operation_time (session1, &timestamp, &increment);
mongoc_client_session_advance_operation_time (session2,
timestamp,
increment);

/* Run a find on session2, which should now find all writes done
* inside of session1 */
find_opts = bson_new ();
res = mongoc_client_session_append (session2, find_opts, &error);
if (!res) {
fprintf (stderr, "couldn't add session to opts: %s\n", error.message);
goto cleanup;
}

find_query = BCON_NEW ("end", BCON_NULL);
read_prefs = mongoc_read_prefs_new (MONGOC_READ_SECONDARY);
cursor = mongoc_collection_find_with_opts (coll,
query,
find_opts,
read_prefs);

while (mongoc_cursor_next (cursor, &result)) {
json = bson_as_relaxed_extended_json (result, NULL);
fprintf (stdout, "Document: %s\n", json);
bson_free (json);
}

if (mongoc_cursor_error (cursor, &error)) {
fprintf (stderr, "cursor failure: %s\n", error.message);
goto cleanup;
}
C#
using (var session2 = client.StartSession(new ClientSessionOptions { CausalConsistency = true }))
{
session2.AdvanceClusterTime(session1.ClusterTime);
session2.AdvanceOperationTime(session1.OperationTime);

var items = client.GetDatabase(
"test",
new MongoDatabaseSettings
{
ReadPreference = ReadPreference.Secondary,
ReadConcern = ReadConcern.Majority,
WriteConcern = new WriteConcern(WriteConcern.WMode.Majority, TimeSpan.FromMilliseconds(1000))
})
.GetCollection<BsonDocument>("items");

var filter = Builders<BsonDocument>.Filter.Eq("end", BsonNull.Value);
foreach (var item in items.Find(session2, filter).ToEnumerable())
{
// process item
}
}
Java(Sync)
// Example 2: Advance the cluster time and the operation time to that of the other session to ensure that
// this client is causally consistent with the other session and read after the two writes.
ClientSession session2 = client.startSession(ClientSessionOptions.builder().causallyConsistent(true).build());
session2.advanceClusterTime(session1.getClusterTime());
session2.advanceOperationTime(session1.getOperationTime());

items = client.getDatabase("test")
.withReadPreference(ReadPreference.secondary())
.withReadConcern(ReadConcern.MAJORITY)
.withWriteConcern(WriteConcern.MAJORITY.withWTimeout(1000, TimeUnit.MILLISECONDS))
.getCollection("items");

for (Document item: items.find(session2, eq("end", BsonNull.VALUE))) {
System.out.println(item);
}
Motor
async with await client.start_session(causal_consistency=True) as s2:
    s2.advance_cluster_time(s1.cluster_time)
    s2.advance_operation_time(s1.operation_time)

    items = client.get_database(
        "test",
        read_preference=ReadPreference.SECONDARY,
        read_concern=ReadConcern("majority"),
        write_concern=WriteConcern("majority", wtimeout=1000),
    ).items
    async for item in items.find({"end": None}, session=s2):
        print(item)
PHP
$s2 = $client->startSession(
['causalConsistency' => true],
);
$s2->advanceClusterTime($s1->getClusterTime());
$s2->advanceOperationTime($s1->getOperationTime());

$items = $client->selectDatabase(
'test',
[
'readPreference' => new \MongoDB\Driver\ReadPreference(\MongoDB\Driver\ReadPreference::SECONDARY),
'readConcern' => new \MongoDB\Driver\ReadConcern(\MongoDB\Driver\ReadConcern::MAJORITY),
'writeConcern' => new \MongoDB\Driver\WriteConcern(\MongoDB\Driver\WriteConcern::MAJORITY, 1000),
],
)->items;

$result = $items->find(
['end' => ['$exists' => false]],
['session' => $s2],
);
foreach ($result as $item) {
var_dump($item);
}
Python
with client.start_session(causal_consistency=True) as s2:
    s2.advance_cluster_time(s1.cluster_time)
    s2.advance_operation_time(s1.operation_time)

    items = client.get_database(
        "test",
        read_preference=ReadPreference.SECONDARY,
        read_concern=ReadConcern("majority"),
        write_concern=WriteConcern("majority", wtimeout=1000),
    ).items
    for item in items.find({"end": None}, session=s2):
        print(item)
Swift(Async)
let options = ClientSessionOptions(causalConsistency: true)
let result2: EventLoopFuture<Void> = client2.withSession(options: options) { s2 in
// The cluster and operation times are guaranteed to be non-nil since we already used s1 for operations above.
s2.advanceClusterTime(to: s1.clusterTime!)
s2.advanceOperationTime(to: s1.operationTime!)

dbOptions.readPreference = .secondary
let items2 = client2.db("test", options: dbOptions).collection("items")

return items2.find(["end": .null], session: s2).flatMap { cursor in
cursor.forEach { item in
print(item)
}
}
}
Swift(Sync)
try client2.withSession(options: ClientSessionOptions(causalConsistency: true)) { s2 in
// The cluster and operation times are guaranteed to be non-nil since we already used s1 for operations above.
s2.advanceClusterTime(to: s1.clusterTime!)
s2.advanceOperationTime(to: s1.operationTime!)

dbOptions.readPreference = .secondary
let items2 = client2.db("test", options: dbOptions).collection("items")
for item in try items2.find(["end": .null], session: s2) {
print(item)
}
}

Limitations

The following operations that build in-memory structures are not causally consistent:

  • collStats
  • $collStats with latencyStats option
  • $currentOp: returns an error if the operation is associated with a causally consistent client session.
  • createIndexes
  • dbHash
  • dbStats
  • getMore: returns an error if the operation is associated with a causally consistent client session.
  • $indexStats
  • mapReduce
  • ping: returns an error if the operation is associated with a causally consistent client session.
  • serverStatus: returns an error if the operation is associated with a causally consistent client session.
  • validate