Tip
Starting in version 3.2, MongoDB also provides the db.collection.bulkWrite() method for performing bulk write operations.
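For comparison, here is a minimal sketch of the same kind of work expressed with db.collection.bulkWrite(); the items collection and the document fields are illustrative assumptions, not taken from this page:

```javascript
// Illustrative operations against a hypothetical "items" collection.
db.items.bulkWrite( [
   { insertOne: { document: { item: "abc123", qty: 100 } } },
   { updateOne: { filter: { item: "abc123" }, update: { $set: { qty: 150 } } } },
   { deleteOne: { filter: { item: "obsolete" } } }
] );
```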
Bulk.execute()

Executes the list of operations built by the Bulk() operations builder.
Bulk.execute() accepts the following parameter:

| Parameter | Type | Description |
| --- | --- | --- |
| writeConcern | document | Optional. A document expressing the write concern for the bulk operation as a whole. |

Returns: A BulkWriteResult object that contains the status of the operation.
After execution, you cannot re-execute the Bulk() object without reinitializing. See db.collection.initializeUnorderedBulkOp() and db.collection.initializeOrderedBulkOp().
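As a minimal sketch of reinitializing rather than reusing an executed builder (the items collection and the documents are illustrative):

```javascript
var bulk = db.items.initializeUnorderedBulkOp();
bulk.insert( { item: "abc123", status: "A" } );
bulk.execute();

// The executed builder cannot be run again; build a fresh one for further operations.
bulk = db.items.initializeUnorderedBulkOp();
bulk.insert( { item: "def456", status: "A" } );
bulk.execute();
```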
When executing an ordered list of operations, MongoDB groups the operations by the operation type and contiguity; i.e. contiguous operations of the same type are grouped together. For example, if an ordered list has two insert operations followed by an update operation followed by another insert operation, MongoDB groups the operations into three separate groups: the first group contains the two insert operations, the second group contains the update operation, and the third group contains the last insert operation. This behavior is subject to change in future versions.
Each group of operations can have at most 1000 operations. If a group exceeds this limit, MongoDB will divide the group into smaller groups of 1000 or less. For example, if the bulk operations list consists of 2000 insert operations, MongoDB creates 2 groups, each with 1000 operations.

The sizes and grouping mechanics are internal performance details and are subject to change in future versions.

To see how the operations are grouped for a bulk operation execution, call Bulk.getOperations() after the execution.
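As an illustrative sketch of the ordered grouping described above (the collection name and documents are assumptions for the example): the two leading inserts form one group, the update a second, and the trailing insert a third.

```javascript
var bulk = db.items.initializeOrderedBulkOp();
bulk.insert( { item: "abc", qty: 100 } );
bulk.insert( { item: "def", qty: 200 } );
bulk.find( { item: "abc" } ).update( { $set: { qty: 150 } } );
bulk.insert( { item: "xyz", qty: 300 } );
bulk.execute();

// Inspect how the four operations were grouped for execution.
bulk.getOperations();
```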
Executing an ordered list of operations on a sharded collection will generally be slower than executing an unordered list, since with an ordered list each operation must wait for the previous operation to finish.
When executing an unordered list of operations, MongoDB groups the operations. With an unordered bulk operation, the operations in the list may be reordered to increase performance. As such, applications should not depend on the ordering when performing unordered bulk operations.
Each group of operations can have at most 1000 operations. If a group exceeds this limit, MongoDB will divide the group into smaller groups of 1000 or less. For example, if the bulk operations list consists of 2000 insert operations, MongoDB creates 2 groups, each with 1000 operations.

The sizes and grouping mechanics are internal performance details and are subject to change in future versions.

To see how the operations are grouped for a bulk operation execution, call Bulk.getOperations() after the execution.
Bulk() can be used inside multi-document transactions.
For Bulk.insert() operations, the collection must already exist.
For Bulk.find.upsert(), if the operation results in an upsert, the collection must already exist.
Do not explicitly set the write concern for the operation if run in a transaction. To use write concern with transactions, see Transactions and Write Concern.
Important
In most cases, a multi-document transaction incurs a greater performance cost than single-document writes, and the availability of multi-document transactions should not be a replacement for effective schema design. For many scenarios, the denormalized data model (embedded documents and arrays) will continue to be optimal for your data and use cases. That is, for many scenarios, modeling your data appropriately will minimize the need for multi-document transactions.
For additional transactions usage considerations (such as runtime limit and oplog size limit), see also Production Considerations.
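A minimal sketch of running a Bulk() operation inside a transaction, assuming a replica set deployment and an existing items collection in a test database (the names are illustrative); note that no write concern is passed to execute():

```javascript
var session = db.getMongo().startSession();
session.startTransaction();

var coll = session.getDatabase("test").items;   // the collection must already exist
var bulk = coll.initializeUnorderedBulkOp();
bulk.insert( { item: "abc123", status: "A" } );
bulk.execute();                                 // no explicit write concern inside a transaction

session.commitTransaction();
session.endSession();
```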
The following initializes a Bulk() operations builder on the items collection, adds a series of insert operations, and executes the operations:
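A minimal sketch of such a sequence (the specific documents shown here are illustrative):

```javascript
var bulk = db.items.initializeUnorderedBulkOp();
bulk.insert( { item: "abc123", defaultQty: 100, status: "A", points: 100 } );
bulk.insert( { item: "ijk123", defaultQty: 200, status: "A", points: 200 } );
bulk.insert( { item: "mop123", defaultQty: 0, status: "P", points: 0 } );
bulk.execute();
```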
The operation returns the following BulkWriteResult() object:
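With the three inserts sketched above, the returned object looks roughly as follows (the counts depend on the operations actually queued):

```javascript
BulkWriteResult({
   "writeErrors" : [ ],
   "writeConcernErrors" : [ ],
   "nInserted" : 3,
   "nUpserted" : 0,
   "nMatched" : 0,
   "nModified" : 0,
   "nRemoved" : 0,
   "upserted" : [ ]
})
```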
For details on the return object, see BulkWriteResult(). For details on the batches executed, see Bulk.getOperations().
The following operation to a replica set specifies a write concern of "w: majority" with a wtimeout of 5000 milliseconds, such that the method returns after the writes propagate to a majority of the voting replica set members or the method times out after 5 seconds.
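A minimal sketch of such a call (the documents are illustrative; the write concern document passed to execute() is the relevant part):

```javascript
var bulk = db.items.initializeUnorderedBulkOp();
bulk.insert( { item: "efg123", defaultQty: 100, status: "A", points: 0 } );
bulk.insert( { item: "xyz123", defaultQty: 100, status: "A", points: 0 } );
bulk.execute( { w: "majority", wtimeout: 5000 } );
```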
The operation returns the following BulkWriteResult() object:
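For the two inserts sketched above, the returned object looks roughly as follows (again, the counts depend on the operations queued):

```javascript
BulkWriteResult({
   "writeErrors" : [ ],
   "writeConcernErrors" : [ ],
   "nInserted" : 2,
   "nUpserted" : 0,
   "nMatched" : 0,
   "nModified" : 0,
   "nRemoved" : 0,
   "upserted" : [ ]
})
```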
See Bulk() for a listing of methods available for bulk operations.