db.collection.bulkWrite()
Definition
db.collection.bulkWrite()
Important: mongosh Method

This page documents a mongosh method. This is not the documentation for a language-specific driver, such as Node.js. For MongoDB API drivers, refer to the language-specific MongoDB driver documentation.
Performs multiple write operations with controls for order of execution.

db.collection.bulkWrite() has the following syntax:

db.collection.bulkWrite(
[ <operation 1>, <operation 2>, ... ],
{
writeConcern : <document>,
ordered : <boolean>
}
)

| Parameter | Type | Description |
| operations | array | An array of bulkWrite() write operations. Valid operations are: insertOne, updateOne, updateMany, replaceOne, deleteOne, deleteMany. See Write Operations for usage of each operation. |
| writeConcern | document | Optional. A document expressing the write concern. Omit to use the default write concern. Do not explicitly set the write concern for the operation if run in a transaction. To use write concern with transactions, see Transactions and Write Concern. |
| ordered | boolean | Optional. A boolean specifying whether the mongod instance should perform an ordered or unordered operation execution. Defaults to true. See Execution of Operations. |

Returns:
- A boolean acknowledged as true if the operation ran with write concern, or false if write concern was disabled.
- A count for each write operation.
- An array containing an _id for each successfully inserted or upserted document.
Behavior

bulkWrite() takes an array of write operations and executes each of them. By default, operations are executed in order. See Execution of Operations for controlling the order of write operation execution.
Write Operations

insertOne

Inserts a single document into the collection.
db.collection.bulkWrite( [
{ insertOne : { "document" : <document> } }
] )
See db.collection.insertOne().
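For illustration, a minimal sketch that batches two inserts in one call (the movies collection and its documents are hypothetical, not part of the reference above):

db.movies.bulkWrite( [
   // Each insertOne entry wraps the new document in a "document" field.
   { insertOne : { "document" : { "title" : "The Favourite", "year" : 2018 } } },
   { insertOne : { "document" : { "title" : "Roma", "year" : 2018 } } }
] )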
updateOne and updateMany

updateOne updates a single document in the collection that matches the filter. If multiple documents match, updateOne will update the first matching document only.
db.collection.bulkWrite( [
{ updateOne :
{
"filter": <document>,
"update": <document or pipeline>, // Changed in 4.2
"upsert": <boolean>,
"collation": <document>, // Available starting in 3.4
"arrayFilters": [ <filterdocument1>, ... ], // Available starting in 3.6
"hint": <document|string> // Available starting in 4.2.1
}
}
] )
updateMany updates all documents in the collection that match the filter.
db.collection.bulkWrite( [
{ updateMany :
{
"filter" : <document>,
"update" : <document or pipeline>, // Changed in MongoDB 4.2
"upsert" : <boolean>,
"collation": <document>, // Available starting in 3.4
"arrayFilters": [ <filterdocument1>, ... ], // Available starting in 3.6
"hint": <document|string> // Available starting in 4.2.1
}
}
] )
| filter | The selection criteria for the update. The same query selectors as in the db.collection.find() method are available. |
| update | The modifications to apply. Can be an update document or, starting in MongoDB 4.2, an aggregation pipeline. |
| upsert | Optional. A boolean indicating whether to perform an upsert. By default, upsert is false. |
| arrayFilters | Optional. An array of filter documents that determine which array elements to modify for an update operation on an array field. |
| collation | Optional. Specifies the collation to use for the operation. |
| hint | Optional. The index to use to support the filter. If you specify an index that does not exist, the operation errors. |
For details, see db.collection.updateOne() and db.collection.updateMany().
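As an illustrative sketch of how these fields combine (the inventory collection, its documents, and the grades array are hypothetical):

db.inventory.bulkWrite( [
   { updateOne : {
      // upsert: insert the document if no document matches the filter.
      "filter" : { "sku" : "abc123" },
      "update" : { $set : { "status" : "backordered" } },
      "upsert" : true
   } },
   { updateMany : {
      // arrayFilters: raise every grade below 60 in the grades array.
      "filter" : { "grades" : { $exists : true } },
      "update" : { $set : { "grades.$[g]" : 60 } },
      "arrayFilters" : [ { "g" : { $lt : 60 } } ]
   } }
] )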
replaceOne

replaceOne replaces a single document in the collection that matches the filter. If multiple documents match, replaceOne will replace the first matching document only.
db.collection.bulkWrite([
{ replaceOne :
{
"filter" : <document>,
"replacement" : <document>,
"upsert" : <boolean>,
"collation": <document>, // Available starting in 3.4
"hint": <document|string> // Available starting in 4.2.1
}
}
] )
| filter | The selection criteria for the replacement. The same query selectors as in the db.collection.find() method are available. |
| replacement | The replacement document. Cannot contain update operators. |
| upsert | Optional. A boolean indicating whether to perform an upsert. By default, upsert is false. |
| collation | Optional. Specifies the collation to use for the operation. |
| hint | Optional. The index to use to support the filter. If you specify an index that does not exist, the operation errors. |
For details, see db.collection.replaceOne().
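A brief sketch of replaceOne with upsert (the accounts collection and its fields are hypothetical):

db.accounts.bulkWrite( [
   { replaceOne : {
      // The replacement document fully replaces the matched document;
      // with upsert: true it is inserted if no document matches.
      "filter" : { "account_id" : 12345 },
      "replacement" : { "account_id" : 12345, "status" : "active", "balance" : 0 },
      "upsert" : true
   } }
] )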
deleteOne and deleteMany

deleteOne deletes a single document in the collection that matches the filter. If multiple documents match, deleteOne will delete the first matching document only.
db.collection.bulkWrite([
{ deleteOne : {
"filter" : <document>,
"collation" : <document> // Available starting in 3.4
} }
] )
deleteMany deletes all documents in the collection that match the filter.
db.collection.bulkWrite([
{ deleteMany: {
"filter" : <document>,
"collation" : <document> // Available starting in 3.4
} }
] )
| filter | The selection criteria for the delete. The same query selectors as in the db.collection.find() method are available. |
| collation | Optional. Specifies the collation to use for the operation. |
For details, see db.collection.deleteOne() and db.collection.deleteMany().
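A minimal sketch of deleteMany with a collation (the music collection, its documents, and the locale are hypothetical):

db.music.bulkWrite( [
   { deleteMany : {
      // strength: 2 makes the comparison case-insensitive, so "Jazz" and "jazz" both match.
      "filter" : { "genre" : "jazz" },
      "collation" : { "locale" : "en", "strength" : 2 }
   } }
] )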
_id Field

If the document does not specify an _id field, then mongod adds the _id field and assigns a unique ObjectId() for the document before inserting or upserting it. Most drivers create an ObjectId and insert the _id field, but the mongod will create and populate the _id if the driver or application does not.

If the document contains an _id field, the _id value must be unique within the collection to avoid a duplicate key error.

Update or replace operations cannot specify an _id value that differs from the original document.
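A quick sketch of the automatic _id assignment (the gadgets collection is hypothetical; the generated ObjectId differs on every run):

const res = db.gadgets.bulkWrite( [
   // No _id supplied: the driver or mongod generates an ObjectId.
   { insertOne : { "document" : { "name" : "widget" } } },
   // An explicit _id is kept as-is and must be unique within the collection.
   { insertOne : { "document" : { "_id" : 100, "name" : "gizmo" } } }
] )
res.insertedIds   // e.g. { '0': ObjectId("..."), '1': 100 }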
Execution of Operations

The ordered parameter specifies whether bulkWrite() will execute operations in order or not. By default, operations are executed in order.

The following code represents a bulkWrite() with six operations.
db.collection.bulkWrite(
[
{ insertOne : <document> },
{ updateOne : <document> },
{ updateMany : <document> },
{ replaceOne : <document> },
{ deleteOne : <document> },
{ deleteMany : <document> }
]
)
In the default ordered : true state, each operation will be executed in order, from the first operation insertOne to the last operation deleteMany.
If ordered is set to false, operations may be reordered by mongod to increase performance. Applications should not depend on order of operation execution.
The following code represents an unordered bulkWrite() with six operations:
db.collection.bulkWrite(
[
{ insertOne : <document> },
{ updateOne : <document> },
{ updateMany : <document> },
{ replaceOne : <document> },
{ deleteOne : <document> },
{ deleteMany : <document> }
],
{ ordered : false }
)
With ordered : false, the results of the operation may vary. For example, the deleteOne or deleteMany may remove more or fewer documents depending on whether they run before or after the insertOne, updateOne, updateMany, or replaceOne operations.
The number of operations in each group cannot exceed the value of the maxWriteBatchSize of the database. The default value of maxWriteBatchSize is 100,000. This value is shown in the hello.maxWriteBatchSize field.
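For example, you can read this limit in mongosh (the exact value depends on your deployment and server version):

db.hello().maxWriteBatchSize   // typically 100000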
This limit prevents issues with oversized error messages. If a group exceeds this limit, the client driver divides the group into smaller groups with counts less than or equal to the value of the limit. For example, with the maxWriteBatchSize value of 100,000, if the queue consists of 200,000 operations, the driver creates 2 groups, each with 100,000 operations.
The driver only divides the group into smaller groups when using the high-level API. If using db.runCommand() directly (for example, when writing a driver), MongoDB throws an error when attempting to execute a write batch which exceeds the limit.
If the error report for a single batch grows too large, MongoDB truncates all remaining error messages to the empty string. If there are at least two error messages with total size greater than 1MB, they are truncated.
The sizes and grouping mechanics are internal performance details and are subject to change in future versions.
Executing an ordered list of operations on a sharded collection will generally be slower than executing an unordered list, since with an ordered list each operation must wait for the previous operation to finish.
Capped Collections

bulkWrite() write operations have restrictions when used on a capped collection.
updateOne and updateMany throw a WriteError if the update criteria increases the size of the document being modified.
replaceOne throws a WriteError if the replacement document has a larger size than the original document.
deleteOne and deleteMany throw a WriteError if used on a capped collection.
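An illustrative sketch of the delete restriction (the log collection and its size are hypothetical; the second operation fails with a WriteError as described above):

db.createCollection( "log", { capped: true, size: 4096 } )
db.log.bulkWrite( [
   { insertOne : { "document" : { "msg" : "started" } } },
   // Deletes are not allowed on a capped collection, so this entry raises a WriteError.
   { deleteOne : { "filter" : { "msg" : "started" } } }
] )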
Time Series Collections

bulkWrite() write operations have restrictions when used on a time series collection. Only insertOne can be used on time series collections. All other operations will return a WriteError.
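A minimal sketch of inserting into a time series collection through bulkWrite() (the weather collection, its timeField/metaField, and the sample measurement are hypothetical):

db.createCollection( "weather", { timeseries: { timeField: "timestamp", metaField: "sensorId" } } )
db.weather.bulkWrite( [
   // insertOne is the only bulkWrite operation allowed on a time series collection.
   { insertOne : { "document" : { "sensorId" : 5578, "timestamp" : new Date(), "temp" : 12 } } }
] )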
Error Handling

db.collection.bulkWrite() throws a BulkWriteError exception on errors (unless the operation is part of a transaction on MongoDB 4.0). See Error Handling inside Transactions.
Excluding write concern errors, ordered operations stop after an error, while unordered operations continue to process any remaining write operations in the queue, unless run inside a transaction. See Error Handling inside Transactions.
Write concern errors are displayed in the writeConcernErrors field, while all other errors are displayed in the writeErrors field. If an error is encountered, the number of successful write operations is displayed instead of the inserted _id values. Ordered operations display the single error encountered, while unordered operations display each error in an array.
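As a sketch of inspecting these fields from a try/catch block (the pizzas document is borrowed from the examples below; the writeErrors property matches the error output shown there):

try {
   db.pizzas.bulkWrite( [
      { insertOne: { document: { _id: 0, type: "margherita" } } }   // fails if _id 0 already exists
   ] )
} catch ( error ) {
   // Operation-level failures (such as duplicate keys) appear under writeErrors;
   // write concern failures would appear under writeConcernErrors instead.
   printjson( error.writeErrors )
}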
Transactions

db.collection.bulkWrite() can be used inside multi-document transactions.

In most cases, a multi-document transaction incurs a greater performance cost than single-document writes, and the availability of multi-document transactions should not be a replacement for effective schema design. For many scenarios, the denormalized data model (embedded documents and arrays) will continue to be optimal for your data and use cases. That is, for many scenarios, modeling your data appropriately will minimize the need for multi-document transactions.

For additional transactions usage considerations (such as runtime limit and oplog size limit), see also Production Considerations.
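A minimal sketch of running bulkWrite() inside a transaction from mongosh (the session pattern shown is one common approach; the orders collection is hypothetical, and the write concern is set on the transaction rather than on the operation, per the note below):

const session = db.getMongo().startSession()
const orders = session.getDatabase( "test" ).orders
session.startTransaction( { writeConcern: { w: "majority" } } )
try {
   // No per-operation write concern: the transaction's write concern applies.
   orders.bulkWrite( [
      { insertOne: { document: { _id: 1, item: "abc", qty: 10 } } },
      { updateOne: { filter: { _id: 1 }, update: { $set: { status: "shipped" } } } }
   ] )
   session.commitTransaction()
} catch ( error ) {
   session.abortTransaction()
   throw error
} finally {
   session.endSession()
}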
Inserts and Upserts within Transactions

For feature compatibility version (fcv) "4.4" and greater, if an insert operation or update operation with upsert: true is run in a transaction against a non-existing collection, the collection is implicitly created.

You cannot create new collections in cross-shard write transactions. For example, if you write to an existing collection in one shard and implicitly create a collection in a different shard, MongoDB cannot perform both operations in the same transaction.
For fcv "4.2" or less, the collection must already exist for insert and upsert: true operations.
Write Concerns and Transactions

Do not explicitly set the write concern for the operation if run in a transaction. To use write concern with transactions, see Transactions and Write Concern.
Error Handling inside Transactions

Starting in MongoDB 4.2, if a db.collection.bulkWrite() operation encounters an error inside a transaction, the method throws a BulkWriteException (same as outside a transaction).
In MongoDB 4.0, if the bulkWrite operation encounters an error inside a transaction, the error thrown is not wrapped as a BulkWriteException.
Inside a transaction, the first error in a bulk write causes the entire bulk write to fail and aborts the transaction, even if the bulk write is unordered.
Examples

Ordered Bulk Write Example

It is important that you understand bulkWrite() operation ordering and error handling. By default, bulkWrite() runs an ordered list of operations:
- Operations run serially.
- If an operation has an error, that operation and any following operations are not run.
- Operations listed before the error operation are completed.
The bulkWrite() examples use the pizzas collection:
db.pizzas.insertMany( [
{ _id: 0, type: "pepperoni", size: "small", price: 4 },
{ _id: 1, type: "cheese", size: "medium", price: 7 },
{ _id: 2, type: "vegan", size: "large", price: 8 }
] )
The following bulkWrite() example runs these operations on the pizzas collection:
- Adds two documents using insertOne.
- Updates a document using updateOne.
- Deletes a document using deleteOne.
- Replaces a document using replaceOne.
try {
db.pizzas.bulkWrite( [
{ insertOne: { document: { _id: 3, type: "beef", size: "medium", price: 6 } } },
{ insertOne: { document: { _id: 4, type: "sausage", size: "large", price: 10 } } },
{ updateOne: {
filter: { type: "cheese" },
update: { $set: { price: 8 } }
} },
{ deleteOne: { filter: { type: "pepperoni"} } },
{ replaceOne: {
filter: { type: "vegan" },
replacement: { type: "tofu", size: "small", price: 4 }
} }
] )
} catch( error ) {
print( error )
}
Example output, which includes a summary of the completed operations:
{
acknowledged: true,
insertedCount: 2,
insertedIds: { '0': 3, '1': 4 },
matchedCount: 2,
modifiedCount: 2,
deletedCount: 1,
upsertedCount: 0,
upsertedIds: {}
}
If the collection already contained a document with an _id of 4 before running the previous bulkWrite() example, the following duplicate key exception is returned for the second insertOne operation:
writeErrors: [
WriteError {
err: {
index: 1,
code: 11000,
errmsg: 'E11000 duplicate key error collection: test.pizzas index: _id_ dup key: { _id: 4 }',
op: { _id: 4, type: 'sausage', size: 'large', price: 10 }
}
}
],
result: BulkWriteResult {
result: {
ok: 1,
writeErrors: [
WriteError {
err: {
index: 1,
code: 11000,
errmsg: 'E11000 duplicate key error collection: test.pizzas index: _id_ dup key: { _id: 4 }',
op: { _id: 4, type: 'sausage', size: 'large', price: 10 }
}
}
],
writeConcernErrors: [],
insertedIds: [ { index: 0, _id: 3 }, { index: 1, _id: 4 } ],
nInserted: 1,
nUpserted: 0,
nMatched: 0,
nModified: 0,
nRemoved: 0,
upserted: []
}
}
Because the bulkWrite() example is ordered, only the first insertOne operation is completed.
To complete all operations that do not have errors, run bulkWrite() with ordered set to false. For an example, see the following section.
Unordered Bulk Write Example

To specify an unordered bulkWrite(), set ordered to false.
In an unordered bulkWrite() list of operations:
- Operations can run in parallel (not guaranteed). For details, see Ordered vs Unordered Operations.
- Operations with errors are not completed.
- All operations without errors are completed.
Continuing the pizzas collection example, drop and recreate the collection:
db.pizzas.insertMany( [
{ _id: 0, type: "pepperoni", size: "small", price: 4 },
{ _id: 1, type: "cheese", size: "medium", price: 7 },
{ _id: 2, type: "vegan", size: "large", price: 8 }
] )
In the following example:
- bulkWrite() runs unordered operations on the pizzas collection.
- The second insertOne operation has the same _id as the first insertOne, which causes a duplicate key error.
try {
db.pizzas.bulkWrite( [
{ insertOne: { document: { _id: 3, type: "beef", size: "medium", price: 6 } } },
{ insertOne: { document: { _id: 3, type: "sausage", size: "large", price: 10 } } },
{ updateOne: {
filter: { type: "cheese" },
update: { $set: { price: 8 } }
} },
{ deleteOne: { filter: { type: "pepperoni"} } },
{ replaceOne: {
filter: { type: "vegan" },
replacement: { type: "tofu", size: "small", price: 4 }
} }
],
{ ordered: false } )
} catch( error ) {
print( error )
}
Example output, which includes the duplicate key error and a summary of the completed operations:
writeErrors: [
WriteError {
err: {
index: 1,
code: 11000,
errmsg: 'E11000 duplicate key error collection: test.pizzas index: _id_ dup key: { _id: 3 }',
op: { _id: 3, type: 'sausage', size: 'large', price: 10 }
}
}
],
result: BulkWriteResult {
result: {
ok: 1,
writeErrors: [
WriteError {
err: {
index: 1,
code: 11000,
errmsg: 'E11000 duplicate key error collection: test.pizzas index: _id_ dup key: { _id: 3 }',
op: { _id: 3, type: 'sausage', size: 'large', price: 10 }
}
}
],
writeConcernErrors: [],
insertedIds: [ { index: 0, _id: 3 }, { index: 1, _id: 3 } ],
nInserted: 1,
nUpserted: 0,
nMatched: 2,
nModified: 2,
nRemoved: 1,
upserted: []
}
}
The second insertOne operation fails because of the duplicate key error. In an unordered bulkWrite(), any operation without an error is completed.
Bulk Write with Write Concern Example

Continuing the pizzas collection example, drop and recreate the collection:
db.pizzas.insertMany( [
{ _id: 0, type: "pepperoni", size: "small", price: 4 },
{ _id: 1, type: "cheese", size: "medium", price: 7 },
{ _id: 2, type: "vegan", size: "large", price: 8 }
] )
The following bulkWrite() example runs operations on the pizzas collection and sets a "majority" write concern with a 100 millisecond timeout:
try {
db.pizzas.bulkWrite( [
{ updateMany: {
filter: { size: "medium" },
update: { $inc: { price: 0.1 } }
} },
{ updateMany: {
filter: { size: "small" },
update: { $inc: { price: -0.25 } }
} },
{ deleteMany: { filter: { size: "large" } } },
{ insertOne: {
document: { _id: 4, type: "sausage", size: "small", price: 12 }
} } ],
{ writeConcern: { w: "majority", wtimeout: 100 } }
)
} catch( error ) {
print( error )
}
If the time for the majority of replica set members to acknowledge the operations exceeds wtimeout, the example returns a write concern error and a summary of completed operations:
result: BulkWriteResult {
result: {
ok: 1,
writeErrors: [],
writeConcernErrors: [
WriteConcernError {
err: {
code: 64,
codeName: 'WriteConcernFailed',
errmsg: 'waiting for replication timed out',
errInfo: { wtimeout: true, writeConcern: [Object] }
}
}
],
insertedIds: [ { index: 3, _id: 4 } ],
nInserted: 0,
nUpserted: 0,
nMatched: 2,
nModified: 2,
nRemoved: 0,
upserted: [],
opTime: { ts: Timestamp({ t: 1660329086, i: 2 }), t: Long("1") }
}
}