Definition

db.collection.insertMany()

Inserts multiple documents into a collection.

Returns: A document containing:

- An acknowledged boolean, set to true if the operation ran with write concern or false if write concern was disabled
- An insertedIds array, containing _id values for each successfully inserted document
Compatibility

This method is available in deployments hosted in the following environments:
- MongoDB Atlas: The fully managed service for MongoDB deployments in the cloud
Note

This command is supported in all MongoDB Atlas clusters. For information on Atlas support for all commands, see Unsupported Commands.
- MongoDB Enterprise: The subscription-based, self-managed version of MongoDB
- MongoDB Community: The source-available, free-to-use, and self-managed version of MongoDB
Syntax

The insertMany() method has the following syntax:
db.collection.insertMany(
[ <document 1> , <document 2>, ... ],
{
writeConcern: <document>,
ordered: <boolean>
}
)
| Parameter | Type | Description |
| --- | --- | --- |
| document | document | An array of documents to insert into the collection. |
| writeConcern | document | Optional. A document expressing the write concern. Omit to use the default write concern. Do not explicitly set the write concern if the operation runs in a transaction. |
| ordered | boolean | Optional. Specifies whether the mongod instance should perform an ordered or unordered insert. Defaults to true. |
Behaviors

Given an array of documents, insertMany() inserts each document in the array into the collection. There is no limit to the number of documents you can specify in the array.
Execution of Operations

By default, documents are inserted in the order they are provided.

If ordered is set to true and an insert fails, the server does not continue inserting records. If ordered is set to false and an insert fails, the server continues inserting records. Documents may be reordered by mongod to increase performance. Applications should not depend on ordering of inserts if using an unordered insertMany().

Executing an ordered list of operations on a sharded collection will generally be slower than executing an unordered list since with an ordered list, each operation must wait for the previous operation to finish.
Batching

The driver batches documents specified in the insertMany() array according to the maxWriteBatchSize, which is 100,000 and cannot be modified. For example, if the insertMany() operation contains 250,000 documents, the driver creates three batches: two with 100,000 documents and one with 50,000 documents.
Note

The driver only performs batching when using the high-level API. If you use db.runCommand() directly (for example, when writing a driver), MongoDB throws an error when attempting to execute a write batch that exceeds the maxWriteBatchSize limit.

If the error report for a single batch grows too large, MongoDB truncates all remaining error messages. If there are at least two error messages with size greater than 1MB, those messages are truncated.

The sizes and grouping mechanics are internal performance details and are subject to change in future versions.
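The batch split described above is simple arithmetic, which can be sketched in plain JavaScript (an illustration, not driver code):

```javascript
// Split a document count into batches of at most maxWriteBatchSize,
// as the driver does for insertMany().
function batchSizes(totalDocs, maxWriteBatchSize = 100000) {
  const sizes = [];
  let remaining = totalDocs;
  while (remaining > 0) {
    const size = Math.min(remaining, maxWriteBatchSize);
    sizes.push(size);
    remaining -= size;
  }
  return sizes;
}

console.log(batchSizes(250000)); // [ 100000, 100000, 50000 ]
```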
Collection Creation

If the collection does not exist, then insertMany() creates the collection on successful write.
_id Field

If the document does not specify an _id field, then mongod adds the _id field and assigns a unique ObjectId() for the document. Most drivers create an ObjectId and insert the _id field, but the mongod will create and populate the _id if the driver or application does not.

If the document contains an _id field, the _id value must be unique within the collection to avoid a duplicate key error.
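The "assign an _id only when one is missing" fallback can be sketched in plain JavaScript (a simulation; makeObjectIdLike is a hypothetical stand-in for the driver's real ObjectId generator):

```javascript
// Generate a 24-hex-character id: 8 chars of timestamp plus 16 random chars.
// A simplified stand-in for ObjectId(), for illustration only.
function makeObjectIdLike() {
  const ts = Math.floor(Date.now() / 1000).toString(16).padStart(8, "0");
  let rand = "";
  for (let i = 0; i < 16; i++) rand += Math.floor(Math.random() * 16).toString(16);
  return ts + rand;
}

// Assign an _id to any document that lacks one, leaving existing _ids untouched.
function withIds(docs) {
  return docs.map(doc => ("_id" in doc ? doc : { _id: makeObjectIdLike(), ...doc }));
}

const prepared = withIds([{ item: "card", qty: 15 }, { _id: 10, item: "box", qty: 20 }]);
console.log(prepared[0]._id.length); // 24
console.log(prepared[1]._id);        // 10
```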
Explainability

insertMany() is not compatible with db.collection.explain().
Error Handling

Inserts throw a BulkWriteError exception.

Excluding write concern errors, ordered operations stop after an error, while unordered operations continue to process any remaining write operations in the queue.

Write concern errors are displayed in the writeConcernErrors field, while all other errors are displayed in the writeErrors field. If an error is encountered, the number of successful write operations is displayed instead of a list of inserted _ids. Ordered operations display the single error encountered, while unordered operations display each error in an array.
Schema Validation Errors

If your collection uses schema validation and has validationAction set to error, inserting an invalid document with db.collection.insertMany() throws a writeError. Documents that precede the invalid document in the documents array are written to the collection. The value of the ordered field determines if the remaining valid documents are inserted.
Transactions

db.collection.insertMany() can be used inside distributed transactions.

Important

In most cases, a distributed transaction incurs a greater performance cost over single document writes, and the availability of distributed transactions should not be a replacement for effective schema design. For many scenarios, the denormalized data model (embedded documents and arrays) will continue to be optimal for your data and use cases. That is, for many scenarios, modeling your data appropriately will minimize the need for distributed transactions.

For additional transactions usage considerations (such as runtime limit and oplog size limit), see also Production Considerations.
Collection Creation in Transactions

You can create collections and indexes inside a distributed transaction if the transaction is not a cross-shard write transaction.

If you specify an insert on a non-existing collection in a transaction, MongoDB creates the collection implicitly.
Write Concerns and Transactions

Do not explicitly set the write concern for the operation if run in a transaction. To use write concern with transactions, see Transactions and Write Concern.
Performance Consideration for Random Data

If an operation inserts a large amount of random data (for example, hashed indexes) on an indexed field, insert performance may decrease. Bulk inserts of random data create random index entries, which increase the size of the index. If the index reaches the size that requires each random insert to access a different index entry, the inserts result in a high rate of WiredTiger cache eviction and replacement. When this happens, the index is no longer fully in cache and is updated on disk, which decreases performance.

To improve the performance of bulk inserts of random data on indexed fields, you can either:

- Drop the index, then recreate it after you insert the random data.
- Insert the data into an empty unindexed collection.

Creating the index after the bulk insert sorts the data in memory and performs an ordered insert on all indexes.
Oplog Entries

MongoDB consolidates operations that insert multiple documents, such as db.collection.insertMany() and multiple insertOne commands within a db.collection.bulkWrite() operation, into a single entry in the oplog. If the operation fails, the operation does not add an entry on the oplog.
Examples

The following examples insert documents into the products collection.

Insert Several Documents without Specifying an _id Field

The following example uses db.collection.insertMany() to insert documents that do not contain the _id field:
try {
db.products.insertMany( [
{ item: "card", qty: 15 },
{ item: "envelope", qty: 20 },
{ item: "stamps" , qty: 30 }
] );
} catch (e) {
print (e);
}
The operation returns the following document:
{
"acknowledged" : true,
"insertedIds" : [
ObjectId("562a94d381cb9f1cd6eb0e1a"),
ObjectId("562a94d381cb9f1cd6eb0e1b"),
ObjectId("562a94d381cb9f1cd6eb0e1c")
]
}
Because the documents did not include _id, mongod creates and adds the _id field for each document and assigns it a unique ObjectId() value.

The ObjectId values are specific to the machine and time when the operation is run. As such, your values may differ from those in the example.
Insert Several Documents Specifying an _id Field

The following example uses insertMany() to insert documents that include the _id field. The value of _id must be unique within the collection to avoid a duplicate key error.
try {
db.products.insertMany( [
{ _id: 10, item: "large box", qty: 20 },
{ _id: 11, item: "small box", qty: 55 },
{ _id: 12, item: "medium box", qty: 30 }
] );
} catch (e) {
print (e);
}
The operation returns the following document:
{ "acknowledged" : true, "insertedIds" : [ 10, 11, 12 ] }
Inserting a duplicate value for any key that is part of a unique index, such as _id, throws an exception. The following attempts to insert a document with an _id value that already exists:
try {
db.products.insertMany( [
{ _id: 13, item: "envelopes", qty: 60 },
{ _id: 13, item: "stamps", qty: 110 },
{ _id: 14, item: "packing tape", qty: 38 }
] );
} catch (e) {
print (e);
}
Since _id: 13 already exists, the following exception is thrown:
BulkWriteError({
"writeErrors" : [
{
"index" : 0,
"code" : 11000,
"errmsg" : "E11000 duplicate key error collection: inventory.products index: _id_ dup key: { : 13.0 }",
"op" : {
"_id" : 13,
"item" : "stamps",
"qty" : 110
}
}
],
"writeConcernErrors" : [ ],
"nInserted" : 1,
"nUpserted" : 0,
"nMatched" : 0,
"nModified" : 0,
"nRemoved" : 0,
"upserted" : [ ]
})
Note that one document was inserted: the first document with _id: 13 inserts successfully, but the second insert fails. This also stops the remaining documents in the queue from being inserted.

With ordered set to false, the insert operation would continue with any remaining documents.
Unordered Inserts

The following attempts to insert multiple documents with an _id field and ordered: false. The array of documents contains two documents with duplicate _id fields.
try {
db.products.insertMany( [
{ _id: 10, item: "large box", qty: 20 },
{ _id: 11, item: "small box", qty: 55 },
{ _id: 11, item: "medium box", qty: 30 },
{ _id: 12, item: "envelope", qty: 100},
{ _id: 13, item: "stamps", qty: 125 },
{ _id: 13, item: "tape", qty: 20},
{ _id: 14, item: "bubble wrap", qty: 30}
], { ordered: false } );
} catch (e) {
print (e);
}
The operation throws the following exception:
BulkWriteError({
"writeErrors" : [
{
"index" : 2,
"code" : 11000,
"errmsg" : "E11000 duplicate key error collection: inventory.products index: _id_ dup key: { : 11.0 }",
"op" : {
"_id" : 11,
"item" : "medium box",
"qty" : 30
}
},
{
"index" : 5,
"code" : 11000,
"errmsg" : "E11000 duplicate key error collection: inventory.products index: _id_ dup key: { : 13.0 }",
"op" : {
"_id" : 13,
"item" : "tape",
"qty" : 20
}
}
],
"writeConcernErrors" : [ ],
"nInserted" : 5,
"nUpserted" : 0,
"nMatched" : 0,
"nModified" : 0,
"nRemoved" : 0,
"upserted" : [ ]
})
While the documents with item: "medium box" and item: "tape" failed to insert due to duplicate _id values, nInserted shows that the remaining 5 documents were inserted.
Using Write Concern

Given a three member replica set, the following operation specifies a w of majority and wtimeout of 100:
try {
db.products.insertMany(
[
{ _id: 10, item: "large box", qty: 20 },
{ _id: 11, item: "small box", qty: 55 },
{ _id: 12, item: "medium box", qty: 30 }
],
{ writeConcern: { w: "majority", wtimeout: 100 } }
);
} catch (e) {
print (e);
}
If the primary and at least one secondary acknowledge each write operation within 100 milliseconds, it returns:
{
"acknowledged" : true,
"insertedIds" : [
ObjectId("562a94d381cb9f1cd6eb0e1a"),
ObjectId("562a94d381cb9f1cd6eb0e1b"),
ObjectId("562a94d381cb9f1cd6eb0e1c")
]
}
If the total time required for all required nodes in the replica set to acknowledge the write operation is greater than wtimeout, the following writeConcernError is displayed when the wtimeout period has passed.

This operation returns:
WriteConcernError({
"code" : 64,
"errmsg" : "waiting for replication timed out",
"errInfo" : {
"wtimeout" : true,
"writeConcern" : {
"w" : "majority",
"wtimeout" : 100,
"provenance" : "getLastErrorDefaults"
}
}
})