Tip
MongoDB also provides the Mongo.bulkWrite() method for performing bulk write operations.
Description

Bulk.insert(<document>)

Adds an insert operation to a bulk operations list.

Bulk.insert() accepts the following parameter:

| Parameter | Type | Description |
| --- | --- | --- |
| doc | document | Document to insert. The size of the document must be less than or equal to the maximum BSON document size. |
Compatibility

This command is available in deployments hosted in the following environments:

- MongoDB Atlas: The fully managed service for MongoDB deployments in the cloud
Note
This command is supported in all MongoDB Atlas clusters. For information on Atlas support for all commands, see Unsupported Commands.
Behavior

Insert Inaccuracies

Even if you encounter a server error during an insert, some documents may have been inserted.

After a successful insert, the system returns BulkWriteResult.insertedCount, the number of documents inserted into the collection. If the insert operation is interrupted by a replica set state change, the system may continue inserting documents. As a result, BulkWriteResult.insertedCount may report fewer documents than were actually inserted.
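As a sketch of the behavior above, you can compare the count the server reports against the number of operations you queued. This assumes a running deployment reachable from mongosh; the logs collection and its fields are illustrative, not part of the original example:

```javascript
// Queue a few inserts and compare the reported count with the
// number of operations queued. The "logs" collection is illustrative.
var bulk = db.logs.initializeUnorderedBulkOp();
bulk.insert( { event: "login", ts: new Date() } );
bulk.insert( { event: "logout", ts: new Date() } );

var result = bulk.execute();
// insertedCount is the server-reported total; after an interrupted
// run it may undercount the documents actually inserted.
print( "reported inserts: " + result.insertedCount );
```

Because of the inaccuracy described above, treat the reported count as a lower bound rather than an exact total when a replica set state change interrupts the operation.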
Performance Consideration for Random Data

If an operation inserts a large amount of random data (for example, hashed indexes) on an indexed field, insert performance can decrease. Bulk inserts of random data create random index entries, which increase the size of the index. If the index reaches a size that requires each random insert to access a different index entry, the inserts result in a high rate of WiredTiger cache eviction and replacement. When this happens, the index is no longer fully in cache and is updated on disk, which decreases performance.

To improve the performance of bulk inserts of random data on indexed fields, you can either:

- Drop the index, then recreate it after you insert the random data.
- Insert the data into an empty unindexed collection.

Creating the index after the bulk insert sorts the data in memory and performs an ordered insert on all indexes.
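The first suggestion can be sketched in mongosh as follows. This is a minimal illustration, not part of the original page: it assumes an items collection with a hashed index on a token field, and both the field name and the use of UUID() to generate random values are illustrative:

```javascript
// Drop the hashed index before bulk-loading random data,
// then recreate it so the index build happens in one ordered pass.
// Collection name, field name, and data generation are illustrative.
db.items.dropIndex( { token: "hashed" } );

var bulk = db.items.initializeUnorderedBulkOp();
for ( var i = 0; i < 1000; i++ ) {
    bulk.insert( { token: UUID().toString() } );  // random values
}
bulk.execute();

// Recreate the index after the load completes.
db.items.createIndex( { token: "hashed" } );
```

Rebuilding the index once after the load avoids the per-insert random index updates that drive WiredTiger cache eviction during the bulk write.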
Example

The following initializes a Bulk() operations builder for the items collection and adds a series of insert operations to add multiple documents:
var bulk = db.items.initializeUnorderedBulkOp();
bulk.insert( { item: "abc123", defaultQty: 100, status: "A", points: 100 } );
bulk.insert( { item: "ijk123", defaultQty: 200, status: "A", points: 200 } );
bulk.insert( { item: "mop123", defaultQty: 0, status: "P", points: 0 } );
bulk.execute();