Definition
$changeStreamSplitLargeEvent
New in MongoDB 7.0 (and 6.0.9).
If a change stream has large events that exceed 16 MB, a BSONObjectTooLarge exception is returned. Starting in MongoDB 7.0 (and 6.0.9), you can use a $changeStreamSplitLargeEvent stage to split the events into smaller fragments.
You should only use $changeStreamSplitLargeEvent when strictly necessary. For example, if your application requires full document pre- or post-images and generates large events that exceed 16 MB, use $changeStreamSplitLargeEvent.
Before you decide to use $changeStreamSplitLargeEvent, you should first try to reduce the change event size. For example:
- Don't request document pre- or post-images unless your application requires them. Pre- and post-images generate fullDocument and fullDocumentBeforeChange fields in more cases, and these are typically the largest objects in a change event.
- Use a $project stage to include only the fields necessary for your application. This reduces the change event size, avoids the additional time to split large events into fragments, and allows more change events to be returned in each batch.
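For instance, a minimal sketch of the second approach. The projected field names here are illustrative placeholders, not fields your collection necessarily has:

```javascript
// Sketch: project only the fields the application needs, then split any
// events that still exceed the limit. The split stage must come last.
const pipeline = [
  { $project: { operationType: 1, documentKey: 1, "fullDocument.status": 1 } },
  { $changeStreamSplitLargeEvent: {} } // must remain the final stage
];
// Pass the pipeline to db.collection.watch( pipeline ).
```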
You can only have one $changeStreamSplitLargeEvent stage in your pipeline, and it must be the last stage. You can only use $changeStreamSplitLargeEvent in a $changeStream pipeline.
$changeStreamSplitLargeEvent syntax:
{
$changeStreamSplitLargeEvent: {}
}

Behavior
$changeStreamSplitLargeEvent splits events that exceed 16 MB into fragments and returns the fragments sequentially using the change stream cursor.
The fragments are split so that the maximum number of fields is returned in the first fragment. This ensures the event context is returned as quickly as possible.
When the change event is split, only the size of top-level fields is used. $changeStreamSplitLargeEvent does not recursively process or split subdocuments. For example, if you use a $project stage to create a change event with a single field that is 20 MB in size, the event is not split and the stage returns an error.
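A loose toy model of this behavior in plain JavaScript may help. It is not driver code: the 16 MB BSON limit is replaced by a tiny byte budget, JSON length stands in for BSON size, and the server's actual packing strategy (maximizing the fields in the first fragment) is more involved than this greedy loop:

```javascript
// Toy model: pack top-level fields greedily into fragments without
// recursing into subdocuments. maxBytes stands in for the 16 MB limit.
function splitTopLevel(event, maxBytes) {
  const fragments = [];
  let current = {};
  let used = 0;
  for (const [key, value] of Object.entries(event)) {
    const size = JSON.stringify(value).length; // stand-in for BSON size
    if (size > maxBytes) {
      // Mirrors the documented error: a single top-level field larger
      // than the limit cannot be split further.
      throw new Error(`field "${key}" exceeds the limit and cannot be split`);
    }
    if (used + size > maxBytes && used > 0) {
      fragments.push(current);
      current = {};
      used = 0;
    }
    current[key] = value;
    used += size;
  }
  fragments.push(current);
  return fragments;
}
```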
Each fragment has a resume token. A stream that is resumed using a fragment's token will either:
- Begin a new stream from the subsequent fragment.
- Start at the next event if resuming from the final fragment in the sequence.
Each fragment for an event includes a splitEvent document:
splitEvent: {
fragment: <int>,
of: <int>
}

The following table describes the fields.
Field | Description
fragment | The fragment number for this portion of the event, starting at 1.
of | The total number of fragments the event is split into.
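Because every fragment carries this splitEvent bookkeeping, a consumer can stitch the original event back together once all fragments have arrived. A minimal sketch in plain JavaScript, as illustrative application code rather than any driver API:

```javascript
// Merge a complete set of fragments back into one event using the
// splitEvent: { fragment, of } bookkeeping on each fragment.
function reassemble(fragments) {
  if (fragments.length === 0) throw new Error("no fragments");
  const total = fragments[0].splitEvent.of;
  if (fragments.length !== total) {
    throw new Error(`expected ${total} fragments, got ${fragments.length}`);
  }
  const event = {};
  fragments
    .slice() // don't mutate the caller's array
    .sort((a, b) => a.splitEvent.fragment - b.splitEvent.fragment)
    .forEach(({ splitEvent, ...fields }) => Object.assign(event, fields));
  return event;
}
```

In real code you would likely also drop each fragment's _id (its resume token) before merging, and keep only the last fragment's token for resuming.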
Examples
MongoDB Shell
The example scenario in this section shows the use of $changeStreamSplitLargeEvent with a new collection named myCollection.
Create myCollection and insert one document with just under 16 MB of data:
db.myCollection.insertOne(
{ _id: 0, largeField: "a".repeat( 16 * 1024 * 1024 - 1024 ) }
)

largeField contains the repeated letter a.
Enable changeStreamPreAndPostImages for myCollection, which allows a change stream to retrieve a document as it was before an update (pre-image) and after an update (post-image):
db.runCommand( {
collMod: "myCollection",
changeStreamPreAndPostImages: { enabled: true }
} )

Create a change stream cursor to monitor changes to myCollection using db.collection.watch():
myChangeStreamCursor = db.myCollection.watch(
[ { $changeStreamSplitLargeEvent: {} } ],
{ fullDocument: "required", fullDocumentBeforeChange: "required" }
)

For the change stream event:
- fullDocument: "required" includes the document post-image.
- fullDocumentBeforeChange: "required" includes the document pre-image.
For details, see $changeStream.
Update the document in myCollection, which also produces a change stream event with the document pre- and post-images:
db.myCollection.updateOne(
{ _id: 0 },
{ $set: { largeField: "b".repeat( 16 * 1024 * 1024 - 1024 ) } }
)

largeField now contains the repeated letter b.
Retrieve the fragments from myChangeStreamCursor using the next() method and store the fragments in objects named firstFragment, secondFragment, and thirdFragment:
const firstFragment = myChangeStreamCursor.next()
const secondFragment = myChangeStreamCursor.next()
const thirdFragment = myChangeStreamCursor.next()

Show firstFragment.splitEvent:
firstFragment.splitEvent

Output with the fragment details:
splitEvent: { fragment: 1, of: 3 }

Similarly, secondFragment.splitEvent and thirdFragment.splitEvent return:
splitEvent: { fragment: 2, of: 3 }
splitEvent: { fragment: 3, of: 3 }

To examine the object keys for firstFragment:
Object.keys( firstFragment )

Output:
[
'_id',
'splitEvent',
'wallTime',
'clusterTime',
'operationType',
'documentKey',
'ns',
'fullDocument'
]

To examine the size in bytes for firstFragment.fullDocument:
bsonsize( firstFragment.fullDocument )

Output:
16776223

secondFragment contains the fullDocumentBeforeChange pre-image, which is approximately 16 MB in size. The following example shows the object keys for secondFragment:
Object.keys( secondFragment )

Output:
[ '_id', 'splitEvent', 'fullDocumentBeforeChange' ]

thirdFragment contains the updateDescription field, which is approximately 16 MB in size. The following example shows the object keys for thirdFragment:
Object.keys( thirdFragment )

Output:
[ '_id', 'splitEvent', 'updateDescription' ]

C#
The C# examples on this page use the sample_mflix database from the Atlas sample datasets. To learn how to create a free MongoDB Atlas cluster and load the sample datasets, see Get Started in the MongoDB .NET/C# Driver documentation.
The following Movie class models the documents in the sample_mflix.movies collection:
public class Movie
{
public ObjectId Id { get; set; }
public int Runtime { get; set; }
public string Title { get; set; }
public string Rated { get; set; }
public List<string> Genres { get; set; }
public string Plot { get; set; }
public ImdbData Imdb { get; set; }
public int Year { get; set; }
public int Index { get; set; }
public string[] Comments { get; set; }
public DateTime LastUpdated { get; set; }
}
Note
ConventionPack for Pascal Case
The C# classes on this page use Pascal case for their property names, but the field names in the MongoDB collection use camel case. To account for this difference, you can use the following code to register a ConventionPack when your application starts:
var camelCaseConvention = new ConventionPack { new CamelCaseElementNameConvention() };
ConventionRegistry.Register("CamelCase", camelCaseConvention, type => true);

To use the MongoDB .NET/C# driver to add a $changeStreamSplitLargeEvent stage to an aggregation pipeline, call the ChangeStreamSplitLargeEvent() method on a PipelineDefinition object.
The following example creates a pipeline stage that splits events exceeding 16 MB into fragments and returns them sequentially in a change stream cursor:
var pipeline = new EmptyPipelineDefinition<ChangeStreamDocument<Movie>>()
    .ChangeStreamSplitLargeEvent();

Node.js
You can use both the watch() method and the aggregate() method to execute a $changeStreamSplitLargeEvent operation. $changeStreamSplitLargeEvent returns a ChangeStreamCursor when you pass the aggregation pipeline to the watch() method on a MongoDB Collection object. $changeStreamSplitLargeEvent returns an AggregationCursor when you pass the aggregation pipeline to the aggregate() method.
Important
$changeStreamSplitLargeEvent Resumability
If you pass a change stream to the aggregate() method, the change stream cannot resume. A change stream only resumes if you pass it to the watch() method. To learn more about resumability, see Resume a Change Stream.
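As a brief sketch of resuming with watch(), assuming a connected Collection object named collection and treating a fragment's _id field as its resume token (the helper name is illustrative, not a driver API):

```javascript
// Build watch() options that resume a split-event stream from a fragment.
// Resuming from a mid-sequence fragment continues with the next fragment;
// resuming from the final fragment continues with the next event.
const splitPipeline = [{ $changeStreamSplitLargeEvent: {} }];

function resumeOptionsFrom(fragment) {
  // A fragment's _id is its resume token.
  return { resumeAfter: fragment._id };
}

// Usage (requires a live collection):
// const stream = collection.watch(splitPipeline, resumeOptionsFrom(lastFragment));
```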
The following example splits events exceeding 16 MB into fragments and returns them sequentially in a ChangeStreamCursor:
const pipeline = [{ $changeStreamSplitLargeEvent: {} }];
const changeStream = collection.watch(pipeline);
return changeStream;

Learn More
For more information on change stream notifications, see Change Events.
To learn more about related pipeline stages, see the $changeStream guide.