I have some scripts which update MongoDB records that look like this:
{ "_id" : "c12345", "arr" : [
{
"i" : 270099850,
"a" : 772,
},
{
"i" : 286855630,
"a" : 622,
}
] }
The scripts append elements to the "arr" array of the object using $pushAll, which works fine and is very fast.
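For context, the append step looks roughly like the following sketch; the collection name (events) and the newElems variable are placeholders, not the actual script:

db.events.update(
    { _id: "c12345" },                  // match the record by its _id
    { $pushAll: { arr: newElems } },    // append every element of newElems to arr
    true                                // upsert: create the record if it does not exist
);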
My requirement: 1. Keep appending to these objects, but process each one once the size of arr exceeds 1000.
Current implementation: 1. Script A takes some data from somewhere, finds the object in another collection using the "_id" field, and appends that data to the "arr" array.
Current bottlenecks: 1. I want the updating script to run very fast. Upserts are fast; however, the find-and-modify operations are slower for each record.
Ideas in mind: 1. Instead of processing EXCEEDED items within the scripts, set a boolean flag on the object and process it later using a separate Data Cleaner script (but this also requires me to FIND the object before doing the UPSERT; see the sketch below).
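One variant I am considering is folding the size check into a second conditional update instead of a find, since a query on "arr.1000" matches only when the array has more than 1000 elements. This is only a sketch; the events collection and the flagged field are hypothetical names:

db.events.update(
    { _id: "c12345", "arr.1000": { $exists: true } },  // matches only if arr[1000] exists, i.e. length > 1000
    { $set: { flagged: true } }                        // mark the record for the Data Cleaner script
);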
Are there any other clean and efficient ideas you can suggest? Thanks.
In version 2.3.2 of MongoDB, a new feature was added: there is now a $slice modifier for $push that can be used to keep an array at a fixed size by trimming it.
E.g.:
t.update( {_id:7}, { $push: { x: { $each: [ {a:{b:3}} ], $slice:-10, $sort: {'a.b':1} } } } )
This example keeps the x array at a length of ten elements, retaining the last ten after sorting by a.b.
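Applied to the documents in the question, the same modifier would cap arr at its last 1000 elements in a single update. Note that $slice discards the older elements rather than flagging them for processing, so this only fits if the trimmed data is not needed afterwards. The collection name and newElems are placeholders:

db.events.update(
    { _id: "c12345" },
    { $push: { arr: { $each: newElems, $slice: -1000 } } }  // keep only the newest 1000 elements
);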