 

Firebase Realtime Database limit for delete operations

I'm a Firebase user who recently started diving into RTDB, and I just found a limits doc explaining the write limit for a single database instance, quoted below:

The limit on write operations per second on a single database. While not a hard limit, if you sustain more than 1,000 writes per second, your write activity may be rate-limited.

In Firestore's security rules, for example, a delete operation falls into the category of write operations, and I'd guess the same concept applies to other Firebase services. So I want to know exactly whether delete operations are subject to the write limit for an RTDB instance.

FYI, I'm planning to use the latest Node.js Admin SDK with Cloud Functions to run a huge number of deletes, using this link's method for a huge number of different paths.

So, if a delete op counts as an RTDB write operation, it seems like a critical mistake to deploy this function even if only a few users are likely to trigger it concurrently. Even a few concurrent invocations would soon max out the per-second write limit, considering how quickly the Firebase Admin SDK can iterate through those ops (for example, five invocations each issuing 1,000 removes would fire roughly 5,000 writes at once, well past the ~1,000/sec soft limit).

Since I have to specify the ID (key) of the path for each removal (so that no nested data gets deleted unintentionally), simply deleting the parent path is not applicable here, and would actually be really dangerous.

If delete ops are not subject to the write limit, then I'd also like to know whether there is truly no limit at all on delete operations for RTDB! I hope this question reaches the Firebase gurus in the community. Comments are welcome and appreciated! Thank you in advance [:

asked Nov 24 '25 by tsitixe

1 Answer

A delete operation does count as a write operation. If you run 20K delete operations, i.e. 20K separate .remove() calls fired simultaneously with Promise.all(), each one is counted as a distinct operation and you'll be rate-limited. The delete requests over the limit won't fail; they will just take longer to succeed.

Instead, if you are using a Cloud Function, you can create a single object containing all the paths to be deleted and pass it to update() to remove all those nodes in a single write operation. Say you have a root node users, each user node has a points child, and you want to remove points from every user.

// Setting a path to null inside update() deletes the node at that path.
const remObject = {
  "user_id_1/points": null,
  "user_id_2/points": null
}

// One multi-path update counts as a single write operation.
await admin.database().ref("users").update(remObject)

Although you would need to know the IDs of all the users, this removes the points node from every user in a single operation, so you won't be rate-limited. Another benefit is that the multi-path update is atomic: all those nodes are guaranteed to be deleted together, unlike individual requests where some may fail while others succeed.
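For illustration, here's a minimal sketch of building that deletion map programmatically; the userIds array is a hypothetical stand-in for however you obtain the IDs (it is not part of the original answer):

// Hypothetical list of user IDs; in practice you'd fetch or already know these.
const userIds = ["user_id_1", "user_id_2", "user_id_3"]

// Build the multi-path delete map: each path set to null gets removed.
const remObject = {}
for (const id of userIds) {
  remObject[`${id}/points`] = null
}

// Still a single write operation, regardless of how many IDs are listed.
await admin.database().ref("users").update(remObject)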


If you run a separate `remove()` operation for each user as shown below, it will count as N writes, where N is the number of operations.

const userIDs = [] // list of user IDs to clean up

// Each .remove() call is its own write operation.
const removeRequests = userIDs.map(u => admin.database().ref(`users/${u}/points`).remove())

await Promise.all(removeRequests)
// userIDs.length writes, each counting toward the rate limit

I ran some test functions with the above code and, no surprise, both adding and removing 20K nodes via distinct operations with Promise.all() took over 40 seconds, while a single update operation with one object took just 3.
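If you want to reproduce a rough comparison yourself, a minimal timing sketch might look like this (assuming remObject and userIDs as defined above; this is not the author's actual test harness):

// Time the single multi-path update.
console.time("single update")
await admin.database().ref("users").update(remObject)
console.timeEnd("single update")

// Note: re-create the test nodes between runs so both paths do real work.

// Time the individual removes; promises start as soon as map() creates them.
console.time("individual removes")
await Promise.all(userIDs.map(u => admin.database().ref(`users/${u}/points`).remove()))
console.timeEnd("individual removes")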


Do note that the single-update method may be limited by the "Size of a single write request to the database" limit, which is 16 MB for the SDKs and 256 MB for the REST API. In such cases you may have to break the object into smaller parts and issue multiple update() calls.
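A minimal sketch of that chunking, assuming a remObject map like the one above (the chunk size is a made-up tuning knob, not an official threshold):

// Split the delete map into chunks and apply one update() per chunk,
// keeping each request comfortably under the 16 MB SDK limit.
const CHUNK_SIZE = 10000 // assumption: tune for your payload size

const entries = Object.entries(remObject)
for (let i = 0; i < entries.length; i += CHUNK_SIZE) {
  const chunk = Object.fromEntries(entries.slice(i, i + CHUNK_SIZE))
  await admin.database().ref("users").update(chunk)
}

Keep in mind that each chunk counts as its own write operation and the overall delete is no longer atomic, so this is a trade-off you only make when the size limit forces it.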

answered Nov 26 '25 by Dharmaraj


