I have a PostgreSQL 10 database with about 300k tables in 23k schemas. I am trying to upgrade to PostgreSQL 13 using pg_upgradecluster. This is failing while attempting to dump all the schemas:
pg_dump: error: query failed: ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
pg_dump: error: query was: LOCK TABLE "a45119740"."activity_hrc" IN ACCESS SHARE MODE
Is setting max_locks_per_transaction to 300k something that can be done? I haven't been able to find anything explaining how much shared memory this might need. The machine has 64 GB of RAM.
(I understand that I need to change my database design. I have been backing up one schema at a time until now, so I wasn't aware of this problem.)
Your lock table needs to be big enough to lock all your tables and metadata tables.
Since the lock table has room enough for
max_locks_per_transaction * (max_connections + max_prepared_transactions)
locks, all you need to do is set max_locks_per_transaction high enough that the lock table can hold the locks that pg_dump and the rest of your workload need.
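As a rough sketch of that arithmetic (assuming the default max_connections = 100 and max_prepared_transactions = 0, and that pg_dump must lock all ~300,000 tables in a single transaction; substitute your actual settings):

```python
import math

# Assumed values -- check yours with SHOW max_connections; etc.
tables_to_lock = 300_000        # tables pg_dump locks in one transaction
max_connections = 100           # PostgreSQL default
max_prepared_transactions = 0   # PostgreSQL default

# The lock table holds
#   max_locks_per_transaction * (max_connections + max_prepared_transactions)
# entries, so solve for max_locks_per_transaction:
slots = max_connections + max_prepared_transactions
required = math.ceil(tables_to_lock / slots)
print(required)  # 3000
```

So with these assumed settings, max_locks_per_transaction = 3000 (plus some headroom for the rest of the workload) would make the lock table large enough; you do not need to set it to 300k itself.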
As for how much space each entry in the lock table needs: that varies with your architecture, but in general it is sizeof(LOCK) + sizeof(LOCKTAG), which comes to 168 bytes on my Linux system.
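Using that figure, a back-of-the-envelope estimate of the shared memory a 300k-entry lock table would consume (a sketch; the 168 bytes per entry is architecture-dependent, and actual shared-memory usage includes other per-lock bookkeeping as well):

```python
# Assumed per-entry size: sizeof(LOCK) + sizeof(LOCKTAG) on a 64-bit Linux system
bytes_per_lock = 168
lock_table_entries = 300_000

total_bytes = bytes_per_lock * lock_table_entries
print(f"{total_bytes / (1024 * 1024):.1f} MiB")  # about 48 MiB
```

Roughly 48 MiB for 300k entries, which is negligible on a machine with 64 GB of RAM.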