I have a DB with a heavy transactional load (around 2-3 million transaction inserts per day), and each insert creates several temp tables before the transaction commits.
In the process, the pg_attribute table has become bloated. It currently holds fewer than 10k rows, but its size is still 10GB.
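For reference, the row count versus on-disk size can be compared with something like this (a sketch, not the exact commands I ran):

-- compare live rows against total on-disk size of pg_attribute
SELECT count(*) AS row_count,
       pg_size_pretty(pg_total_relation_size('pg_catalog.pg_attribute')) AS total_size
FROM pg_catalog.pg_attribute;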
I ran VACUUM FULL on this table several times, but it didn't help. I even restarted the DB and ran VACUUM FULL again.
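The VACUUM FULL runs looked roughly like this (assumed invocation):

VACUUM (FULL, VERBOSE) pg_catalog.pg_attribute;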
What are my options?

Update
postgres=# VACUUM (VERBOSE) pg_catalog.pg_attribute;
INFO: vacuuming "pg_catalog.pg_attribute"
INFO: "pg_attribute": found 0 removable, 25 nonremovable row versions in 1 out of 49 pages
DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 650634809
There were 30 unused item pointers.
Skipped 0 pages due to buffer pins, 25 frozen pages.
0 pages are entirely empty.
CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.
VACUUM
postgres=#
-- list the tables closest to (or past) their autovacuum trigger point,
-- i.e. dead tuples relative to the autovacuum threshold
SELECT schemaname, relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_all_tables
ORDER BY n_dead_tup
         / (n_live_tup * current_setting('autovacuum_vacuum_scale_factor')::float8
            + current_setting('autovacuum_vacuum_threshold')::float8) DESC
LIMIT 10;

Answer

Something is holding those tuples back. Since restarting the database didn't fix the problem, it must be something durable; to my knowledge, that would be either a prepared transaction or a logical replication slot. So look in the pg_prepared_xacts and pg_replication_slots views.
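For example (a sketch; the exact columns vary slightly by PostgreSQL version):

-- any orphaned prepared transactions?
SELECT gid, prepared, owner, database
FROM pg_prepared_xacts;

-- any replication slots holding back the xmin horizon?
SELECT slot_name, slot_type, active, xmin, catalog_xmin
FROM pg_replication_slots;

An orphaned prepared transaction can be resolved with COMMIT PREPARED or ROLLBACK PREPARED and its gid, and a stale slot can be removed with SELECT pg_drop_replication_slot('slot_name'). After that, a plain VACUUM should be able to clean up pg_attribute.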