Tagged Questions

I'm running a SQLite database on nilfs (which is a log-structured filesystem). Every now and then I delete old records, so that the size of the database never surpasses a certain amount.
But since the ...

We run a "manual" VACUUM ANALYZE VERBOSE on some of our larger tables after we make major DELETE/INSERT changes to them. This seems to work without issue, although sometimes a table's VACUUM job will run ...

This applies especially to analytic databases, which try to optimize for queries that scan large portions of tables instead of randomly accessing specific rows. For example, Redshift has a concept of a sort key that, ...
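As an illustration, a sort key in Redshift is declared when the table is created; the table and column names below are hypothetical:

```sql
-- Hypothetical table: SORTKEY tells Redshift to store rows ordered by
-- event_time, so range scans over recent data touch fewer disk blocks.
CREATE TABLE events (
    event_time  TIMESTAMP,
    user_id     INTEGER,
    payload     VARCHAR(256)
)
SORTKEY (event_time);
```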

We're using Postgres 9.2 on Windows to store low-frequency timeseries data: we're inserting around 2000 rows per second, 24 hours a day, 7 days a week, with no downtime. There is a DELETE that ...

I have an issue with my Postgres database: I'm running massive update queries (1000 per second) against a single table (with 3000 entries), and I can see that the size of that table is growing and growing ...

I currently have a table that is updated daily (easily 20%+ rows are updated, all floats/integers) and the vacuum is kicking off and running for more than 18 hours. There are 19 million rows total.
...

Can someone explain the difference between these types of VACUUM in PostgreSQL?
I read the docs, but they just say that FULL locks the tables and FREEZE "freezes" the tuples. That sounds the same to me. Am ...
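For reference, a sketch of the three variants side by side (the table name is hypothetical):

```sql
VACUUM mytable;         -- marks dead tuples as reusable space; does not block
                        -- reads/writes, and usually does not shrink the file on disk
VACUUM FULL mytable;    -- rewrites the entire table into a new file, returning space
                        -- to the OS, but holds an exclusive lock for the duration
VACUUM FREEZE mytable;  -- aggressively marks old tuples as frozen to guard against
                        -- transaction-ID wraparound; locking behaves like plain VACUUM
```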

We want to run a standard vacuum process on our production database, which is over 100 GB and has millions of dead tuples.
Can anyone suggest what system parameters we need to keep in mind for setting ...
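A minimal sketch of the kind of knobs usually discussed for a large manual vacuum; the values and table name are placeholders, not recommendations:

```sql
-- Session-level: more memory for the vacuum's index-cleanup phase.
SET maintenance_work_mem = '1GB';

-- Cost-based throttling: raising the cost limit / removing the delay lets the
-- vacuum finish sooner, at the price of more I/O pressure on concurrent queries.
SET vacuum_cost_delay = 0;
SET vacuum_cost_limit = 2000;

VACUUM (VERBOSE, ANALYZE) big_table;
```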

I'm using PostgreSQL (8.4) to store data produced by an application making frequent inserts (in the table structure described below).
The database keeps growing with time and, since the newer data is ...

I manage a big (some hundreds of gigs) database containing tables with various roles, some of them holding millions of records. Some tables only receive a large number of inserts and deletes, some others ...

I have a db which has 223 tables, and I have to delete some of the records from 10 of them; each has approx. 1.5 million records. Those tables are storing the temperatures every 7 seconds. We have decided ...

I have a heavily updated/accessed table where I store serialized Java objects. They stay in the table for 2-3 hours (and are also updated during that period) and are then removed. The size of the table is around ...

I use software which makes a big PostgreSQL database (there is a table with a million rows in it), and the developers say I should VACUUM and ANALYZE periodically. But the PostgreSQL database default ...

I have a very big table in my PostgreSQL (over 300 million rows) that has never been vacuumed. Yesterday I tried a vacuum and analyze (not a full one) and it took about 7 hours to complete. Will it ...

I'm using PostgreSQL 9.1 on Ubuntu. Are scheduled VACUUM ANALYZE still recommended, or is autovacuum enough to take care of all needs?
If the answer is "it depends", then:
I have a largish database ...
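If autovacuum turns out to lag on a few hot tables, its thresholds can be tightened per table without touching the global defaults; the table name and values here are hypothetical:

```sql
-- Vacuum this table once ~2% of its rows are dead (instead of the 20% default),
-- and re-analyze after ~1% of rows change.
ALTER TABLE hot_table SET (
    autovacuum_vacuum_scale_factor  = 0.02,
    autovacuum_analyze_scale_factor = 0.01
);
```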

I have a Python script which fetches records from a SQL Server database and, after some filtering, inserts them into a PostgreSQL DB. This job is run by cron every two minutes. ...