
The performance of postgresql and our web application is good on that machine, but we decided to build a dedicated database server for our production database that scales better and that we can also use for internal applications (CRM and so on).

Performance tests in Windows show that the new box outperforms our dev machine quite a bit in CPU, HD and memory performance.

I did some EXPLAIN ANALYZE tests on queries and the results were very good, 3 to 4 times faster than our dev db.
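For reference, this is the kind of check I ran (the query here is only a placeholder, not one of our real tables):

```sql
-- Placeholder example; substitute your own table and filter.
EXPLAIN ANALYZE
SELECT * FROM orders WHERE customer_id = 42;
-- The "Total runtime" at the bottom of the output is measured on the
-- server, so it does not include client-side fetch and rendering time.
```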

However one thing is really throwing me off.
When I open a table with 320,000 rows in the pgAdmin tool (v1.4.0), it takes about 6 seconds on the dev server to display the result (all rows). During those 6 seconds the CPU usage jumps to 90%-100%.

When I open the same table on the new, faster, better production box, it takes 28 seconds!?! During those 28 seconds the CPU usage jumps to 30% for about a second, then sits at 0% for the rest of the time the query is running.

What is going wrong here? It is my understanding that PostgreSQL supports multi-core/multi-CPU environments out of the box, but it appears that it isn't utilizing either of the two CPUs available. I doubt my server is so fast that it can perform this operation while sitting idle.

I played around with the shared_buffers setting and tried versions 8.1.3, 8.1.2 and 8.1.0, all with the same result.

Has anyone experienced this kind of behaviour before?
How representative is the query performance in pgadmin?

There has been some discussion of performance issues on the PostgreSQL General mailing list. You might look at those threads and perhaps get some advice from the developers there.
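One thing worth trying first: separate the server's execution time from the client's transfer and rendering time with psql's \timing. A sketch, assuming you can connect with psql and substituting your own table name for the placeholder mytable:

```sql
-- In a psql session:
\timing
SELECT count(*) FROM mytable;  -- server reads the whole table, but sends back only one row
SELECT * FROM mytable;         -- same read, but all 320,000 rows are shipped to the client
```

If the count(*) is fast on both boxes but the SELECT * is slow only on the new one, the time is going into transferring and displaying the rows (network settings, or pgAdmin building its grid), not into the query itself; that would also fit the near-0% CPU you're seeing, since the backend would mostly be waiting on the client.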