We are planning to create 8 HASH partitions. Looking only at PERFORMANCE, would there be any improvement if we went for 16 or 32 partitions (maintenance and availability are not a problem in our case)? There are only 2 indexes on our 350M row table – one is LOCAL, the other a non-partitioned index.

Jon…any advice that you would give..?

– Karteek

By: Jonathan Lewis — Fri, 05 Jan 2007 09:29:36 +0000
https://jonathanlewis.wordpress.com/2007/01/02/partition-count/#comment-1122

Alexander, Oh yes I would!
By: Alexander Fatkulin — Fri, 05 Jan 2007 09:25:05 +0000
https://jonathanlewis.wordpress.com/2007/01/02/partition-count/#comment-1121

Some time ago we had a table with more than 90,000 segments (range + list composite partitioning). You wouldn’t believe how much time it took to parse a query for the first time…
By: Jonathan Lewis — Wed, 03 Jan 2007 08:29:33 +0000
https://jonathanlewis.wordpress.com/2007/01/02/partition-count/#comment-1060

Antonio, I think the closest thing you may find would be the scattering of “best practices” papers that Oracle’s CoE has published over the years on Metalink.

Point 2 appeared in 9i, with an optional change in syntax appearing in 10g, but point 3 only appeared in 10g. There is a (probably small) price to pay for it.

The data dictionary views that report partitions still display sequential partition numbers, which they derive by applying the analytic function row_number() to the underlying data. So if you regularly query views like dba_tab_partitions, the resource usage for some of your queries may jump significantly – especially if you have a large number of partitions in your system.
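As a rough illustration of the mechanism described above, the numbering could be generated along the following lines. This is a sketch, not the actual definition of dba_tab_partitions; the dictionary base tables and columns used here (sys.obj$, sys.tabpart$ and their columns) are assumptions for illustration only:

```sql
--
-- Hypothetical sketch: deriving a sequential partition_position
-- per table with the row_number() analytic function, in the style
-- the text describes. The real view definition differs.
--
select
        o.name          table_name,
        row_number() over (
                partition by o.obj#     -- restart numbering per table
                order by tp.part#       -- physical partition ordering
        )               partition_position
from
        sys.obj$        o,
        sys.tabpart$    tp
where
        tp.bo# = o.obj#
;
```

The point to note is that the analytic sort has to visit every partition row for a table before the first partition number can be reported, which is why the cost of such queries grows with the total partition count.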