
I was calculating ratios and the buffer cache hit ratio was at 65%. We were running more queries on the system than normal. I increased the db blocks to 20000 from 10000 and increased the shared pool to 256MB from 128MB, then bounced the database and started our queries again. Now the buffer cache hit ratio has gone down as far as 55% and is hovering around 60%. Why am I not seeing better ratios than before, even though I have doubled both settings?

Are there any other parameters I have to work on when I change these two?

Buffer cache hit ratio - maybe because your users aren't finding the blocks they need in memory, except about 60% of the time? Increasing the cache size is only one way to improve the ratio.
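For reference, the ratio being discussed is usually computed from V$SYSSTAT. A typical query (using the standard statistic names; adjust for your version) looks something like this:

```sql
-- Buffer cache hit ratio = 1 - physical reads / logical reads
SELECT 1 - (phy.value / (db.value + con.value)) AS buffer_cache_hit_ratio
FROM   v$sysstat phy,
       v$sysstat db,
       v$sysstat con
WHERE  phy.name = 'physical reads'
AND    db.name  = 'db block gets'
AND    con.name = 'consistent gets';
```

Bear in mind these counters are cumulative since instance startup, so a freshly bounced instance will report a low ratio for a while.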

db_block_what? You can change db_block_size all you want after the database is created, but it won't do anything. That value is set in concrete once you've created the database. If you mean db_block_buffers, then the size of the cache is db_block_size times db_block_buffers.
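To sketch that arithmetic, you can pull both parameters from V$PARAMETER and multiply them. This assumes the older db_block_buffers parameter is in use (rather than a later cache-size parameter):

```sql
-- Approximate buffer cache size in bytes
-- = db_block_size * db_block_buffers
SELECT (SELECT value FROM v$parameter WHERE name = 'db_block_size') *
       (SELECT value FROM v$parameter WHERE name = 'db_block_buffers')
       AS cache_size_bytes
FROM   dual;
```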

Increases to the database buffer cache do not have a direct relationship with the hit ratio. Doubling the cache does not mean your ratio doubles, so something else is going on besides the size of the cache, given that your ratio is still roughly the same. You increase the cache size until there is no further improvement in the ratio, and then you start on other factors.

What are your users doing? Flushing buffers out of memory? You can use multiple buffer pools, cache tables in memory, and use indexes.
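As a rough sketch of two of those options (the table name here is made up for illustration, and the KEEP pool must already be configured on your instance):

```sql
-- Keep a small, hot lookup table in the KEEP buffer pool so large
-- scans on other tables don't age its blocks out of memory
ALTER TABLE lookup_codes STORAGE (BUFFER_POOL KEEP);

-- Alternatively, place the table's blocks at the most-recently-used
-- end of the LRU list when it is read via full table scan
ALTER TABLE lookup_codes CACHE;
```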

1) Increasing the size of the shared pool parameter will not increase the buffer cache hit ratio, but it may decrease shared pool performance depending on the style of application. I read an article by Thomas Kyte that claimed you shouldn't have a shared pool size greater than 75M if bind variables are not used by your application. I hate rules of thumb like this, but TK is the man!

2) Bouncing your instance clears the buffer cache. Depending on the usage of your instance it may take a long time to repopulate the cache, hence the low hit ratio.

3) A lot of the top guys frown on the use of hit ratios during tuning, suggesting it is quite a dated method. A better approach to consider is the YAPP method, which relies on monitoring wait states. These links will get you started:

The logic is that bad performance occurs when the kernel is waiting for resources. The best way to identify these resource bottlenecks is to look at wait states in the kernel. Eliminating or reducing these wait states is the most accurate way to tune the instance. Once you've eliminated or reduced as many wait states as you can, you've tuned the instance as well as you can with the resources available.
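As a starting point for that approach, a query like the following (filtering out a few common idle events; the exact idle-event list varies by version) shows where the instance has been waiting since startup:

```sql
-- Top waits since instance startup, ignoring some common idle events
SELECT event, total_waits, time_waited
FROM   v$system_event
WHERE  event NOT IN ('SQL*Net message from client',
                     'pmon timer', 'smon timer', 'rdbms ipc message')
ORDER  BY time_waited DESC;
```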


Cheers

Sir,

I have found your suggestions rightly pointing to solutions for the same problem that I have. Sir, I should be happy if you could come back on the term 'bouncing your instance clears the buffer cache' - could you explain this term more clearly?

It means that all data blocks in the data buffer cache will be flushed, since RAM is volatile.

After the database is mounted and opened, the data buffer cache has to "warm up" - read blocks from the disks, fill the MRU and LRU lists, and so on. That takes time, and in the beginning your buffer cache hit ratio will be low, as you are making more reads and getting fewer hits.

Originally posted by clio_usa It means that all data blocks in the data buffer cache will be flushed, since RAM is volatile.

After the database is mounted and opened, the data buffer cache has to "warm up" - read blocks from the disks, fill the MRU and LRU lists, and so on. That takes time, and in the beginning your buffer cache hit ratio will be low, as you are making more reads and getting fewer hits.

Yep. Bouncing the instance means doing a shutdown then startup of the instance. The instance goes down then up, like a ball when you bounce it, hence the term.

You should try to reduce the number of times you bounce the instance. Thomas Kyte reckons the only time the instance should be bounced is if the server needs rebooting, or you need to change a parameter that can't be altered using the ALTER SYSTEM command. This means getting rid of cold backups.
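For example, on releases that support it (9i onwards, with an spfile in use), many parameters can be changed on the fly rather than by bouncing. The value here is only illustrative:

```sql
-- Change the shared pool without a bounce (dynamic parameter);
-- SCOPE = BOTH updates both the running instance and the spfile
ALTER SYSTEM SET shared_pool_size = 256M SCOPE = BOTH;
```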

Originally posted by TimHall Thomas Kyte reckons the only time the instance should be bounced is if the server needs rebooting, or you need to change a parameter that can't be altered using the ALTER SYSTEM command. This means get rid of cold backups.

I totally agree with that. Another situation in which you need to bounce is when the database hits a severe error.