What you said is just the opposite of what BlackSwan said. According to him, buffer_gets and the buffer cache are the same thing.

BlackSwan verified my comment:

>so if my buffer_gets is high, my query is using data that is already residing in SGA (In-memory data), am I right?
above is correct.

And now you are saying that buffer_gets can come from memory or disk? That means buffer_gets and the buffer cache are different things, because I am sure about the buffer cache: it is only primary-memory (RAM) access. So if my application has high buffer_gets, what should I look into? Logical and physical reads? And what should I determine after comparing logical vs. physical reads?

Please confirm once. Also, whatever problem is here is on the administration side only, as my queries are properly fine-tuned.

EXECUTIONS indicates the number of times the SQL statement has been executed since it entered the shared SQL area. BUFFER_GETS indicates the collective number of logical reads issued by all executions of the statement. DISK_READS shows the collective number of physical reads issued by all executions of the statement. (A logical read is a read that resulted in a cache hit or a physical disk read. A physical read is a read that resulted in a physical disk read.)
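From the definitions above, the number of reads satisfied purely from the buffer cache can be derived as BUFFER_GETS − DISK_READS. A minimal sketch against V$SQL (this is only an illustration of the arithmetic, not a tuning script; the ORDER BY is an arbitrary choice to surface the heaviest statements first):

```sql
-- Sketch: for each statement cached in the shared SQL area,
-- split its reads into disk reads and cache-satisfied reads.
SELECT sql_id,
       executions,
       buffer_gets,                           -- total logical reads
       disk_reads,                            -- reads that went to disk
       buffer_gets - disk_reads AS cache_only_gets
FROM   v$sql
ORDER  BY buffer_gets DESC;
```

Note these are cumulative counters since the statement entered the shared SQL area, so divide by EXECUTIONS if you want per-execution figures.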

So how can I find out how many blocks were read from cache, and how many from disk?

I have read a lot and am very confused now.

Also, Michel, can you suggest: if my query is fine-tuned (I have verified index usage and the execution plan), what other things should I inspect? Can you please provide me with queries to go ahead?

There are 3 processes that have been running simultaneously for the last 3 days, and day by day their speed is decreasing. I asked the DBAs and they told me that the SGA is not enough; they didn't tell me anything about the buffer cache. How can I give them my recommendation?

A good place to start is the buffer cache hit ratio. I modify it to see physical reads per hour and logical gigabytes per minute. On databases with high physical reads per hour, I work on caching and better indexing of the objects with high physical reads, as in the query provided earlier; on databases with high logical reads, I do the same for the objects with high logical reads. In the buffer cache hit ratio you can see a database with good caching and high logical reads versus a database with very high physical reads.
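The classic formula behind the ratio mentioned above is 1 − physical reads / (db block gets + consistent gets). A sketch, assuming the standard V$SYSSTAT statistic names (the exact value is instance-wide and cumulative since startup, so treat it as a rough indicator only):

```sql
-- Sketch: instance-wide buffer cache hit ratio from V$SYSSTAT.
-- 'db block gets' + 'consistent gets' = total logical reads.
SELECT 1 - phy.value / (db.value + con.value) AS buffer_cache_hit_ratio
FROM   v$sysstat phy,
       v$sysstat db,
       v$sysstat con
WHERE  phy.name = 'physical reads'
AND    db.name  = 'db block gets'
AND    con.name = 'consistent gets';
```

Keep in mind that a high ratio does not prove the system is healthy; it only says what fraction of reads avoided disk.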

John Watson — Senior Member — Messages: 5095, Registered: January 2010, Location: Global Village

Manu, you keep repeating that your SQL is tuned. If it is running too slowly, then it isn't tuned.
I think you may be confusing a symptom of a problem with a cause of a problem. For example, those figures for logical and physical I/O are not the cause of the problem: they are a symptom. The cause of the problem would NOT be all the I/O, the cause would be the NEED for I/O. You need to tune the statement and the segment structures so that it doesn't need to read as many blocks, and that the blocks it does need can be accessed efficiently.
Remember that no end user ever telephoned the helpdesk to complain that the buffer cache hit ratio is too low.
Someone here might be able to help, but since you have not posted the query or the AWR report or AWR SQL report or the DDL of the table(s) and indexes, that seems unlikely.

Are you able to see anything that I can rectify in the above execution plan? In my opinion it is optimal, as I have tested it using the TOAD optimizer (many plans + 5 executions per plan) and arrived at the plan with the minimum cost and execution time. A large table is used 6 times: once in a hash join with a range scan, and 5 times in a nested loop join with a unique scan.

As far as I know, tuning is done at different levels (please see attached).

So now, in my opinion, everything is fine at the SQL tuning level; I should do something with instance and server tuning.

What is the point of hinting dynamic sampling (especially with a spelling mistake) and then trying to force a full table scan? Which you don't actually get.
Your code is NOT tuned. You had better do a reality check on all the statistics and the actual and expected cardinalities, get rid of the hint, and think about your indexing and partitioning strategy.
Back to the beginning, I'm afraid.