I have often faced a tuning dilemma when trying to tune a statement where data cached in Oracle's memory skews the results. For example, say you want to assess the advantage of using a text index by comparing two versions of the same query.
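The original queries were not included in the post, but the comparison would look something like the following (table and column names are purely illustrative):

```sql
-- Version 1: uses an Oracle Text index via CONTAINS
SELECT COUNT(*) FROM documents WHERE CONTAINS(doc_text, 'oracle') > 0;

-- Version 2: plain LIKE predicate, forcing a full table scan
SELECT COUNT(*) FROM documents WHERE doc_text LIKE '%oracle%';
```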

Now, as you are tuning and running these SQLs again and again, Oracle is surely caching all the data in memory, and both queries end up running in almost identical time.

We could try changing the search parameter, but with a large chunk of the table data probably already in memory, those runs are also quite fast.

Is there a way to get rid of that side effect of a large chunk of the table being loaded into memory? We have sometimes gone as far as stopping and starting the Oracle instance! Obviously not a particularly optimal method.
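For what it's worth, Oracle does provide commands to empty the caches without bouncing the instance (10g and later for the buffer cache; both require the ALTER SYSTEM privilege and should never be run on a shared or production database):

```sql
-- Empties the database buffer cache so the next run does physical reads
ALTER SYSTEM FLUSH BUFFER_CACHE;

-- Also discards parsed cursors and execution plans from the shared pool
ALTER SYSTEM FLUSH SHARED_POOL;
```

As the replies below point out, though, flushing only removes one layer of caching and the responses argue for a different approach entirely.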

I am sure there is a better way (some hint, some way to clear the memory, etc.). Can you please share how to tackle this issue?

How does the elapsed time change, as a percentage, between the first run of any SQL and the second and subsequent runs?
This would be a direct measure of the impact of rows already being cached.
How consistent are the elapsed times from runs 3 through 10 of the same SQL?
Please post actual elapsed time results.
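In SQL*Plus the elapsed times can be captured with the timing setting (the query below is the same illustrative one, tagged with a comment so it is easy to find later):

```sql
SET TIMING ON

-- Run this ten times and record the "Elapsed:" line printed after each execution
SELECT /* tuning_test */ COUNT(*)
FROM   documents
WHERE  doc_text LIKE '%oracle%';
```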

Keep in mind, if this is run more than a few times in production, it'll be cached there too. Secondly, depending on which blocks you hit, even a first run may be partly cached. Thirdly (sorry), there's a very reasonable chance your storage has a cache of its own, and flushing the Oracle cache won't clear that.

What I'm inclined to do in situations like this, assuming the query runs for a decent length of time (i.e. a few seconds or more), is examine the autotrace statistics, because even if the blocks are cached you can see the number of gets/sorts/etc. rise and fall.
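In SQL*Plus that looks like this (TRACEONLY suppresses the query output so only the plan and statistics are shown; the query itself is illustrative):

```sql
SET AUTOTRACE TRACEONLY STATISTICS

-- The reported "consistent gets", "physical reads", and sort counts
-- remain meaningful even when every block is already in the buffer cache
SELECT COUNT(*)
FROM   documents
WHERE  doc_text LIKE '%oracle%';
```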

You'll never wholly eliminate caching, and if you did, the results wouldn't be representative of the real world. But you can make a reasonable estimate as to whether a change is an improvement, irrespective of caching, by working out the amount of "work" the database needs to do for the different access paths.

I am often reminded of the first advice I got from the first Oracle employee I worked with: a tuning specialist who spent a month on site tuning a new Oracle system. His advice: watch LOGICAL I/O. If it goes down, then nine times out of ten you have a better query, because it is doing less work because it is looking at less stuff. This was back in 1985.

Like any advice it is situational, there are exceptions, and the performance world is not always black and white, but the reality is this advice is still right on the money. A query that does 100 logical I/Os to do workload X is doing one-fifth the work of a query that does 500 logical I/Os for the same workload X. There are several ways to get the logical I/O for a query. Read about it.
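One of those ways, beyond the "consistent gets" line in autotrace, is to query V$SQL directly (the filter below assumes the statement was tagged with a distinctive comment; buffer_gets is cumulative across all executions of the cursor):

```sql
-- Logical I/O per execution for a tagged statement
SELECT sql_id,
       executions,
       buffer_gets,
       ROUND(buffer_gets / NULLIF(executions, 0)) AS gets_per_exec
FROM   v$sql
WHERE  sql_text LIKE 'SELECT /* tuning_test */%';
```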

Notice that you do not need to flush the buffer cache in order to measure logical I/O.

The point is that physical I/O is a real cost, but not always the best indicator of a query's performance, because it varies for the same query and the same data due to hardware and software caching, as others have noted above. Logical I/O is a far more reliable indicator of the overall work done.

I will leave it to others to point out the flaws in using logical I/O as the primary indicator of a query's performance profile.

I can see you were going for brevity and not attempting to list every single exception to the rule, but I think you can probably add one more piece of information without losing brevity:

SQLs with similar table access methods are comparable using Logical I/O, but if one query performs a lot of FULL SCANs and FAST FULL SCANs while the other performs a lot of INDEX RANGE SCANs and TABLE ACCESS BY INDEX ROWID operations, then Logical I/O will be a bad indicator of performance.

Ross Leishman

Indeed, your commentary is very important, given the explosion of Exadata I see happening in the next few years. FULL TABLE SCANs and HASH JOINs will become the predominant method of working with the data, and thus logical I/O may fall out of favor as a good indicator of workload.