Hello,
As I can see from the explain plan, you have a bottleneck at:

|* 5 | TABLE ACCESS FULL | TIBEX_QUOTE | 21056 | 349K| 1874 (1)| 00:00:23 |

as the process is doing a full table scan.
I think you also need to check for wait events, because you are doing a lot of sort operations. I also think you need to check the pga_aggregate_target setting.

Also, you need to gather statistics regularly on those objects.
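For example, a sketch of gathering statistics on the table from the plan above (the schema owner is an assumption - adjust ownname as needed):

```sql
-- Gather table and index statistics on TIBEX_QUOTE.
-- ownname => USER is an assumption; substitute the actual schema owner.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => USER,
    tabname => 'TIBEX_QUOTE',
    cascade => TRUE);  -- also gathers statistics on the table's indexes
END;
/
```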
Kind regards
Mohamed
Oracle DBA

I agree, Sybrand, but I think you can stop this behavior; that is why I asked him to do it or check it according to the database time window. You can also add comments or correct me, Sybrand. Let's share more on this issue, as I am interested in the performance tuning area.
Kind regards
Mohamed
Oracle DBA

If you look at both explain plans closely, you will see they are identical. Somehow he doesn't show everything, and he managed to change the width of his lines, so his second explain plan wraps, making it more difficult for those helping him.
As the plans are identical (the plan hash value is identical too), your suggestion doesn't hold.
What is striking is that on the second run his number of consistent gets and physical reads is way higher.
Is it possible he is not running this in a sandbox, but in a live environment?
Also, his question shows he 'thinks' he knows the solution, as he 'assumes' the instant client is the culprit.
As far as I know the instant client has the jdbc thin client only, and doesn't use oci, so this would be slower by design.
In SQL Developer you run the JDBC thin driver by default, but you can enable the thick driver with OCI; the thin driver is so slow that you want to switch to Toad immediately.
Toad is using OCI.
So if he thinks it is the driver, yes, as the driver is slower by design, he should stop using the driver.
But it is not the driver.
He needs to run both variants with proper tracing, so we can know 'what it is waiting for' the second time, and what causes the I/O. It might be recursive calls.

I believe that Sybrand's comments are on target. The differences in consistent gets and physical reads between the two executions are significant.

There is a chance that AUTOTRACE is showing the wrong execution plan. There is also the possibility of a "cold" cache during the first execution, while most of the blocks were already in the buffer cache during the second execution. Please post the output of the following script when run from both environments:

ALTER SESSION SET STATISTICS_LEVEL='TYPICAL';
SET AUTOTRACE OFF
SET SERVEROUTPUT OFF
SET TIMING OFF
SET LINESIZE 160
SET PAGESIZE 1000
select /*+ GATHER_PLAN_STATISTICS */ mod(lastinstmessagesequence, 1000000000) LastInstIDSeqNo from tibex_msgseqbyuseralias where useralias='2221';
select /*+ GATHER_PLAN_STATISTICS */ mod(lastinstmessagesequence, 1000000000) LastInstIDSeqNo from tibex_msgseqbyuseralias where useralias='2221';
SELECT
*
FROM
TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));

The execution plan displayed by the above will be the actual execution plan used by the query, including both the optimizer's estimated cardinality for each operation and the actual number of rows returned. The query is executed twice to limit the influence of a "cold" cache.

Notice the presence of :"SYS_B_0" in the output - that is a sign of system-generated bind variables resulting from setting CURSOR_SHARING to either FORCE or SIMILAR. Bind peeking takes effect during the initial hard parse (assuming 9i and above), so the SQL statement is optimized just as if a constant were still present in the SQL statement.

Why is this important, when the same plan hash value of 1955857846 is displayed for both executions? Notice that for the client-side execution the child number is listed as 0, and for the server-side execution the child number is listed as 1 - this indicates that there was a hard parse when the SQL statement was executed on the server side. But also notice that the SQL_ID changed between the two executions - that should not happen if you used the same script on both the client and server sides (unless perhaps the default character set differs between the two sides). This leads me to believe that the client side and server side may be using entirely different execution plans when you noticed the performance differences. You can pull the existing execution plans for all child cursors for these SQL statements from the library cache by replacing the first NULL in the DBMS_XPLAN.DISPLAY_CURSOR call with the SQL_IDs:
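For example (the SQL_ID below is a placeholder - substitute the SQL_IDs reported in your own output):

```sql
-- Placeholder SQL_ID: replace with the value from your AUTOTRACE output.
-- A NULL child number displays all child cursors for that SQL_ID.
SELECT *
FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('abcdefghijklm', NULL, 'ALLSTATS LAST'));
```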

We might be reaching the end of potential analysis for DBMS_XPLAN - we do not know for certain what the generated SQL_ID is when the program executes client-side and server-side. What might be interesting in what you posted is the following:

SQL_ID bpxr7axhxaqvy, child number 2

Note that this is child number 2 - previously we only saw child numbers 0 and 1 in your output. The execution plan for this child number shows a value of 1 in the "A-Rows" column for the top row, while that row/column combination for child numbers 0 and 1 is 0. The "A-Rows" column is only populated when the STATISTICS_LEVEL parameter is set to ALL at the session or instance level, or when the GATHER_PLAN_STATISTICS hint is specified. In this case the hint was specified. Setting the STATISTICS_LEVEL parameter to ALL will impact the execution time, especially in 10g R2 when nested loop operations appear in the execution plan. If you used the script that I provided, the SQL statement would have been executed with the STATISTICS_LEVEL parameter set to TYPICAL, but it might be set to ALL in the instant client's session by default (this is something we will need to check).
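To check the effective setting, something like the following could be run in both the client-side and server-side sessions:

```sql
-- Shows the STATISTICS_LEVEL in effect for the current session.
SELECT value FROM v$parameter WHERE name = 'statistics_level';
```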

Quick question: if you change the script to specify SET TIMING ON, do you observe a similar time on the server and client side for the script?

If you still see performance differences between the client side and server side when 0 rows (or 1 row) are returned, consider enabling an extended SQL trace (event 10046 at level 8 or 12). Refer to this two-part blog series:
http://hoopercharles.wordpress.com/2009/12/01/10046-extended-sql-trace-interpretation/
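One common way to enable the trace at the session level (a sketch - level 8 captures wait events, level 12 also captures bind variable values):

```sql
-- Enable extended SQL trace with wait events for the current session.
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
-- ... execute the slow SQL statement here ...
ALTER SESSION SET EVENTS '10046 trace name context off';
```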

If you enable a 10046 trace, then after the problematic (slow performing) SQL statement, execute a simple SQL statement such as the following to make certain that the execution plan (STAT lines) for the previous SQL statement is written to the trace file (this may not happen if the SQL statement has been executed 3 times by the session). A simple SQL statement might look like this:

SELECT SYSDATE FROM DUAL;

In 10g the SQL_ID for the SQL statement is not written to the trace file as it is in 11g. The hash value (hv) for the SQL statement is written to the trace file, and you can use that to query V$SQL to retrieve the matching SQL_ID so that you can then call DBMS_XPLAN to see a nicely formatted version of the actual execution plan.
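For example (the hash value below is a placeholder - use the hv= value from your trace file):

```sql
-- Map the trace file's hash value (hv=...) back to a SQL_ID.
SELECT sql_id, child_number, plan_hash_value
FROM v$sql
WHERE hash_value = 1234567890;
```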

We are interested in the actual execution plan and the wait events. If you process the 10046 trace file using tkprof (do NOT use the EXPLAIN parameter), that output should contain the information we need to help you determine what is different - why the instant client is slow while the server side is fast.
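A tkprof invocation might look like this (the trace file name is a placeholder - use the file generated in your user_dump_dest directory):

```shell
# Omitting the EXPLAIN parameter so the STAT lines (the actual plans) are used.
# Sort by elapsed fetch/execute time to surface the slowest statements first.
tkprof orcl_ora_12345.trc trace_report.txt sort=fchela,exeela
```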