Re: Increasing row retrieving speed via net8

From: Tanel Poder <tanel@xxxxxxxxxxxxxx>

To: grzegorzof@xxxxxxxxxx

Date: Thu, 12 Apr 2012 06:18:15 +0300

Hi Greg,
Your measured data shows where the session response time went:
---------------------------------------------------------------------------------
 Active% | SQL_ID        | EVENT                        | WAIT_CLASS
---------------------------------------------------------------------------------
     30% | 89km4qj1thh13 | SQL*Net more data to client  | Network
      3% | 89km4qj1thh13 | ON CPU                       | ON CPU
The above shows that 30% of the time is spent waiting on "SQL*Net more data to
client" waits - this can be "tuned" down somewhat by increasing the OS TCP
send buffer size, which would allow more data to be in flight on this
TCP connection.
Only 3% is spent on CPU, so there's not much to do there :)
But what about the remaining 67% of the time - this was idle time for this
session, in other words the "SQL*Net message from client" wait
event. This event can be "tuned" down by reducing the network roundtrips
you do (so that less time is spent waiting for the next application
fetch command):
---------------------------------------------------------------------------------------
  SID, USERNAME , TYPE, STATISTIC                         , DELTA, HDELTA, HDELTA/SEC
---------------------------------------------------------------------------------------
  149, GG       , STAT, SQL*Net roundtrips to/from client ,   726,    726,      145.2
So, you were doing 145 network roundtrips per second; each fetch (except
the first one), for example, is a separate SQL*Net roundtrip.
How to reduce SQL*Net roundtrips due to fetches - well, fetch less :)
Either reduce the number of rows returned or increase the arraysize
even further (note that larger fetch sizes require more PGA memory, so you
don't want to set the arraysize to 5000 for all your 1000 connections - but
only for those connections that move a lot of data).
I'd do these things in this order:
1) Increase the arraysize for the connections/apps that move a lot of data
to the max (or until you see that the session idle time no longer drops
much while the fetching is going on)
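Since Greg's client is JDBC, the arraysize corresponds to the statement fetch size there. A minimal sketch of the idea (the connection URL, credentials and table name are hypothetical placeholders, not from the original thread) plus a rough roundtrip estimate:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class FetchSizeDemo {

    // Rough estimate: one SQL*Net roundtrip per fetch call,
    // each fetch returning up to fetchSize rows.
    static long estimatedRoundtrips(long totalRows, int fetchSize) {
        return (totalRows + fetchSize - 1) / fetchSize;
    }

    // Fetch all rows of a (hypothetical) table T with a large array size.
    static void fetchAll(String jdbcUrl, String user, String pass) throws Exception {
        try (Connection conn = DriverManager.getConnection(jdbcUrl, user, pass);
             Statement stmt = conn.createStatement()) {
            // JDBC default fetch size for Oracle is 10; raise it only for
            // data-heavy sessions, since each open cursor buffers this many rows.
            stmt.setFetchSize(5000);
            try (ResultSet rs = stmt.executeQuery("select * from t")) {
                while (rs.next()) {
                    // process row...
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // 5M-row table: default fetch size vs. a large one.
        System.out.println(estimatedRoundtrips(5_000_000L, 10));   // 500000
        System.out.println(estimatedRoundtrips(5_000_000L, 5000)); // 1000

        // Only attempt a real connection if a URL was supplied.
        if (args.length == 3) {
            fetchAll(args[0], args[1], args[2]);
        }
    }
}
```

With the default fetch size of 10, the 5M-row scan would need on the order of 500,000 roundtrips; at 5000 rows per fetch it drops to about 1000.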
If that's not enough then you'll also need to optimize network throughput
(that 30% of time):
2) Read: http://en.wikipedia.org/wiki/Bandwidth-delay_product
3) Read: http://www.psc.edu/networking/projects/tcptune/
4) Configure your OS (max) TCP send/receive buffer sizes to at least match
the data transfer throughput you want to achieve (given your network link
capability and roundtrip time between the endpoints)
5) After that, configure the Oracle SQL*Net SDU size to the max (note that a
common mistake is to spend a lot of time tweaking Oracle SDU sizes while the
OS TCP send/receive buffer sizes are still too low for the required
throughput at the current roundtrip latency). So you should understand the
stuff in 2/3/4 before tweaking #5.
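For step 5, the SDU can be set globally in sqlnet.ora or per connect descriptor. A sketch of what that could look like (the service names and host are placeholders; the maximum SDU value is version dependent, so check the Net Services reference for your release):

```
# sqlnet.ora (both client and server side)
DEFAULT_SDU_SIZE = 65535

# or per connection in tnsnames.ora:
BIGDATA =
  (DESCRIPTION =
    (SDU = 65535)
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )
```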
By the way - it's good to see that others are also using Snapper for
measuring network throughput stuff ... Back in the days when I wrote
Snapper v1, it was a major help when diagnosing dblink throughput issues
over WANs...
--
*Tanel Poder*
Enkitec Europe
http://www.enkitec.com/
Advanced Oracle Troubleshooting v2.0 Seminars in May/June 2012!
http://blog.tanelpoder.com/seminar/
On Wed, Apr 11, 2012 at 9:30 PM, GG <grzegorzof@xxxxxxxxxx> wrote:
> Hi,
> I'm doing large table processing (row fetching) via Net8, remote
> client using the JDBC Oracle client.
> Did some testing using snapper and two VMs with Oracle 11.2.0.2.
> Got table T with 5M rows, avg row length is 104 (it's a CTAS from DBA_SOURCE,
> and I'm doing a simple select * from t).
> First I used BEQ (local connection :)) protocol as a reference :
>
> SQL> @snapper all 5 1 145
> Sampling SID 145 with interval 5 seconds, taking 1 snapshots...
> setting stats to all due to option = all
>
--
http://www.freelists.org/webpage/oracle-l