Nothing tells me I am anywhere close to the maximum limit (in other words, no pattern where CPU utilization runs beyond 80%, and no huge pile-up on the run queue either).

But there are occasions where some operations take longer, and being a simple DBA (scope strictly limited to the database) I cannot seem to find a justification.

Perhaps I can take your lead on this – “A task that avoids stalling a thread can effectively take over the core.” – and see if I can simulate some operation within a db that, on the system side, would run continuously on a thread without letting the core be shared with anyone else.

This isn’t my specialist subject either – when I’m stuck on CPUs etc. I talk to people like Kevin Closson and James Morle to help me out.

Basic sketch: a single chip – plugged into a single socket – can contain multiple CPU Cores; a single CPU Core may be able to operate in a multi-threaded fashion (2 threads being fairly common nowadays) – and Oracle typically reports each possible thread as a CPU. So my laptop (for example) is reported by Oracle as 1 socket, 4 Cores and 8 CPUs (because it runs two threads per core).
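That sketch is just multiplication, and a few lines make the laptop example concrete (a hypothetical illustration of the arithmetic only, not anything Oracle actually runs – the variable names are invented):

```python
# Oracle typically reports each hardware thread as a "CPU".
# Numbers below match the laptop example: 1 socket, 4 cores, 2 threads/core.
sockets = 1
cores_per_socket = 4
threads_per_core = 2        # SMT / hyper-threading

cores = sockets * cores_per_socket
cpus = cores * threads_per_core      # what shows up as the CPU count

print(f"{sockets} socket, {cores} cores, {cpus} CPUs")
# -> 1 socket, 4 cores, 8 CPUs
```

The point of spelling it out is that the reported “CPU” figure is the thread count, which is twice (or, on the Solaris box discussed below, eight times) the number of cores actually doing work.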

A multi-threaded Core is, however, only capable of doing one thing at a time. Just as your operating system can appear to be doing many things at once because of time-slicing and scheduling, your core is switching between threads to give the impression of multiple threads running simultaneously.

In the case of the core, though, the switches only take place (I believe, though I may be out of date) when a thread “stalls” – by analogy, think of an Oracle session running for a while, then having to wait for a “db file sequential read”: the session “stalls”, and the O/S switches to running another session. In the case of the thread stalling, an equivalent task would be loading a page of main memory into the processor cache – the thread stalls, so a different thread starts. A task that avoids stalling a thread can effectively take over the core.
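The switch-on-stall behaviour can be modelled with a toy scheduler. This is purely illustrative – the thread names and workloads are invented, and real SMT hardware is far subtler – but it shows why a thread that never stalls can monopolise a core:

```python
# Toy model of a core that only switches threads when the running
# thread stalls (e.g. on a cache miss). Work steps: 'c' = compute
# one tick and keep the core, 's' = stall and yield the core.
from collections import deque

def run_core(threads, ticks):
    """threads: dict name -> list of 'c'/'s' steps, repeated cyclically.
    Returns the number of compute ticks each thread received."""
    queue = deque(threads)                  # runnable threads, round-robin
    pos = {name: 0 for name in threads}
    ticks_used = {name: 0 for name in threads}
    for _ in range(ticks):
        name = queue[0]
        step = threads[name][pos[name] % len(threads[name])]
        pos[name] += 1
        if step == 'c':
            ticks_used[name] += 1           # thread keeps the core
        else:
            queue.rotate(-1)                # stall: switch to the next thread
    return ticks_used

# Two threads that both stall every other step share the core evenly;
# a never-stalling "spinner" starves the thread that stalls.
fair = run_core({'r1': ['c', 's'], 'r2': ['c', 's']}, ticks=100)
hog  = run_core({'spinner': ['c'], 'reader': ['c', 's']}, ticks=100)
print(fair)  # -> {'r1': 25, 'r2': 25}
print(hog)   # -> {'spinner': 100, 'reader': 0}
```

In the second run the “spinner” never hits a stall step, so the core never has a reason to switch – which is exactly the “take over the core” effect described above.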

As far as your shared hardware is concerned – if by “load” you mean CPU consumption, then it is perfectly feasible that one of your databases could be using a far larger percentage of the available CORE time than the CPU statistics suggest, leading to the behaviour you’re seeing.
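A back-of-envelope sketch, using the 16-core / 128-thread numbers from the comment quoted below, shows how large the gap between the reported figure and the consumed core time can be (the arithmetic only – the variable names are mine):

```python
# One never-stalling query on a box that reports each of its
# 8 lightweight threads per core as a separate "CPU".
cores = 16
threads_per_core = 8
reported_cpus = cores * threads_per_core        # 128 "CPUs"

busy_threads = 1    # one query spinning on CPU, never stalling

# What the O/S-level CPU statistic shows:
reported_utilisation = busy_threads / reported_cpus   # 1/128

# What the query may actually consume if it monopolises its core:
core_consumption = busy_threads / cores               # 1/16

print(f"reported: {reported_utilisation:.2%}, "
      f"actual core time: {core_consumption:.2%}")
# -> reported: 0.78%, actual core time: 6.25%
```

So a machine showing well under 1% utilisation per busy thread can still have a single session consuming a sixteenth of its real compute capacity.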

By: Oraboy
https://jonathanlewis.wordpress.com/2013/02/11/optimisation-2/#comment-53583
Sat, 16 Feb 2013 17:59:35 +0000

>> On the other hand this machine is one of those Solaris boxes that likes to pretend that it’s got 128 CPUs when really it’s only 16 cores with 8 lightweight threads per core – you don’t want anyone running a query that uses 2 solid CPU minutes on one of those boxes because it’s taking out 1/16th of your CPU availability, while reporting a load of 1/128 of your CPUs.

Jonathan –
Apologies for asking a naive system-side question – I’m not a systems guy and don’t understand the core vs. thread difference.
Can you throw some light on the above statement?

(Dealing with a similar situation: more than one DB running on a Solaris box. The overall load is always reported as low, but from experience I have noticed that performance on one database varies depending on what is run on the other database. I understand it’s all using the same physical resources, but I am trying to get some pointers – it *is* a concern even if the system is not running at max resource utilization.)

The problem with the “Old Believers” is that old beliefs don’t get unpublished on the internet, and there are plenty of people who “tune by Google” who find them, and then republish them just to have something in their blogs.

By: David Aldridge
https://jonathanlewis.wordpress.com/2013/02/11/optimisation-2/#comment-53527
Thu, 14 Feb 2013 08:50:26 +0000

Ah, cold backup is still with us in some circles, and it’s extraordinary to see the lack of competence that is still “out there”. A couple of years ago I saw “hot backups” in NOARCHIVELOG mode in a system about to go into production for a critical client, with a decision taken not to “take the risk” of moving to ARCHIVELOG mode before the switchover.

In fact every sort of monstrosity and perversion that Oracle experts have been railing against for the past 15 years still has its set of “Old Believers”, sad to say.

“Cold backup” – long time since I’ve done one of those ;)
Still, no matter how bizarre an experiment I come up with, someone always comes up with an example of why it might make sense occasionally.

No great significance in slipping in the ANSI – it just happened to be the case, and ANSI SQL does make it that little bit harder to generate the necessary SQL Baseline.

By: Yuri
https://jonathanlewis.wordpress.com/2013/02/11/optimisation-2/#comment-53495
Tue, 12 Feb 2013 12:00:34 +0000

Or, you know, the DBA team does the monitoring and interrupts the query in time to prevent them from observing you :)
By: Peter Shankey
https://jonathanlewis.wordpress.com/2013/02/11/optimisation-2/#comment-53494
Tue, 12 Feb 2013 11:53:01 +0000

It’s not that complex… The typical person treats computers like doors: keep banging at the door and sooner or later it will open :)