Fun with Filters March 9, 2010

{Please note – this post has been used to test changes to layout, thus the duplicated sections}

This post is about the importance of filter statements in execution plans and how they can indicate problems that are not always apparent.

I had a problem recently with some code that had gone rogue – it had been running in a few seconds, being executed every 10 minutes or so. Now it ran until either someone killed it off or it got “snapshot too old” errors. I pretty much knew it was prompted by stats being gathered on the tables in question as we had just gathered stats on the tables in that schema.

The code was something like this (it is not too important what exactly the code was). Oh, and this is on 10.2.0.3, enterprise edition with partitioning.

select accountid, max(recorded_date), count(*)
from W_LCG_OPENING_
where accountid||DATA_SRC_COUNTRY_ID in
(
SELECT accountid||DATA_SRC_COUNTRY_ID FROM W_LCG_OPENING_
minus
SELECT accountid||DATA_SRC_COUNTRY_ID FROM W_LCG_CLIENT_
)
and 1=1 -- this is pointless, it replaces a check of 'run'='run'
group by accountid, DATA_SRC_COUNTRY_ID

Those tables are quite large, several million rows in each. The code is basically scanning all of the rows in the table as there are no indexes to support the query and it is written in a way that, well, I would not write it.

Digging around in some monitoring tools I use, I confirmed that the code had swapped plans.

Version 1 comes back in 5 or 6 seconds. Version 2 does not effectively come back, which is why it records 0 rows. You can see that the Disk Gets are pretty low in version 2 (allowing that it was never left to finish) but Buffer Gets and CPU are both massively up. So much so that they exceed the point where I format the numbers with commas (as, if they are THAT big, they need fixing anyway).

I looked at the plans and, though they were different, nothing jumped out at me. So I trimmed down the code and built it up section by section until I saw something significant change. This is my general mode of tuning code if nothing initially occurs to me.
As usual, I started with any scalar or inline code, in this case that SELECT…MINUS…SELECT in the where clause.
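The exact trimmed-down statement is not preserved, but it was essentially the subquery run on its own, something like this sketch:

```sql
-- Illustrative: the SELECT...MINUS...SELECT from the where clause,
-- run standalone to time it in isolation
SELECT accountid||DATA_SRC_COUNTRY_ID FROM W_LCG_OPENING_
minus
SELECT accountid||DATA_SRC_COUNTRY_ID FROM W_LCG_CLIENT_
```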

This came back in 5.22 seconds with 4468 records. Again, I can live with that, we do not need sub-second response, we need sub-minute response.

So I now added back in the group by accountid…

select accountid,count(*)
from W_LCG_OPENING_
where accountid||DATA_SRC_COUNTRY_ID in
(
SELECT accountid||DATA_SRC_COUNTRY_ID FROM W_LCG_OPENING_
minus
SELECT accountid||DATA_SRC_COUNTRY_ID FROM W_LCG_CLIENT_
)
group by accountid

This does not come back before I have fetched and consumed a cup of tea. That is a significant change.

I was not expecting this, why would sorting under 5 thousand rows stress a database that can scan multi-million row tables in seconds? There is all that CPU being burned up and huge numbers of buffer gets, but not much in the way of disc IO, so it is probably doing something that can be done in memory a very, very large number of times.

What is the difference in plan? {Oh HELL, they look rubbish – sorry, click on the images to blow them up; I'd suggest right-click and open in a new tab. And, YES, I am working on making my blog a little more “wide and friendly”.} Good plan first, bad plan second:

{execution plan screen shots – good plan, then bad plan}

The view step has disappeared, which was an expensive step so should help not hinder.
The “SORT GROUP BY” has appeared, which it would do as we introduced the “GROUP BY” clause.
The hash join has been replaced with a filter. In a more complex plan you might miss this replacement; you would maybe notice the hash join disappearing, but a filter – well, it is just checking some simple “WHERE” clause, isn’t it?

Well, the hash join was joining the results coming from the two steps feeding into it: the PARTITION RANGE FULL and the VIEW (which is the in-memory construct of the SELECT…MINUS…SELECT statement).

Now the Filter is filtering the results from the PARTITION RANGE FULL with the results from the MINUS. At this point I’d like to highlight that the predicted cardinality and bytes coming back for the steps within the MINUS have reduced by a factor of 100 between the good and bad plans. And I’ll also admit I hid some key detail in the screen shot: I am not showing the access and filter predicates.

{screen shot of the predicate information}
The above is a screen shot showing that in the new code there are no filter predicates but there is an access predicate, for the access to the view (ie the select…minus…select). For easier reading, the full access predicate is below, rather than in the screen shot:

"$nso_col_1"=TO_CHAR("ACCOUNTID")||TO_CHAR("DATA_SRC_COUNTRY_ID")

However, for the slow code, there are no access predicates but there are filter predicates. Again, the screen shot shows that there are predicates and I show the full text below. (Screen shots are from PL/SQL Developer, btw.)

Basically, the filter operation runs the subquery once for every row from the driving query, passing the relevant value in as an extra predicate to each of the individual queries of the SELECT…MINUS…SELECT.
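Logically, that per-row re-execution looks something like this sketch (the bind variable name is illustrative, not taken from the actual plan):

```sql
-- For each row of the driving full scan, the FILTER step effectively
-- re-runs the subquery with that row's value pushed into both branches:
SELECT accountid||DATA_SRC_COUNTRY_ID FROM W_LCG_OPENING_
WHERE  accountid||DATA_SRC_COUNTRY_ID = :driving_row_value
minus
SELECT accountid||DATA_SRC_COUNTRY_ID FROM W_LCG_CLIENT_
WHERE  accountid||DATA_SRC_COUNTRY_ID = :driving_row_value
```

Repeated for thousands of driving rows over multi-million row tables, that is where all the buffer gets and CPU are going.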

Clever, but not of much help to us as the performance is awful.

I showed this to a colleague and their immediate response was “why does that not show up as a nested loop? How am I supposed to spot that the “filter” is doing so much?” There were a couple of robust Saxon words mixed in with that statement.

So, beware filters replacing joins, and get into the habit of checking out the filter and access predicates.

If you use an old-style template for showing plans {like I do, a guilty sin of mine}, or a GUI where you have not selected that the filter and access predicates be shown, you may well not get them displayed. If you use autotrace in SQL*Plus, you will, though.
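For example, two common ways of getting the Predicate Information section displayed (a sketch; the format option shown is just the default-style one):

```sql
-- In SQL*Plus, autotrace includes the access/filter predicates:
SET AUTOTRACE TRACEONLY EXPLAIN
-- ...run the statement...

-- Or pull the plan, predicates included, for the last statement
-- executed in the session from the cursor cache:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'TYPICAL'));
```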

What did I do to fix the problem? Well, even though the code originally went bad due to stats being gathered, I could not force it back to a nice plan after an hour or two playing with gathering new stats so, in this case, I used an UNNEST hint.

I’m a little unhappy about that hint, but the code needed fixing and the hint did what I wanted it to {I think it was poor of me to only hint one of the two minus statements, and I do not like resorting to hints just because I can’t work out what is “wrong” with the stats – if I work out what is wrong and still need the hint, fair enough}. However, I had to be pragmatic and get the code fixed and working in Live, so it has gone in with a hint.
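I have not reproduced the production statement exactly, but the shape of the fix was along these lines (a sketch – and, as admitted above, only one of the two branches got the hint):

```sql
-- UNNEST asks the CBO to merge the subquery into the main query
-- body rather than FILTER it once per driving row
select accountid, max(recorded_date), count(*)
from   W_LCG_OPENING_
where  accountid||DATA_SRC_COUNTRY_ID in
(
  SELECT /*+ UNNEST */ accountid||DATA_SRC_COUNTRY_ID FROM W_LCG_OPENING_
  minus
  SELECT accountid||DATA_SRC_COUNTRY_ID FROM W_LCG_CLIENT_
)
group by accountid, DATA_SRC_COUNTRY_ID
```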


Nice example of one of those conundrums in explain plans where the plan that actually works quickly has unrealistic cost/cardinality values (unless you have a machine that really can process a hundred million rows in a few seconds).

Must say I don’t like the concatenation in the minus.
What happens with a
WHERE (col_a,col_b) IN
(SELECT col_a, col_b FROM …
MINUS
SELECT col_a, col_b FROM …)

and
WHERE (col_a,col_b) IN
(SELECT col_a, col_b FROM tab_a
WHERE NOT EXISTS
(SELECT 1 from tab_b
where tab_a.col_a = tab_b.col_a
and tab_a.col_b = tab_b.col_b)
)

I SAID not to worry about the SQL too much! *Sigh*, you spotted a big potential hole in the root code which I tried to skate over. The code is checking for “numberA||numberB” matching the same combination from another record. Well, “12345||67” will match “123456||7”. I checked and the “minus” code finds 62 records. When I use my own code to better distinguish between the two columns, I find 63 records. Oh Dear. But that makes no impact on the topic of the blog post so I will ignore reality for the sake of a good story… (please do not ask me what I am going to do about the fact I know the production code is fundamentally flawed, it gets complex…)
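For what it is worth, the simplest guard against that collision is a separator character that cannot appear in either column – something like this sketch (the '~' is an arbitrary choice):

```sql
-- With a delimiter, '12345~67' no longer collides with '123456~7'
SELECT accountid||'~'||DATA_SRC_COUNTRY_ID FROM W_LCG_OPENING_
minus
SELECT accountid||'~'||DATA_SRC_COUNTRY_ID FROM W_LCG_CLIENT_
```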

You also pick up another interesting aspect of the example. The execution plans are getting the cardinality of the steps of the code utterly wrong in almost all real situations. I tried to get the stats on the tables to better reflect the true state, as I knew there was correlation going on, but even using dynamic sampling at level 6 did not allow Oracle to get it “right”, so I had to go the hint route. When it comes down to it, the CBO is getting to be a complex, smart, but utterly dumb piece of code, trying for generic solutions to as many real-world situations as it can. It can’t. And, if it CAN, I suspect it would be too complex for any of us humans to second-guess and thus tune appropriately. I’d be out of a job then.

Bottom line is, for some code, you just need to look at it, play with it, and push it into an execution plan that works for your system consistently enough to support the business need.

Sorry 🙂
I like Wolfgang Breitling’s cardinality-based approach to bad plans. If (and it is a big IF) you can get the cardinality right, generally you get the right plan.
Here it looks like, firstly, it doesn’t think the minus will exclude anything. You are probably right about that being a correlation mistake.
The second mistake is, with the hash join plan, it is coming up with a cardinality of billions, which is why the SORT GROUP BY is being costed out of contention. And to me that looks like an Oracle bug since, as all the data being returned comes from one table, it shouldn’t even think of having more rows than are in that table.

Final thought, before I shut up about the SQL, is that whole subselect with the MINUS looks like another attempt to ‘deal’ with the optimizer. It could be a simple NOT EXISTS on W_LCG_CLIENT_. And hints are a lot better way to respond than making convoluted SQL. At least with a hint, anyone coming along later will immediately think “Performance/optimizer issue” rather than “what was the business logic behind all these weird joins”

I can’t resist responding, even though I don’t WANT to talk about the logical sense of this particular SQL statement or how to get the cardinalities correct – I fixed the issue of response time as requested by my customer and I was able to demonstrate for my blog what I wanted to say about Filters and Filter Predicates with this example. Everyone is happy. Spending more time on it is not justifiable…

But….

I like your (well, Wolfgang’s) approach of “if you can get the cardinality right, the CBO will almost certainly get it right”. In this case, the cardinality is utterly woeful and I am sure it is the core of the problem. The result of the minus is around 60 to 70. I did think of converting the select…minus…select into an inline query in the FROM clause and giving it a cardinality hint (I like cardinality hints as (a) they self-document what you are forcing on the CBO and it is easy for whoever comes after me to verify if it is now duff advice and (b) it still lets the CBO decide on access paths and the like). But that was more of a change than I could get away with without prompting a full code review and change control.
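That rewrite would have looked something like this sketch (the view alias and the figure of 65 are illustrative, and CARDINALITY is an undocumented hint, which is another reason for caution):

```sql
-- Tell the CBO roughly how many rows the inline view returns;
-- as MINUS returns a distinct set, the join is equivalent to the IN
select /*+ CARDINALITY(v 65) */
       o.accountid, max(o.recorded_date), count(*)
from   W_LCG_OPENING_ o
     , ( SELECT accountid||DATA_SRC_COUNTRY_ID AS acct_key
         FROM   W_LCG_OPENING_
         minus
         SELECT accountid||DATA_SRC_COUNTRY_ID
         FROM   W_LCG_CLIENT_ ) v
where  o.accountid||o.DATA_SRC_COUNTRY_ID = v.acct_key
group by o.accountid, o.DATA_SRC_COUNTRY_ID
```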

As for replacing the MINUS with a “not exists”: as nearly all records are excluded and there is no current indexing to help the code {one of the tables is partitioned and adding a partition index would lead to the change/code review step which, due to a deadline about to hit, is not an option}, I am of the opinion the MINUS is a pretty efficient step in this case.

I suspect I am approaching the Compulsive Tuning Disorder event horizon on this, but I also suspect I am going to find myself sitting with this piece of code one evening soon, as I won’t be able to let it lie, especially as I’ve been emailed by someone else who seems keen to dig into why the figures in the cost/cardinality estimate are so wrong.

Pragmatism and Business Sense says “fix it, move on”. The desire to know says “work it out”.

You said new stats looked like the main cause (which, most of the time, solves 90% of the problems I face, depending on the way I gather them), and you said that you did not go through playing with stats because of the time issue.

My questions are for your diagnosis:
Did you check the stats at all, apart from knowing that they were newly gathered?
Did you think about gathering a 10053 trace to see why it is going for the filter predicate?
Did you check all_tab_modifications, in case your data had changed again with a relatively big change, but not enough to reach the 10% threshold for gathering stats again?

Your approach solved the problem but is also dirty (in my opinion all hints are kind of dirty as long as they are not due to a bug). I wonder, if you had the time, what would you do instead of going for the quick, working but dirty solution?

By the way, I liked the post, but I find it a bit misleading for a guy who is here to find a solution. Maybe there could be other steps, which is why I asked the questions 🙂

And the painful question is the last one:

Why does the cardinality hint need change control but the current hint does not, or needs less?

You raise several issues and I have but a few minutes before the start of a hectic week to answer!

– Yes, I did check the stats; I have a script called “tab_sci_own.sql” (it can be found on my website) that pulls out the most significant stats. They had been collected within the last couple of days, but a fresh collection did not solve my issue.
– I can’t run traces on this production system as I do not have the access. Otherwise, I would have done.
– I did not check dba_tab_modifications as such, but I did check the stats collection history for the table (to see how often it was getting stats gathered, and thus its recent level of churn).
– If I have time, I try to identify why the stats and sql statement result in the cardinality {or whatever the problem is} that they do and, once identified, whether I can alter the stats gathering to get a better plan, or alter the code to add the “obvious” relationship that the CBO does not see (like data correlation). If I have access and the time, I run the trace of course.
– Cardinality hint needing less change control? I must have mentioned that in passing but can’t see it in the text in a quick scan through. Bottom line though, hints need LESS change control as they do not alter the data returned (ignoring the odd exception, OK?). They still need some control of course.

Email me about the cardinality thing later in the week Coskan and I’ll come back to it.

For the cardinality hint, I think I got that impression from your reply to Gary’s second comment with the words:

“I did think of converting the select…minus…select into an inline query in the FROM clause and giving it a cardinality hint (I like cardinality hints as (a) they self-document what you are forcing on the CBO and it is easy for whoever comes after me to verify if it is now duff advice and (b) it still lets the CBO decide on access paths and the like). But that was more of a change than I could get away with without prompting a full code review and change control.”

but you also mentioned the change in the code, so I think I got you wrong. I was thinking that only a cardinality hint, instead of the current hint, might have solved the problem without an extra change-control need.

Ahh, yes, you are right, I referred to the cardinality and testing effort in the last comment I made.

And as you say, I was meaning that changing the structure of the code and adding a cardinality hint would involve proper regression testing. Not because of the hint but because of the code change.

As hints {nearly always} do not change the data that comes back from a SQL statement {though the order could vary if you do not have an ORDER BY clause} then the functional side of the code does not need re-testing – so it is often easier to get the change into the live system. Code changes always need re-testing. They might not always get retested, but they need retesting 🙂