It is where Oracle works through a SQL array. I’ve usually seen this within PL/SQL, where an array variable is passed into a procedure or package containing a set of accounts or customers of interest, and is then used with a cursor. But you might also see it as the step processing the output of a pipelined function (one that returns a set of “rows”) being cast into a table. See this example by Tom Kyte. Tom also suggests that it is called a “Pickler” fetch as the data is pickled – packed and formatted. I never knew that; I just thought someone in Oracle development was having a giggle and it was “Pickled” as it was preserved from the PL/SQL side of the SQL engine. It seems that I was a little off-target with that.
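For anyone who has not seen one, here is a minimal sketch of the kind of statement that produces a COLLECTION ITERATOR PICKLER FETCH step in a plan. All the names are mine, purely for illustration:

```sql
-- A SQL collection type (created in the SQL layer, not inside PL/SQL)
CREATE OR REPLACE TYPE num_tab_type AS TABLE OF NUMBER;
/

-- Casting a collection bind to a table in a query is what drives the
-- COLLECTION ITERATOR PICKLER FETCH step in the execution plan
SELECT cust.cust_id, cust.surname
FROM   customers cust
WHERE  cust.cust_id IN (SELECT column_value
                        FROM   TABLE(CAST(:id_array AS num_tab_type)));
```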

{My thanks to Timur (see comments) who corrected me when I said it was a PL/SQL array. It is not, it is a SQL object}.

The CBO is not very good at looking “inside” such arrays to determine the cardinality of that step. This can cause performance issues.

Because using them involves a few steps and potentially requires grants, it is possible for them to be “temporarily removed” during testing, and so their impact is not seen.

They can leak memory. I think.

I’m now going to cover each of those points in turn. If you just wanted to know what a pickler fetch is and I’ve answered that for you, I suggest you go back to whatever you were doing before you started reading this :-)

By “not very good at looking inside pickler fetches” I mean that the CBO either bases its estimate on the first array it sees (V11 up) or utterly fails to identify how many records are inside the SQL array (V10 down), depending on the Oracle version. From an Oracle 10.2.0.4 system I’ve got the following two examples:
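A sketch of the shape of the problem on 10.2.0.4 – this is a reconstruction, not the actual output, and the table and bind names are assumed:

```sql
SELECT chi.pare_id, chi.cre_date
FROM   child_heap chi
WHERE  chi.pare_id IN (SELECT column_value FROM TABLE(:p_array));

-- The relevant part of the plan. Note the empty Rows and Cost
-- columns against the pickler fetch step:
--
-- | Id | Operation                          | Name | Rows | Cost |
-- |  5 |  COLLECTION ITERATOR PICKLER FETCH |      |      |      |
```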

Note the cost and expected rows for the Pickler Fetch step. Or rather, the lack of them.

This would manifest itself in the following way in OEM screens:

Note the cardinality of the Pickler Fetch step is 0 {sorry, I cut off the column headings}. This resulted in this step having a cost of 0 and all the subsequent steps of having an expected cardinality of one and having very low costs – between 1 and 3 {Again, not shown, sorry}.

The end result of this was that the CBO struggled to accurately cost any given plan and came up with several, usually quite terrible, plans that it swapped between as other table stats varied. The CBO was picking between very complex plans with total costs of 100 or 200 or so! Any difference was “significant”.

Please note, OPTIMIZER_DYNAMIC_SAMPLING was set to 4 on this system, and I tried hints and session settings at higher levels; they did not prompt the CBO to look into the array, on 10.2.0.4 at least.

In 11.1 things seem to be better, as is shown in the explain plan at the top of this post. The step has a cost. I have to confess, I have not tested this very much on 11 {and if anyone has, feel free to correct me/add enlightenment via comments or pointers to other sources}, but it seems to set the cardinality to the number of elements the Pickler Fetch finds in the first iteration. Unless it uses the same sort of trick Oracle 11 now uses for bind variables (detecting when the value supplied is out of range and generating a new plan), this is going to lead to the old and much ‘loved’ issue of the plan being fixed by the first execution, irrespective of how suitable that plan is.

How do you fix this issue? Well, I resort to the cardinality hint. Usually the number of records being passed into the array is not too variable and any half-decent value is better than nothing in Oracle 10 and before. As for in 11, I like stating the value rather than risking a variable ‘first seen at parsing time’ setting. It is a judgement call. The below is from 11.1 but I’ve used it extensively in 10.2, where the impact is much more significant:
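A sketch of the hint in use, with names assumed rather than taken from the real system; the 11 simply tells the CBO to expect 11 rows to come out of the collection:

```sql
-- CARDINALITY(alias n) tells the CBO to assume n rows for that row source
SELECT /*+ CARDINALITY(pt 11) */
       chi.pare_id, chi.cre_date, chi.vc_1
FROM   child_heap chi
      ,TABLE(:p_array) pt
WHERE  chi.pare_id = pt.column_value;
```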

Note the change of ROWS to 11 in step 5. In V10 this is a change from blank to 11, and in real situations do not be at all surprised if the plan changes dramatically – away from nested loop access and more to hash joins. {I should note, the cardinality hint is not documented in Oracle 10 or 11 and any use you make of it in live code is your responsibility. Sorry about that}.

What about my second point, about testing them? Well, as an example of testing Pickler processing of SQL arrays, which are defined SQL types, this is what I had to do to run my basic test:

I had to create a table type, which is the SQL array, and this was based on an object type which I had to create first {you can have table types based on standard SQL types but very often they are based on a “row” object}. After creating the stored procedure, I had to define and populate the array with a set of records which I then passed in to my procedure call. {If you want to repeat this yourself, check out my postings on IOTs to get the table creation statement for table CHILD_HEAP}.
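A sketch of the sort of thing I mean; the procedure name and object attributes are stand-ins, not my real test code:

```sql
-- The "row" object type has to exist first...
CREATE OR REPLACE TYPE pare_rec_type AS OBJECT (pare_id NUMBER);
/
-- ...then the table type (the SQL array) based on it
CREATE OR REPLACE TYPE pare_tab_type AS TABLE OF pare_rec_type;
/

-- Define and populate the array, then pass it to the procedure
DECLARE
  l_array pare_tab_type := pare_tab_type();
BEGIN
  FOR i IN 1 .. 11 LOOP
    l_array.EXTEND;
    l_array(l_array.LAST) := pare_rec_type(i);
  END LOOP;
  get_children(p_parents => l_array);  -- stand-in procedure name
END;
/
```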
Now, I created those types so I had access to them. If those types do not belong to you, you have to be granted execute on the types to reference them. Not select, execute. Some sites have a pretty strict attitude to granting execute on anything, and types seem to get forgotten when the execute privileges against packages and procedures are set up. In a recent situation I had, I was forced to do some testing work on Live and it had taken people with big sticks to get me select access on data. Execute privileges were totally refused. Calmly explaining why it was needed and how it was acceptable fell on not so much deaf as bricked-up ears.

So, for testing, the reference to an array passed in is often replaced by a little sub-select. After all, quite often what is being passed in for a pickler fetch is actually a set of records {but a subset of the rows} from a table that has been collected by a previous processing step.
As an example of such a change:
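Something like this, where the PARENT table and its predicate are my invention, standing in for whatever the previous processing step did:

```sql
-- Original: the procedure queries via the passed-in SQL array
SELECT chi.*
FROM   child_heap chi
WHERE  chi.pare_id IN (SELECT pt.column_value FROM TABLE(p_array) pt);

-- "Tested" version: the array reference quietly swapped for a sub-select
SELECT chi.*
FROM   child_heap chi
WHERE  chi.pare_id IN (SELECT par.pare_id
                       FROM   parent par
                       WHERE  par.cre_date > SYSDATE - 1);
```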

All that has changed is that we now have a little sub-select rather than the casting of the SQL array into a table. And, heck, as the developer might say, those were the records that would have been passed in; the code still works as expected and the same data comes back. No identifying which object types you need, no getting the execute permissions, no populating it yourself in the test harness; we can just swap the array back in later.

{I apologise to good developers, who even now are throwing imaginary darts at me. I know Dawn, you would certainly not do this. But I’ve seen it a couple of times. Developers have enough on their plate to go worrying about esoteric aspects of the CBO}

But the thing is, Oracle can look at that select and evaluate its cost and get an expected cardinality. The pickler fetch version has the issues I’ve just covered. I had to deal with a situation just like this last year; it does happen. In dev it was fine, in pre-live testing it was not.

What about memory leaks? Well, I had lots of issues with SQL arrays and memory leaks on Oracle 10.1 and 10.2 at one client site, and there is a documented bug in Oracle 8 with pickler fetch and memory leaks, but I have to confess a quick MetaLink search did not find any hits for Oracle 10 and 11. So maybe you should not trust me on that one. In the situation I saw, the arrays were massive, several MB at times, so if you are using SQL arrays to pass in a modest list of e.g. accounts or customers, it is not going to be an issue anyway.

You know, this was just going to be a quick post on something I’ve been meaning to mention for months, not a small essay :-).


Hi Martin
“It is where Oracle works through a PL/SQL array”
It’s a SQL array, since PL/SQL types are not visible in SQL; even with a pipelined function returning a PL/SQL type, a corresponding SQL type is created behind the scenes.

I think you are on the right lines when you say that the pickler fetch came from the serialization of the XML object. Now that Timur has prodded me in the right direction, I would hazard a guess that some casting process was changing the XML object into a SQL array and then the Pickler fetch was working on that array.
I’m glad you had memory leaks (well, I’m not but you know what I mean). I think there are several known memory leak issues with XML processing in 9 and 10.

*thinks*… Yes, I see what you mean.
I’ve been lazy in my consideration of this, haven’t I? The fact that you have to create a table type in SQL, often based on an object type you previously created, means it is in the SQL domain. In my experience I’ve only ever seen it when PL/SQL is used (the result of a stored function or the casting of an array as a table, as I show) and so I class it in my mind as part of PL/SQL.

Now, dynamic sampling was not extended to the TABLE function until something like 11.1.0.7.

So, whilst with dynamic_sampling there are certain sanity checks that kick in which give the impression that it’s not being applied and which can sometimes be circumvented using the hint dynamic_sampling_est_cdn, that wouldn’t make a difference here.

Once you are on a version where dynamic_sampling is applied, then the level you used shouldn’t make a difference with the TABLE function, because the whole collection will be sampled.

Ignoring any features like cardinality feedback or adaptive cursor sharing (which should kick in for the TABLE function / collections), as per any other shared/shareable sql plan, everything will depend on what was peeked/sampled at the initial hard parse.

But …thinking about it … for collections I would generally say that cardinality or opt_estimate is preferable to dynamic sampling.
Firstly, because in general you shouldn’t be using collections larger than a certain size and secondly, there should not be such a variation in size that an appropriate cardinality hint does not do for all.
If this is not the case, then this might be an indicator that you’re not using collections appropriately.
Not sure about that last bit.

Historical bugs notwithstanding, I’ve made extensive use of arrays etc without any memory leak issues.
As with any feature, the main danger is misuse. For example, why bring data into a collection in expensive PGA if you can achieve what you need with a single SQL statement (i.e. “select … bulk collect” followed by “forall insert” versus “insert … select”)?

Thanks Dom. It seems some of your attempts to post ended up being caught by WordPress’s spam filters. I think I’ve retrieved and cleaned them up now. Let me know if I’ve removed something you expected to see.

Dom, that is brilliant. Thank you. I’m glad to see that from later 11.1 versions dynamic sampling is extended to the table function. That could be a bit of a problem if you are passing in large arrays, as you say, but then I agree with you that passing in large arrays is probably indicative of a sub-optimal design.

If you have problems with long comments again, email me the text and I’ll add it to the body of the post with the relevant citation.