Although both the variable and the column are declared as “integer”, both blocks raise the error ORA-06502: PL/SQL: numeric or value error: number precision too large.
The same error occurs with “int”.
DESCRIBE shows the wrong datatype:
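As a minimal sketch of why this happens (the variable name is mine, not from the original test case): INTEGER in PL/SQL is just the constrained subtype NUMBER(38,0), so any value that needs more than 38 digits of precision fails at run time:

```sql
-- INTEGER is a constrained subtype of NUMBER (38 digits of precision),
-- so assigning a wider value fails even though NUMBER itself could hold it.
declare
  v integer;   -- effectively NUMBER(38,0)
begin
  v := 12345678901234567890123456789012345678901;  -- 41 digits
end;
/
-- ORA-06502: PL/SQL: numeric or value error: number precision too large
```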

Earlier, “Event 10032 – Dump Sort Statistics” showed information about sorts only (up to and including 11.2.0.1), but in 11.2.0.3 it also shows an “Abridged” call stack. I have not tested it on 11.2.0.2, so I can’t tell from which version this changed.
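For reference, the event can be enabled per session like this (a sketch; the dump lands in the usual session trace file):

```sql
-- enable the sort-statistics dump for the current session
alter session set events '10032 trace name context forever, level 10';

-- run something that sorts, e.g.:
select owner, object_name from dba_objects order by object_name;

-- disable it again
alter session set events '10032 trace name context off';
```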

In the following case, we do not have a distributed transaction when we just run insert /*+ append */ into local_table select * from remote_table@dblink; – we have a single-site transaction.

But v$global_transaction (though not v$transaction), another part of the documentation, and v$lock (type=’DX’) insist that it is.
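A quick way to check what the instance itself thinks (a sketch; run these inside the open transaction):

```sql
-- if the transaction is considered distributed, both of these
-- should return rows for the current transaction:
select * from v$global_transaction;
select * from v$lock where type = 'DX';
-- while v$transaction still shows it as an ordinary local transaction
```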

Distributed statement: A statement that accesses data on two or more distinct nodes/instances of a distributed database.
A remote statement accesses data on one remote node of a distributed database.

Who is right?

But that is not as interesting as my other finding: a commit after a simple “select * from table@dblink” is a “read-only” commit on the local DB, but on the remote side it becomes a “read-only” rollback! Except in cases where there was any DML in that transaction (local or remote makes no difference). But how does Oracle determine whether the transaction is read-only, and whether to execute a commit or a rollback?

Yet another interesting thing: if we issue “commit” a few times after a “select from dblink” then, as expected, the rollback is executed on the remote side only once. But when we close our session, there is one more commit on the remote side (a real read-only commit).
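One way to observe this (a sketch; the statistic names are standard in v$sysstat) is to watch the commit/rollback counters on the remote instance before and after the local commit:

```sql
-- run on the REMOTE instance, before and after the local
-- "select * from table@dblink; commit;"
select name, value
  from v$sysstat
 where name in ('user commits', 'user rollbacks');
-- after a read-only local commit, 'user rollbacks' grows on the remote side
```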

A couple of days ago someone posted a question on the forum which at first glance seemed old, boring, and beaten to death. Here it is:

There is a news feed. All news items are divided into 10 categories (politics, sport, auto, real estate, etc.).
I need to get the top 4 news items for each category, sorted by time descending, with one query.
If you sort the results, you get 4 politics items, then 4 sport items, etc.

But the task was to make it optimal, and the standard solution – the usual Top-N with row_number() – cannot be called optimal in any way, especially in the case of big tables, a relatively small number of categories, and uneven distribution or just overall low selectivity.
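For reference, this is the standard Top-N-per-category variant the text is talking about (table and column names are assumed, not taken from the original post):

```sql
-- classic row_number() Top-N per category: it reads and sorts every
-- matching row, which is what makes it non-optimal on big tables
select *
  from (select n.*,
               row_number() over (partition by category_id
                                  order by news_date desc) rn
          from news n)
 where rn <= 4
 order by category_id, news_date desc;
```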

So after getting several more or less acceptable variants, and spotting a solution using PostgreSQL (I didn’t dig deep into it: I saw recursion, min, and a predicate, and that was enough), I came up with a great variant.

Of course, everyone knows about “unwrappers”, and I guess many of you have even tried to write an unwrapper of your own (I also tried to write one after Pete Finnigan’s presentation :). But nevertheless it’s funny that Oracle in its documentation for 11.2 “loudly” states that wrap does not protect anything from viewing, and even gives a direct link to an online unwrapper 🙂

Here is a comparison of unwrap capabilities in documentations for different versions:

In 9.2 everything is safely hidden, except literals and the names of variables, tables, and columns:

String literals, number literals, and names of variables, tables, and columns remain in plain text within the wrapped file. Wrapping a procedure helps to hide the algorithm and prevent reverse-engineering, but it is not a way to hide passwords or table names that you want to be secret.

In 10.2 the data is hidden only from “most users”, but at least it makes reverse engineering difficult!

Although wrapping a compilation unit helps to hide the algorithm and makes reverse-engineering difficult, Oracle Corporation does not recommend it as a secure method for hiding passwords or table names. Obfuscating a PL/SQL unit prevents most users from examining the source code, but might not stop all attempts.

In 11.1 everything looks humble and boring:

Wrapping is not a secure method for hiding passwords or table names. Wrapping a PL/SQL unit prevents most users from examining the source code, but might not stop all of them.

And Oracle invalidates its results very frequently without any changes to the table or function. I found only two cases when Oracle invalidates result_cache results without any changes to the table:
1. “select for update” from this table, followed by a commit (strictly speaking, select for update is DML and it changes blocks, so that one is fine);
2. deleting rows from a parent table that has no child records, if there is an unindexed foreign key with “on delete cascade”.
I tested it on 11.2.0.1 and 11.2.0.3, on Solaris x64 and Windows. Test cases for this I will show below.
But neither of them can be the cause of our situation: we have no unindexed FKs, and even if I lock all rows with “select for update”, the invalidation still does not stop.
In what other cases does this happen? Am I right that Oracle does not track actual changes, but rather lock acquisitions and “commits”?
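A minimal sketch of case 1 (object names are mine, not the author’s test cases): a cached result is invalidated by select-for-update plus commit even though no data changes:

```sql
-- select-for-update + commit invalidates a cached result
create table t as select 1 id from dual;

create or replace function f_cached return number result_cache is
  v number;
begin
  select count(*) into v from t;
  return v;
end;
/

select f_cached from dual;  -- populates the cache

select status from v$result_cache_objects
 where name like '%F_CACHED%';  -- Published

select * from t for update;    -- locks rows, changes no data
commit;

select status from v$result_cache_objects
 where name like '%F_CACHED%';  -- Invalid
```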

Explanation
It is clear that this bug is platform-dependent and that the issue lies in the specifics of memory handling. The answer lies in the fact that cl3 and cl2 have unsynchronized lengths: Oracle “forgets” to update the lengths of all the remaining variables that point to this “clob”, and since every modification of cl2/cl3 in fact changes the same underlying data, the “surplus” part ends up overwritten.

This problem is actually quite old, but only now, thanks to the “Oracle Core” book by Jonathan Lewis, has it become possible to find out what is really going on here and to deal with it.

Here is a quote from the chapter “Latches for Locks”:

If the enqueue resource is in place already, then pick a row from the relevant enqueue structure (x$ksqeq, et al.), but to do this you have to get the associated enqueue latch to stop other people from picking the same enqueue row at the same time. The latch you need to acquire depends on the specific type of enqueue you are using; for example, if you want a row from x$ksqeq you need to get the enqueue latch but for a row from x$ktadm you need to get the dml allocation latch. Drop this latch as soon as you have made the enqueue row safe.

And this fixed table itself is part of “v$lock”, as I had repeatedly seen in execution plans involving it:

Now that we know the root of the problem, what’s left is to find out whether we can avoid unnecessary visits to X$KTADM, which seems possible considering that it’s part of a “union all”.
So we had to “parse” this v$lock to get the full text of its query. To do this I began by getting the query text from a 10053 trace:
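The 10053 part can be sketched like this (as an aside, v$fixed_view_definition also gives the view text directly, though truncated at 4000 characters):

```sql
-- dump the optimizer trace, which contains the fully expanded
-- text of the fixed view behind gv$lock
alter session set events '10053 trace name context forever, level 1';
explain plan for select * from gv$lock;
alter session set events '10053 trace name context off';

-- the (possibly truncated) definition is also visible here:
select view_definition
  from v$fixed_view_definition
 where view_name = 'GV$LOCK';
```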

By the way, as you can see, gv$lock varies between versions, which has to be taken into account (I haven’t tried it on 11.2.0.2, so I will update this later):
in 11.2.0.3 the predicate “l.raddr=r.addr” was changed to concat(USERENV(‘Instance’),l.raddr)=concat(r.inst_id,r.addr).
In my case only particular locks were queried from v$lock, the most important of which were “user locks”, i.e. of type ‘UL’. That’s why, after parsing the code, I had to find out which locks are returned by each particular “union all” branch. For this purpose I created a modified GV$LOCK:

with t as (select distinct sq, type from xt_gv$lock l)
select sq
     , listagg(t.type, ',') within group (order by t.type) as lock_types
from t
group by sq

Here “sq” is the number of the “union all” block.
Alternatively, we can just look up the block number by filtering on lock type. So, for example, ‘UL’ locks are now in the first block, and if we need to select them we can simply query this new view with an added “sq=1” predicate, to avoid torturing the other blocks.

select *
from xt_gv$lock l
where l.type='UL'
and l.sq=1 -- ul in first block only

During concurrent load testing, my modified query with “user locks” not only completely solved the latch problem, but also increased query processing speed by a factor of 200 and lowered CPU usage.

It will automatically initialize with the date, user ID, and process ID if no explicit initialization is performed.

This phrase does not answer the question of which “process ID” will be used for initialization in the case of parallel execution. That’s why I decided to give a vivid example showing that the “dbms_random” generator is independent of the slave’s “process ID”, i.e. that it generates identical values in parallel:
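A sketch of such a check (the table name and hint are assumptions; the idea is simply to compare the values produced by different PX slave sessions):

```sql
-- each row shows which session produced it and the dbms_random value;
-- if slaves were seeded from their own process IDs, different slave
-- sessions would show different value sequences
select /*+ parallel(t 4) */
       sys_context('userenv', 'sid') as slave_sid,
       dbms_random.value             as rnd
  from some_big_table t
 where rownum <= 16;
```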