Posts by Alex Fatkulin

This will be the final post in my series on Result Caches. In my previous article, I had almost everything in place. Almost — four CPUs (cores) were still not enough to saturate the single latch. As you’ve probably already guessed, today we are going with an eight-way test. Please note that today’s numbers are different since I’m using an entirely different hardware platform. While the four-way tests were done on a 2.4GHz Core 2 Quad box, today’s eight-way tests were done using four dual-core Itanium 2 CPUs running at 1.1GHz. Let’s take a look at the results…

In my previous blog entry, I explained why I would expect Result Cache not to scale well. Unfortunately, at the time that blog entry was written, I had no access to hardware with more than two cores. That left me in an everything-but-the-proof state. Since then, I got a chance to re-run my test cases on a quad-core CPU, moving one step forward. Here is what I got:
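Contention of the kind described above shows up in the latch statistics. As a rough, hedged illustration (the exact latch name varies between versions, so a LIKE match is used here), one might check:

```sql
-- Sketch: inspect latch activity for the Result Cache latch.
-- The name differs across releases ("Result Cache: Latch" vs.
-- "Result Cache: RC Latch"), hence the wildcard.
SELECT name, gets, misses, sleeps
  FROM v$latch
 WHERE name LIKE 'Result Cache%';
```

A rising sleeps count relative to gets under concurrent load is the classic sign that sessions are fighting over a single latch.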

I do realize that for most of you, there may be nothing new about the dbms_sys_sql package; knowledge of it has been floating around for quite a while. I myself discovered this package a couple of years ago while playing around with HTMLDB’s (now APEX’s) internals. I’m posting this as a response to a vox populi demand: I often see people who could benefit from dbms_sys_sql, if they only knew about it.
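For readers who have never seen it: dbms_sys_sql largely mirrors the dbms_sql interface, with the notable addition of parsing a statement as another user. The following is only a hedged sketch, assuming SYS-level access (the package is owned by SYS and not granted out by default) and that parse_as_user takes the dbms_sql.parse arguments plus a userid; the username 'SCOTT' and the query are placeholders:

```sql
-- Sketch (assumptions noted above): run a statement as another user.
DECLARE
  l_cursor  INTEGER;
  l_userid  NUMBER;
  l_rows    INTEGER;
BEGIN
  -- Look up the numeric user id of the target schema (placeholder name).
  SELECT user_id INTO l_userid FROM dba_users WHERE username = 'SCOTT';

  l_cursor := sys.dbms_sys_sql.open_cursor;
  sys.dbms_sys_sql.parse_as_user(
    c             => l_cursor,
    statement     => 'select * from emp',
    language_flag => dbms_sql.native,
    userid        => l_userid
  );
  l_rows := sys.dbms_sys_sql.execute(l_cursor);
  sys.dbms_sys_sql.close_cursor(l_cursor);
END;
/
```

This is essentially how APEX executes application SQL on behalf of workspace schemas without connecting as them.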

Now that 11g is out, we can all start using the new release — at least on Linux x86 — so naturally, I was curious to see whether I had just been throwing sparks without having a fire. Query Result Cache was the first new feature I put to the test, only to discover that you can spend more than two minutes waiting on… guess what?
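The test itself is trivial to set up: in 11g a query can be nominated for caching with the RESULT_CACHE hint. A minimal sketch (table t is a placeholder, not the post’s actual test case):

```sql
-- Sketch: flag a query's result set for the Result Cache.
SELECT /*+ RESULT_CACHE */ COUNT(*)
  FROM t;
```

Subsequent executions of the same statement can then be served from the cache instead of being re-executed, which is precisely what puts the single protecting latch under pressure.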

Recently, I was asked to take a look at the query-processing part of a client’s application which had started to perform slowly after the application had a maintenance downtime. The idea behind the processing was quite simple. The trick was to add a (redundant) CASE PROCESSED WHEN 0 THEN MOD(SUBS_ID, 7) END expression to the ORDER BY clause. In this post I show you how I came to this solution.
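As a sketch of what that ORDER BY looks like (the column names SUBS_ID and PROCESSED come from the excerpt above; the table name is a placeholder I’ve made up):

```sql
-- Sketch: the (redundant) CASE expression in the ORDER BY,
-- as described in the post excerpt. Evaluates to MOD(SUBS_ID, 7)
-- for unprocessed rows and to NULL for everything else.
SELECT *
  FROM subscriptions   -- placeholder table name
 ORDER BY CASE PROCESSED WHEN 0 THEN MOD(SUBS_ID, 7) END;
```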

In the following SQL*Plus output, something is goofed up. You should not get “no rows selected” from this SQL. When you see something like that for the first time, you can generally draw one of four conclusions: someone edited the output, a SQL*Plus bug, an Oracle bug, or a combination of the above. Since it was me who saw this, number one was an easy but irrelevant answer. What about a SQL*Plus bug? OK, run this using some other ad hoc tool. Same result. Time to take a more precise look at the problem. What is trans?

Before you proceed with reading this post, I strongly encourage you to read Tom Kyte’s trilogy on write consistency, since I’ll give only a brief introduction to the subject. The way Oracle ensures UPDATE write consistency is through a mechanism called a restart. Let’s take a look at an example before we proceed with the main topic of this blog post. Will there be any difference if we substitute the following MERGE for the last UPDATE?
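To make the substitution concrete, here is a hedged sketch of the kind of rewrite in question (table t, column n, and the predicate are placeholders, not the post’s actual test case):

```sql
-- Sketch: an UPDATE and a semantically similar MERGE.
-- Whether the MERGE is subject to the same restart mechanism
-- is exactly the question the post explores.
UPDATE t SET n = n + 1 WHERE n > 0;

MERGE INTO t
USING (SELECT rowid rid FROM t WHERE n > 0) s
   ON (t.rowid = s.rid)
 WHEN MATCHED THEN UPDATE SET t.n = t.n + 1;
```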

Let’s take the following question, for example. Is there any difference between using where column between n and m and where column >= n and column <= m? Looks like a simple one, eh? They are the same from a semantic point of view. And SQL is a declarative language; in other words, you would expect the same execution plan from two semantically identical statements, wouldn’t you? There is at least one example known to me where the two statements produce different execution plans. You never know until you test it. We start by creating a simple list-partitioned table with a local index:
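A hedged sketch of such a setup and of the two predicate forms (all names here are placeholders, not the post’s actual test case):

```sql
-- Sketch: a simple list-partitioned table with a local index.
CREATE TABLE t (
  part_key  NUMBER,
  n         NUMBER
)
PARTITION BY LIST (part_key) (
  PARTITION p1 VALUES (1),
  PARTITION p2 VALUES (2)
);

CREATE INDEX t_n_idx ON t (n) LOCAL;

-- The two semantically equivalent predicates under comparison:
SELECT * FROM t WHERE n BETWEEN :lo AND :hi;
SELECT * FROM t WHERE n >= :lo AND n <= :hi;
```

Comparing the execution plans of the last two statements is the experiment the post then walks through.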