
RDBMS Insight

Earlier this month, I conducted a totally unscientific survey on Twitter, asking where people get their Oracle news. Twitter and the NoCOUG Journal were two popular sources, along with a wide range of blogs. Here are some of the blogs that the Oracle DBA & Dev superstars on my Twitter feed like to read:

In my last post, I loaded US SSA names data into my dev instance to play with. In this post, I’ll play around with it a bit and take a look at the name “Brittany” and all its variant spellings.
I found nearly 100 different spellings of “Brittany” in the US SSA data thanks to a handy regexp:

SELECT name nm, SUM(freq)
  FROM names
 WHERE regexp_like(UPPER(name), '^BR(I|E|O|U|Y)[T]+[AEIOUY]*N[AEIOUY]+$')
   AND sex = 'F'
 GROUP BY name
 ORDER BY SUM(freq) DESC;

The regexp isn’t perfect. It returns a few uncommon names which aren’t pronounced “Brittany”: “Brittiana”, “Brittiani”, “Britane”, “Brittina”, “Britanya”, “Brittine” – and one I’m not sure about, “Brittnae”. But on the other hand, it did let me discover that 7 “Britttany”s applied for SSNs in 1990. Yes, that’s “Britttany” with 3 “T”s.

Fortunately, all the “non-Brittanys” the regexp returns are quite uncommon and not even in the top 20. So the regexp will do for a graph of the top spellings. Let’s get the data by year and look at the percentage of girls in each year named Brittany/Brittney/Britney/Brittani:
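The post's own per-year query isn't reproduced here, but a minimal sketch might look like the following. It assumes the names table from the loading post carries a year column alongside name, sex, and freq:

```sql
-- Sketch (not the post's actual query): each spelling's share of all
-- girls' names per year, as a percentage. Assumes columns
-- year, name, sex, freq.
SELECT n.year, n.name,
       ROUND(100 * n.freq / t.total, 3) AS pct_of_girls
FROM   names n
JOIN  (SELECT year, SUM(freq) AS total
       FROM   names
       WHERE  sex = 'F'
       GROUP  BY year) t
  ON   t.year = n.year
WHERE  n.sex = 'F'
  AND  n.name IN ('Brittany', 'Brittney', 'Britney', 'Brittani')
ORDER  BY n.year, pct_of_girls DESC;
```

The inline view totals all girls' names per year, so each spelling's freq can be expressed as a share of all girls born that year rather than a share of the four spellings.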

I graphed this in SQL Developer.
From the graph it’s clear that “Brittany” is by far the most popular spelling, followed by “Brittney”. The sum of all Brittany-spellings peaked in 1989, but “Britney” has a sharp peak in 2000 – the year that singer Britney Spears released Oops I Did It Again, “one of the best-selling albums of all time” per Wikipedia.

This makes Brittany, however you spell it, a very early-90s-baby kind of name. “Brittany” was the #3 girls’ name in 1989, behind Jessica and Ashley, and was not nearly as popular in decades before or since. In subsequent posts I’ll look some more at names we can identify with specific decades.

One of the amazing things about being a DBA/developer in 2016 is the sheer amount of freely available, downloadable data to play with. One fun publicly available data set is the US Social Security Administration names data. It contains all names for which SSNs were issued in each year, with the number of occurrences (although names with fewer than 5 occurrences are excluded to protect individual privacy).

What’s so fun about this dataset?

* It’s already normalized

* It updates only once a year, and then only by adding another year’s worth of data, so it’s easy to keep current

* Almost everyone can relate to this dataset personally – almost everyone’s name is in there!

* At about 1.8 million rows, it’s not particularly large, but it’s large enough to be interesting to play with.

The one slight annoyance is that the data is in over 100 files, one per year: too many to load one-by-one manually. So here’s a blog post on loading it into your Oracle database, with scripts.
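The post's scripts aren't reproduced here, but one hedged sketch of the approach: assuming you've first concatenated the per-year files into a single `all_names.csv` with the year prepended to each row (for example with a quick shell loop over `yob*.txt`), an external table turns the load into a single CTAS. Directory path and names below are assumptions:

```sql
-- Sketch: expose the combined CSV as an external table, then copy it
-- into a regular heap table. The directory location and file name are
-- illustrative, not the post's actual scripts.
CREATE OR REPLACE DIRECTORY names_dir AS '/tmp/names';

CREATE TABLE names_ext (
  year NUMBER(4),
  name VARCHAR2(30),
  sex  CHAR(1),
  freq NUMBER
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY names_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('all_names.csv')
);

CREATE TABLE names AS SELECT * FROM names_ext;
```

An external table sidesteps writing 100+ SQL*Loader control files: the year lives in the data rather than the file name, so one definition covers every year.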

You can see that Emma, my grandmother’s name, is having a bit of a comeback but is nowhere near the powerhouse it was in the 1880s, when 2% of all girls were named Emma. (For the record, my grandmother was not born in the 1880s!)

Recently I ran across this post on how to do subgroup medians in Excel 2010. First you need to create a pivot table, then “do some copying and pasting and use a formula to make it happen”. In SQL you can do this with one command.

Suppose that you have the same table as the Excel article, something like this:
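As one sketch of that single command (the table and column names here are assumptions, standing in for the Excel article's data):

```sql
-- Sketch: per-group medians in one statement, using Oracle's MEDIAN
-- aggregate. Assumes a table scores(subgroup, score).
SELECT subgroup,
       MEDIAN(score) AS median_score
FROM   scores
GROUP  BY subgroup
ORDER  BY subgroup;
```

No pivot table, no copying and pasting: MEDIAN is an ordinary aggregate, so GROUP BY does all the subgrouping work.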

Do you have a SQL*Plus user who really needs an April Fool’s joke played on them? With a little editing to their glogin.sql, every SQL*Plus session will exit with what appears to be a pseudo-random TNS error.
(Note: this assumes a *nix environment with sed, grep, and awk installed and oerr working properly.)
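The post's script builds the message with oerr/sed/awk; as a simplified stand-in for the idea, here is a pure-SQL*Plus sketch with a hard-coded error text and only the error number randomized:

```sql
-- Hypothetical glogin.sql prank sketch (NOT the post's oerr-based
-- script): print a fake TNS-style error with a random number, then
-- exit the session.
SET TERMOUT OFF
COLUMN e NEW_VALUE fake_err
SELECT 'ERROR: ORA-12'
       || TO_CHAR(TRUNC(DBMS_RANDOM.VALUE(500, 600)), 'FM000')
       || ': TNS:listener does not currently know of service requested' e
FROM dual;
SET TERMOUT ON
PROMPT &fake_err
EXIT
```

The COLUMN ... NEW_VALUE trick stashes the query result in a substitution variable while TERMOUT is off, so the victim sees only the PROMPT line before the session exits.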

I made Brian’s acquaintance through the MOS RAC Support forum, where Brian stood out as a frequent poster who consistently gave well-thought-out, correct and informative responses. So I had high expectations when I sat down with his book. And I haven’t been disappointed. This book is a terrific resource for single-instance DBAs looking to come up to speed on RAC. It’ll also be useful to more experienced RAC DBAs who want to deepen their knowledge or who just have a thorny performance problem to solve.

Many RAC books start out with an overview of RAC-specific physical architecture: the interconnect and the shared storage. Not this one. Brian leaps straight into what I consider the “hard” stuff: chapter 2 covers Cache Fusion and understanding RAC-specific wait events. I’ve spoken with many RAC DBAs who’d have a hard time telling me the difference between “gc cr block 2-way” and “gc current grant 3-way”. You really need to understand Oracle’s implementation of Cache Fusion to understand many of the RAC wait events, and Chapter 2 does a good job of explaining, using session tracing to step you through the waits. It might seem odd to start out with detailed explanations of wait events that many RAC DBAs will never see in the Top 10. But a good understanding of Cache Fusion and the related wait events is really necessary to understand RAC-specific slowdowns. Subsequent chapters depend implicitly on this understanding: you can’t really understand interconnect tuning, for instance, unless you understand how the interconnect is used by Cache Fusion.

The book covers a full toolkit of testing utilities and tools as needed: Orion is introduced in the chapter on storage, and then a full chapter is devoted to the RAC Support Tools, another to AWR/ADDM/ASH, and another to benchmark utilities. There are also dozens of SQL scripts.

Another chapter to highlight is Chapter 14, a two-page summary at the end of the book that lists what Brian considers the central points. This is a mix of broad principles and RAC-specific “gotchas” that every RAC DBA should be aware of. I’d say that if you can read through Chapter 14 and say “I knew that” to each point, then you’ve got a good grasp of the essentials of RAC tuning.

Like others I’ve read in Burleson’s Oracle In-Focus series, this book would’ve benefited from a stronger copy editor. I was chagrined to see typos right on the back cover. But that’s a small quibble that doesn’t detract from an excellent book. If you’re a RAC DBA, this book deserves a place on your bookshelf.

Earlier this week I got tangled up doing a Roman Numeral conversion in my head. So of course my second thought, right after “Doh!”, was “I bet I can write a SQL statement to do this for me next time.”
The algorithm to convert roman numerals to decimal numbers is straightforward.

For each character, starting from the RIGHT (lowest-value Roman numeral):

* Convert the character into the value it represents

* If the character’s value is greater than or equal to the previous one, add the value to the running total

* If the character’s value is less than the previous one, subtract the value from the running total

This is a lovely place to use Recursive With in SQL. We’ll use a CTE, aka recursive subquery, to chomp the string one character at a time from the right, keeping a running total as we go; first decode the character, then add or subtract as appropriate to the running total until we’re out of characters.

Now finally we can add or subtract ThisDec from the running total using the rule:

For each character, starting from the right:

* If the character’s value is >= the previous character’s value, add it to the running total

* If the character’s value is < the previous character’s value, subtract it from the running total

To implement this, I added another column to the recursive member, LastDec, which just holds the previous value of the ThisDec column. I also could have used the LAG analytic function here, but since we’re using recursive subquery it’s trivially easy to just populate the LastDec column with RomToDec.ThisDec .
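The steps above can be sketched as a single recursive query. This is not the post's own code, just one way to write it, with the Roman numeral hard-coded as 'MCMXLIX' (1949) for illustration; the repeated DECODE is the character-to-value lookup:

```sql
-- Sketch of the approach described above: chomp 'MCMXLIX' from the
-- right, decoding one character per recursion and keeping a running
-- total. LastDec carries the previous character's value.
WITH RomToDec (pos, ThisDec, LastDec, total) AS (
  -- anchor: the rightmost character is always added
  SELECT LENGTH('MCMXLIX'),
         DECODE(SUBSTR('MCMXLIX', -1, 1),
                'I',1, 'V',5, 'X',10, 'L',50, 'C',100, 'D',500, 'M',1000),
         0,
         DECODE(SUBSTR('MCMXLIX', -1, 1),
                'I',1, 'V',5, 'X',10, 'L',50, 'C',100, 'D',500, 'M',1000)
  FROM dual
  UNION ALL
  -- recursive member: move one character left; add its value if it is
  -- >= the value to its right (the prior ThisDec), else subtract it
  SELECT pos - 1,
         DECODE(SUBSTR('MCMXLIX', pos - 1, 1),
                'I',1, 'V',5, 'X',10, 'L',50, 'C',100, 'D',500, 'M',1000),
         ThisDec,
         total + CASE
                   WHEN DECODE(SUBSTR('MCMXLIX', pos - 1, 1),
                        'I',1, 'V',5, 'X',10, 'L',50, 'C',100, 'D',500, 'M',1000)
                        >= ThisDec
                   THEN DECODE(SUBSTR('MCMXLIX', pos - 1, 1),
                        'I',1, 'V',5, 'X',10, 'L',50, 'C',100, 'D',500, 'M',1000)
                   ELSE -DECODE(SUBSTR('MCMXLIX', pos - 1, 1),
                        'I',1, 'V',5, 'X',10, 'L',50, 'C',100, 'D',500, 'M',1000)
                 END
  FROM RomToDec
  WHERE pos > 1
)
SELECT total FROM RomToDec WHERE pos = 1;
```

The row with pos = 1 has consumed every character, so its running total is the decimal value.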

Tired of navigating to the SQL documentation every time you need to look up syntax? I created a search plugin so that you can search the SQL documentation directly from your browser’s search bar:

If you’re going to be doing a lot of looking-up, you can make this your default search engine. Click on “Change search settings” in the search bar dropdown, or go to Preferences > Search and select it:

I also created a search plugin for the 12c database documentation as a whole.

To install either or both plugins in Firefox, go to this page and click on the “Install Search Plugin” button.

Tested in Firefox. OpenSearch plugins reportedly work in IE too. Use IE? Try it and let me know if it works for you.

UPDATE: I added these plugins to mycroftproject.com as well. Thanks, Uwe, for pointing me to it! Also, check out Uwe’s OERR search plugin in the comments below.