
Oracle 11g: Reading alert log via SQL

Oracle has made some major improvements to the diagnosability infrastructure in version 11g. Here’s one little detail.

Before Oracle 11g it was possible to access the alert log via SQL using an external table or a pipelined function which in turn used utl_file.
After reading the text you then had to parse it yourself to extract the information you needed.

Starting from 11g, Oracle does all this work for you. There is a fixed table, X$DBGALERTEXT: when you query it, Oracle reads log.xml from the alert directory (which contains all the data that alert.log does), parses it and returns the details back as rows:
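For example, a minimal sketch (you must be connected as SYSDBA; I’m assuming the ORIGINATING_TIMESTAMP and MESSAGE_TEXT columns here — DESCRIBE the table to see the full column list):

```sql
-- Show the last hour of alert log entries as rows (run as SYSDBA).
SELECT originating_timestamp, message_text
FROM   x$dbgalertext
WHERE  originating_timestamp > SYSTIMESTAMP - INTERVAL '1' HOUR
ORDER  BY originating_timestamp;
```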

If you’re building some custom alert log monitoring, then starting from 11g these X$ tables can help you. On the other hand, I would prefer to monitor logfiles using plain and simple scripts, as accessing this X$ table requires the instance to be up and operational. The better use case I see is when you don’t have access to the OS filesystem but nevertheless want to see the alert log contents — in 11g this can be the easiest option…
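A plain-script check of the kind I mean could look something like this (the alert log path below is an assumption — adjust it for your diagnostic_dest and instance name):

```shell
#!/bin/sh
# Minimal sketch of plain-script alert log monitoring; the default path is
# an assumption -- adjust it for your diagnostic_dest and instance name.
ALERT_LOG=${ALERT_LOG:-/u01/app/oracle/diag/rdbms/orcl/orcl/trace/alert_orcl.log}

# Print any ORA- errors found in the last 1000 lines of the given log file.
check_alert_log() {
    tail -n 1000 "$1" 2>/dev/null | grep 'ORA-'
}

check_alert_log "$ALERT_LOG" || true   # no output means no recent ORA- errors
```

A real monitoring script would also remember how far into the file it last read, so the same error isn’t reported twice.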

Unfortunately, this is an X$ table, which means it is only readable when connected as SYSDBA. It’s a nice thing, but it would be much more useful as a documented V$ view. Hopefully, Oracle Corp. will give us that, too.

– We have a requirement for one of our tables in our production DB: the table needs to be renamed and a column dropped. It is a highly active table, with lots of sessions accessing it continuously. It’s not possible to take any downtime for this activity. Is it possible to make these modifications to the table while all the transactions are happening?

Every feature has issues if used wrongly or when something important regarding its use has been overlooked. Some features have issues even when you do everything according to the specifications.

If you can afford hours/days of downtime, you don’t need to test *that much* (just like in dev databases, just change the damn thing and if it doesn’t work, roll it back somehow). If you can’t afford the downtime, like the case for the person asking me the question, then test the hell out of your change to reduce the risk of problems and downtime.

There are some features, like online table shrink, which work well in a training class on read-only tables with 10 rows, but may never complete or may end up crashing when run on large and very busy tables. I wouldn’t take DBMS_REDEFINITION lightly either and would *not* run it in production without testing it on a test database with equal data volumes and transaction activity (that’s harder to reproduce, but again, if the system is SO critical that it can’t be down, then you should treat it as such when planning changes too).
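For reference, the rough shape of an online redefinition run to drop a column — verify the parameter names against the docs for your version; the schema, table and interim table names below are made up:

```sql
-- Hedged sketch of dropping a column online with DBMS_REDEFINITION
-- (SCOTT, ORDERS and the interim table ORDERS_INT are made-up names;
-- ORDERS_INT is pre-created without the column being dropped).
BEGIN
  DBMS_REDEFINITION.CAN_REDEF_TABLE('SCOTT', 'ORDERS');
  DBMS_REDEFINITION.START_REDEF_TABLE(
    uname       => 'SCOTT',
    orig_table  => 'ORDERS',
    int_table   => 'ORDERS_INT',
    col_mapping => 'ORDER_ID ORDER_ID, ORDER_DATE ORDER_DATE'); -- dropped column simply omitted
  -- indexes, constraints, triggers etc. are carried over with COPY_TABLE_DEPENDENTS
  DBMS_REDEFINITION.FINISH_REDEF_TABLE('SCOTT', 'ORDERS', 'ORDERS_INT');
END;
/
```

The rename part of the question is a separate, near-instant ALTER TABLE … RENAME TO … afterwards, though even that needs a brief DDL lock on the table.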

Yep Jared, you’ve got a valid point. Statspack also creates some X$ aliases (and so do some other monitoring tools). I like your idea of having a PL/SQL package instead…

I don’t recommend just creating X$ aliases for all tables in production (I will also add a warning to my script which allows you to do so) — just for the sake of not “disturbing” Oracle if it’s not needed…
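For context, an “X$ alias” here typically means a SYS-owned view over the fixed table plus a synonym and a grant, roughly like this (the view and synonym names are illustrative):

```sql
-- Illustrative only -- see the warning above about doing this wholesale
-- in production. Run as SYS:
CREATE OR REPLACE VIEW x_dbgalertext AS SELECT * FROM x$dbgalertext;
CREATE PUBLIC SYNONYM x_dbgalertext FOR sys.x_dbgalertext;
GRANT SELECT ON x_dbgalertext TO dba;
```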

I had a weird experience using X$DBGDIREXT on RAC. The said fixed table works on some of the nodes but not on all of them. I checked the directory structure as well as the Unix permissions on all RAC nodes and they are the same. Does anyone know where to start looking?