Archive for October, 2009

“O beware, my lord, of jealousy! It is the green-eyed monster which doth mock the meat it feeds on” – so said Iago in Othello (well, according to Othello anyway – I’m not well read enough to quote Shakespeare!).

Well, another Oracle Open World has wrapped up without me attending it, and I’m brimming with jealousy at everyone who managed to get there. I guess I’d better beware in case I start mocking meat or something. It won’t last long though – as soon as everyone else stops blogging about it, I’ll forget for another year.

So whilst the rest of the Oracle world was swanning around conference rooms listening to the likes of Jonathan Lewis, Tom Kyte and every other clever so-and-so, what have I been up to?

For the last couple of days I’ve been restoring a table that was accidentally dropped from a 2TB 9i database. Without going into details, we ended up having to restore the whole database onto another machine as a standby and roll it forward bit by bit, taking exports along the way. We weren’t able to retrieve the exact time / SCN of the DROP TABLE command from LogMiner due to ORA-01374 (Cause: LogMiner does not mine redo records generated with LOG_PARALLELISM set to a value greater than 1).
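For the record, this is roughly the sort of LogMiner session we were attempting in order to pin down the drop – the log file names and the DROP TABLE filter are purely illustrative, not our actual environment. On this database it was a non-starter, failing with ORA-01374 as soon as it hit the parallel-generated redo:

```sql
-- Register the archived logs covering the suspected window (paths are made up)
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '/arch/prod_1234.arc', Options => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LogFileName => '/arch/prod_1235.arc', Options => DBMS_LOGMNR.ADDFILE);

-- Start mining using the online data dictionary
EXECUTE DBMS_LOGMNR.START_LOGMNR(Options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);

-- Hunt for the DDL and note its SCN / timestamp for the point-in-time recovery
SELECT scn, timestamp, sql_redo
FROM   v$logmnr_contents
WHERE  operation = 'DDL'
AND    sql_redo LIKE 'DROP TABLE%';

EXECUTE DBMS_LOGMNR.END_LOGMNR;
```

With the exact SCN in hand you can stop the standby roll-forward just before the drop; without it, you’re reduced to rolling forward in small increments and exporting as you go, which is exactly where we ended up.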

Obviously a problem.

I tried to create a file-based LogMiner dictionary from prod, take it to a 10g instance along with the logs in question and use 10g LogMiner, but this wasn’t possible because UTL_FILE_DIR wasn’t set, so I couldn’t even write the dictionary file out. I wasn’t allowed to bounce the database to set it, so I then thought I’d try building the dictionary from the standby – but unfortunately that isn’t allowed in read-only mode.
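For anyone who hasn’t hit this before, extracting the dictionary to a flat file is a one-liner – but it writes via UTL_FILE, so the target directory must already be in UTL_FILE_DIR, and that parameter can only be changed with a restart (the filename and location below are just examples):

```sql
-- Fails unless '/tmp/logmnr' (or wherever) is listed in UTL_FILE_DIR,
-- and UTL_FILE_DIR cannot be changed without bouncing the instance
EXECUTE DBMS_LOGMNR_D.BUILD(dictionary_filename => 'dictionary.ora',
                            dictionary_location => '/tmp/logmnr');
```

The dictionary file can then be shipped alongside the archived logs and fed to LogMiner on another instance – exactly the route that was closed off to us here.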

Of course, all this was going on in front of a rather large audience. The app is on the business’s Critical Application list, and this table can be updated several times per second. When the table doesn’t even exist, an entire page of the app is blank. Thankfully, the dev and support teams did a great job of analysing the impact, quickly building a new empty table and then populating it with ‘dummy’ data.

The hardest part of the whole process was the logistics of it. 2TB is a lot of space to just ‘find’ at the drop of a hat. We eventually had to clear out a number of other UAT databases to make room for it.

Something that jumped out at me from this, was how good everyone is around me. Not once did I hear a raised voice from management. There was only ever calm, concise discussion on the best way forward. I never felt under pressure to rush something without being able to think it through properly. This is in stark contrast to other places I’ve worked, but I think this comes from trust in each other’s ability. I’ve now been ‘aligned’ to this application team for about 2 years. We sit next to each other, work closely on a daily basis and the benefits are really seen during times of stress, such as this.

Another positive gained from this experience was the value of having other DBAs around. It’s great to be able to bounce ideas off each other and double-check we’re not doing something incredibly dumb – which is always a risk when the pressure mounts.