Sequence Smackdown

In a recent engagement, I came across a "smelly" construct (a database smell) that looked like this:

select max(error_id) + 1 into new_error_id from error_log;

"Why aren’t they using a sequence?", I wondered.

The reason, of course, is that the PL/SQL developers need to request the creation of each and every object from the production support DBAs, and since such requests require review by the central data architects for correctness before being approved for creation in development, the process can take 4-5 days. As a result, they took this "shortcut". (Reason #392 of why I don’t think production support DBAs should have any place in the development process, but that’s another story).
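To see why this construct is a smell in the first place: here's a minimal illustration of the race it invites, with hypothetical values, when two sessions insert at the same time.

```sql
-- Assume error_log currently has max(error_id) = 100.

-- Session A:
select max(error_id) + 1 into new_error_id from error_log;  -- reads 101

-- Session B (before Session A commits its insert):
select max(error_id) + 1 into new_error_id from error_log;  -- also reads 101

-- Both sessions now try to insert error_id = 101. The second one fails
-- with ORA-00001 (unique constraint violated) -- or, if there is no
-- unique constraint, it silently creates a duplicate key.
```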

The good news is that they recognized this was bad after I pointed it out, and they went ahead and requested the sequence.

One week later, we get the sequence, correct the code and promote it to the integration environment.

Where we promptly get uniqueness violations when attempting to insert rows into the table because the sequence number was less than the max(error_id) already in the table.

"No problem!", I said  I didn’t want to re-create the sequence with a larger "start with" (due to the turnaround time), so I take a lazy shortcut:

declare
  i number;
  j number;
begin
  select error_id_seq.nextval into i from dual;
  select max(error_id) into j from error_log;
  while i <= j loop
    select error_id_seq.nextval into i from dual;
  end loop;
end;
/

Yes  I know this is all kinds of horrible, but I was in a hurry and didn’t think.

And the worst part is that it didn’t even work.

They still got uniqueness violations, and came to me later saying that there were problems with the sequence: when they selected nextval from the sequence in TOAD they got one value (1000), and when they selected nextval from the sequence via SQL Developer, they got another value (300).

What did I forget / do wrong? What should I have done?

I eventually figured it out and "fixed" it.

There’s a coda to this: after I smacked the palm of my hand to my forehead and then explained the problem to the PL/SQL developers, I thought they understood it. But later in the day they came to me and said they were having the same problem with a different sequence (getting different, and much smaller, values when selecting nextval from different tools).

4-5 days … luxury! I worked at a place (not far from where you are in Vienna, as it happens) where it could take up to two weeks for a simple report change to be propagated to production. Good times …

Anyway, I have to say that I do use this max(value)+1 construction myself (actually I use max(value)+rownum), partly to reduce the complexity that sequences can add to deployments in new environments. However, I only do it in situations where the code guarantees that multiple sessions are not inserting new rows at the same time.
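The max(value)+rownum variant might look something like this (a sketch against hypothetical tables, and only safe when a single session is inserting):

```sql
-- Assign consecutive ids above the current max in one statement;
-- rownum numbers the selected rows 1, 2, 3, ... so each new row
-- gets a distinct id without touching a sequence.
insert into target_table (id, payload)
select (select nvl(max(id), 0) from target_table) + rownum,
       s.payload
  from staging_table s;
```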

So production DBAs shouldn’t have any place in the development process? According to your narrative, (a) there is a process in place where any changes are supposed to be reviewed by data architects, and (b) a DBA will enact those changes when the review is done. Any changes will need to be rolled forward to production eventually, and the DBAs will be responsible for handling the mess left by the developers who don’t want to follow the process and do things right to begin with. A couple of posts ago you were talking about change management, and now it seems like you’re justifying the actions of the developers to further your obvious disdain for production DBAs. Even if there is a development DBA who is responsible for creating objects in a development database, if they follow the process it still needs to be reviewed by the data architects and others. It saves what, about one day at the most?

A day? More like 4-5 days per change. And if there’s a production outage or corruption issue going on in the meantime? Forget it — the production DBAs have development support as their lowest priority — and rightfully so! They need to focus on production, not development.

I’m not justifying the developers’ actions — merely describing their motivation. During my code review I reported their mistake and had them go through the process of correcting it.

I advocate splitting DBA teams into 2 groups: production support (managing production and QA), and development support (managing development and integration). Development support DBAs make ALL decisions about logical and physical designs and are responsible for change management and schema installation guides.

Production DBAs follow these installation guides for QA and production deployments. If the guides are wrong or result in errors, the development DBAs are held accountable.

The development DBAs are held fully responsible and accountable for schema installation errors and performance issues that occur as a result of their choices. That means that when there is a performance problem, the development DBAs get paged.

The production DBAs are held fully responsible for backups, restores, disaster recovery, remote replicas and space management.

I’m not disdaining production DBAs — I’m just saying that it’s not fair or smart to have them perform critical production support tasks and then only secondarily perform development tasks. Any organization trying to rapidly bring new capabilities online will be short-changed by that arrangement.

As for review by data architects, I’m fine with that if they are in the development organization — again, such review should benefit the application teams, otherwise it has no value — so put them in the same organization and hold them accountable for the same goals. Those goals should be the rapid development and deployment of useful, flexible and high-performing applications in support of the business and mission objectives.

Each step in the process should either provide additional value or it should be re-examined or eliminated.

A problem with sequences can be that their cached values are aged out of the SGA (when memory space is needed). The next time nextval is called you don’t get seq+1 but seq+cache_size+1, so you get gaps.
Any idea how to avoid this?
I know you can pin them so they aren’t aged out, but at least in a cluster environment that doesn’t look like a good idea.
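One common way to trade throughput for smaller (or no) cache-loss gaps is to declare the sequence with a modest cache, or with NOCACHE, at the cost of extra contention — especially in RAC. A sketch with hypothetical sequence names:

```sql
-- With NOCACHE there are no cached values to lose, so aging out of the
-- shared pool cannot skip numbers -- but every nextval now updates the
-- data dictionary, which serializes badly under load.
create sequence no_cache_seq nocache;

-- Or keep a small cache to bound the size of any gap at 20.
create sequence bounded_gap_seq cache 20;
```

Note that even NOCACHE doesn’t make a sequence truly gap-free: a transaction that draws a value and then rolls back still discards that value.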

About

Dominic Delmolino
Vienna, VA, United States

I've been using Oracle since 1990 and spent 10 years working there as part of the System Performance Group and 5 years as the Senior Director of Database Engineering at Network Solutions. Currently, I'm Vice President of Systems Architecture at Agilex Technologies.