
We are using version 8.1.6 on NT4. I have created a sequence with CACHE 100, INCREMENT BY 1, NOCYCLE parameters. While inserting records through a procedure it's giving me the error 'ORA-00001: unique constraint violated'. I have checked the following things...
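For reference, here is a minimal sketch of the setup described above (table, column, and sequence names are hypothetical, assuming a single numeric primary key populated from the sequence):

```sql
-- Hypothetical names; a sketch of the scenario described above.
CREATE SEQUENCE my_seq
  START WITH 1
  INCREMENT BY 1
  CACHE 100
  NOCYCLE;

CREATE TABLE my_table (
  id    NUMBER PRIMARY KEY,   -- the unique constraint that ORA-00001 reports
  name  VARCHAR2(100)
);

-- Inside the procedure, fetching the next value on each insert:
INSERT INTO my_table (id, name)
VALUES (my_seq.NEXTVAL, 'example');
```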

I think there is a problem with your cache memory.
Try increasing the value of the SEQUENCE_CACHE_ENTRIES parameter in your init.ora file.
Also, Oracle might skip sequence numbers if you choose to cache a set of sequence numbers.
Let me know if it works...
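If you do want to experiment with the cache setting, it can be changed without dropping the sequence (sequence name hypothetical):

```sql
ALTER SEQUENCE my_seq CACHE 200;   -- or NOCACHE to stop caching entirely

-- Note: cached but unused numbers are lost on instance shutdown or crash.
-- That produces GAPS in the sequence, never duplicates, so the cache alone
-- cannot cause an ORA-00001.
```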

I suppose the procedure is not the problem... What kind of application are you using? Is it Java? You may need to check the Java code, because an Oracle sequence can never produce duplicates. If you are using multiple databases, maybe the application is accessing the other database.

- Why did you need to do a to_number on the column? Shouldn't it already be a number? It is populated via a sequence after all.

- The cache is irrelevant. This only helps avoid contention when multiple sessions are trying to retrieve sequence IDs at the same time.

- Naming the column the same as the sequence is confusing at best. It may also be confusing Oracle. Same with a table named table. Of course, you may have done this for purposes of an example. If so, never mind.

- Do you have a unique index on any other column in the table, such as name, perchance?
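One way to check, assuming the table is called MY_TABLE (a query against the standard data dictionary views):

```sql
-- List primary-key and unique constraints on the table
SELECT constraint_name, constraint_type
FROM   user_constraints
WHERE  table_name = 'MY_TABLE'
AND    constraint_type IN ('P', 'U');

-- List unique indexes on the table
SELECT index_name, uniqueness
FROM   user_indexes
WHERE  table_name = 'MY_TABLE'
AND    uniqueness = 'UNIQUE';
```

If anything other than the sequence-populated column shows up here, that constraint is a likely source of the ORA-00001.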

- Finally, a minor point about how the statement is interpreted. Since you are using dynamic SQL with a hard-coded value, each statement will be different when passed to Oracle. This means that each will have to be parsed and each will become a separate entry in the SGA. It would be better to use binds with the dynamic SQL. Taking this further, however, dynamic SQL is not even necessary here. This would be faster *and* use binds...
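To illustrate the point (names hypothetical): the first form hard-codes the value, the second uses a bind with dynamic SQL, and the third drops dynamic SQL altogether:

```sql
-- 1. Hard-coded value: a new parse and a separate cursor in the SGA every call
EXECUTE IMMEDIATE
  'INSERT INTO my_table (id, name) VALUES (my_seq.NEXTVAL, ''' || p_name || ''')';

-- 2. Dynamic SQL with a bind: one shared cursor, reused across calls
EXECUTE IMMEDIATE
  'INSERT INTO my_table (id, name) VALUES (my_seq.NEXTVAL, :1)'
  USING p_name;

-- 3. Static SQL: no dynamic parsing at all -- faster *and* binds automatically
INSERT INTO my_table (id, name)
VALUES (my_seq.NEXTVAL, p_name);
```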

Of course, I would again assume that you only did this for purposes of an example, but I just want to make sure you understand the issue in case your actual code does the same thing. Taking that thought even further: Commiting after each row is not necessary and entails quite a bit of overhead. Finally, if you were truly doing this in your app, I would suggest investigating the new BULK processing options to speed such an operation even further.
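As a sketch of the bulk approach (collection, table, and sequence names hypothetical, using the FORALL syntax available from 8i):

```sql
DECLARE
  TYPE t_names IS TABLE OF VARCHAR2(100) INDEX BY BINARY_INTEGER;
  l_names t_names;
BEGIN
  -- ... populate l_names from wherever the rows come from ...

  -- One bulk-bound insert instead of a row-by-row loop
  FORALL i IN 1 .. l_names.COUNT
    INSERT INTO my_table (id, name)
    VALUES (my_seq.NEXTVAL, l_names(i));

  COMMIT;  -- one commit for the whole batch instead of per row
END;
```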