Update to Decoding High and Low values February 24, 2010

After what seems an age, I finally got a little time to enhance my col_stats code with the extra information people provided as comments to this original posting on it. I also now handle the translation of dates correctly, and timestamps too.

I had a real struggle with timestamps. I knew I could find the answer in Tom Kyte’s “Expert Oracle Database Architecture”, as one of the comments said so, but I tried to work it out myself as my copy of the book was at my Mother’s {no, she is not learning Oracle databases, I left it there by accident}. As the other elements of a timestamp are held as numerical elements stored as hex (number of days, number of months), I tried to interpret the fractional seconds the same way. I was being too complex; it is just an 8-digit hex number: to_number(value,’XXXXXXXX’).
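To make that concrete, here is a rough sketch in Python (not the SQL script itself) of how a raw TIMESTAMP value from LOW_VALUE/HIGH_VALUE breaks down: the first seven bytes follow the DATE layout, and the trailing four bytes, when present, are the fractional seconds in nanoseconds, i.e. the 8-digit hex number mentioned above. The function name is mine, purely for illustration.

```python
def decode_timestamp_raw(raw_hex):
    """Decode Oracle's internal TIMESTAMP representation from a hex string.

    Bytes 0-6 are laid out like a DATE (century+100, year-in-century+100,
    month, day, then hour, minute and second each stored +1); bytes 7-10,
    when present, hold the fractional seconds in nanoseconds as a
    big-endian integer - the 8-digit hex number.
    """
    b = bytes.fromhex(raw_hex)
    year = (b[0] - 100) * 100 + (b[1] - 100)
    month, day = b[2], b[3]
    hour, minute, second = b[4] - 1, b[5] - 1, b[6] - 1
    nanos = int.from_bytes(b[7:11], "big") if len(b) > 7 else 0
    return (year, month, day, hour, minute, second, nanos)
```

So a raw value of 786E02180101011DCD6500 decodes to midnight on 24-FEB-2010 with half a second (500,000,000 ns) of fraction.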

Anyway, this is the new, improved, cleaner, whiter-than-white script {I’ve increased the width of my blog to better show code like the below, but try clicking on “show source” if you want a plain text version}:
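The heart of the script is translating the 7-byte internal DATE format that Oracle stores in LOW_VALUE and HIGH_VALUE. As a language-neutral sketch of that translation (the function name is my own, for illustration only):

```python
from datetime import datetime

def decode_date_raw(raw_hex):
    """Decode the 7-byte internal DATE format Oracle stores in
    LOW_VALUE/HIGH_VALUE: century+100, year-in-century+100, month,
    day, then hour, minute and second each stored +1."""
    b = bytes.fromhex(raw_hex)
    year = (b[0] - 100) * 100 + (b[1] - 100)
    return datetime(year, b[2], b[3], b[4] - 1, b[5] - 1, b[6] - 1)
```

For example, the raw value 786E0218090913 is 0x78=120 (century 20), 0x6E=110 (year 10), month 2, day 0x18=24, and the time bytes 9, 9, 0x13 give 08:08:18.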

Yes, I know, it is a considerable chunk of code just to get the high and low values for columns, as identified by the last stats gather to be run on the table. {It is important to remember that these high and low values will only be as accurate as the stats gather that was run, and when. An “estimate_percent=> 100” run a minute ago will probably give accurate values; an “estimate_percent=>0.1” run last month will not – which is usually what I am checking for.} I could make the script a lot neater by writing a set of functions to call to get the translation, as was suggested in a comment on the original posting and is in fact something I have done in the past, but one is often not allowed to put new stored PL/SQL code onto production systems, so a script which does it all is, for me, best.
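NUMBER columns are the other fiddly translation the script has to do. As a sketch of the idea, assuming positive values only (negative numbers use a complemented encoding I don't cover here), the first byte is an exponent and the rest are base-100 digits, each stored +1:

```python
def decode_number_raw(raw_hex):
    """Decode Oracle's internal NUMBER format, positive values only.

    Byte 0 is the exponent: a power of 100 offset by 193. Each
    following byte is a base-100 digit stored +1.
    """
    b = bytes.fromhex(raw_hex)
    exponent = b[0] - 193
    value = 0
    for i, digit_byte in enumerate(b[1:]):
        value += (digit_byte - 1) * 100 ** (exponent - i)
    return value
```

So C20218 is exponent 0xC2-193=1 with digits 1 and 23, i.e. 1*100 + 23 = 123.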

I should show some proof of it working, I guess. The below will create a table with a nice spread of column types and three records with a reasonable spread of data:
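{The creation script itself is not reproduced here.} One thing worth noting when checking the results: character columns are the easy case, as the raw LOW_VALUE/HIGH_VALUE is simply the bytes of the string itself. A minimal sketch, assuming single-byte or UTF-8 character data:

```python
def decode_varchar_raw(raw_hex):
    """For VARCHAR2/CHAR columns the raw LOW_VALUE/HIGH_VALUE is just
    the bytes of the string - here decoded assuming UTF-8 data."""
    return bytes.fromhex(raw_hex).decode("utf-8", errors="replace")
```

So a raw value of 414243 is simply ‘ABC’.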

I could go on and look at handling intervals and time zones, but I will leave those to you as an exercise. In other words, I have no real need for those data types right now and can’t justify the time! I hope the above is enough to answer whatever questions you may have about the high and low values of your columns…


Personally, I don’t like the column being called “nullable”; I think of it as Mandatory or not, so I flick the value from N to Y (or Y to N) and label it M, for Mandatory. After all, when you create tables you set columns to “Not Null”, not “Nullable” – i.e. you specifically set the column to be mandatory.

It’s all semantics, so if you prefer to see ‘N’ for nullable, remove the decode.

Great post!!!
I am working on a database sampling technique and use a sample database for which I need a lot of tables as an appendix, with information on size, columns, statistics etc. The Oracle raw format for low and high values would’ve looked like hell in the appendix, but given your help they’ll be neat!

I’m glad the blog helped you 🙂 Sorry it took me a while to approve your comment, I was unplugged from the internet for a week (something I like to do once in a while). Good luck with the database sampling. If you have not done so already, you might want to check out the SAMPLE aspect of SQL; I plan to blog about it “soon” but the manuals are pretty good on the topic.