Sequences are not guaranteed to generate all consecutive values starting with the ‘START WITH’ value.

It’s absolutely true that sequences don’t guarantee gap-free values, but the fact that START WITH no longer specifies the starting value either is a bit rich in my view. Anyway, Martin Widlake ran into this issue and has blogged about it here today. I posted a quick comment and then set out to re-run his scripts with and without deferred_segment_creation=true on my 11.2 instance. My test script is exactly the same as his (copy and paste is wonderful) but includes an initial SHOW PARAMETER line and an additional SET SERVEROUTPUT ON line. My results are not the same.

The difference? This database has been upgraded to 11.2.0.3; however, there doesn’t appear to be a reference to a fix for this behaviour in either the 11.2.0.3 bug fix list or the one for 11.2.0.2. I guess the lesson is the same old one: make sure you run your regression tests on all new releases, including patch sets, and be alert for silent behaviour changes.
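For reference, the shape of the kind of test involved can be sketched as follows. This is only an illustration with my own object names, not Martin’s actual script:

```sql
-- Sketch only: illustrative names, not the original test script.
-- With deferred_segment_creation = true (the 11.2 default), segment
-- creation for a brand new table is postponed until the first insert;
-- the question is what value the sequence hands out around that point.
show parameter deferred_segment_creation

create sequence seq_test start with 1 increment by 1;
create table t_test (id number);

-- First insert into the segment-less table.
insert into t_test values (seq_test.nextval);
commit;

-- If the insert is internally retried when the segment gets created,
-- NEXTVAL may be consumed more than once and the first row holds 2, not 1.
select id from t_test;
```

Whether you see 1 or 2 in the final query is exactly the version-dependent behaviour under discussion.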

As in previous years, the UKOUG allows you to create a personalised agenda for the upcoming conference. To give you a flavour of some of the excellent presentations, and to encourage you to register, I’ve reproduced mine below. As with previous years this is an ideal wishlist; I’ll probably “die” halfway through various days and take a break, or get caught up in the many networking and informal learning opportunities this conference offers.

Sunday

12:15 – 13:15 Tales From The OakTable – Mogens Nørgaard

Mogens is always entertaining and thought provoking, a great way to start the Oak Table Sunday.

The advent of affordable, reliable, enterprise-ready SSD storage, for example FusionIO cards or SSD arrays like Violin, is probably the biggest game changer in years for I/O-intensive workloads such as database operations. I’ll be particularly interested in whether and how the Smart Flash Cache works compared to, say, locating datafiles directly on reliable low-latency flash storage.

12:30 – 13:30 Oracle RAC One-Node: What, Why, & How - Asif Momen

I’ve followed Asif through the OTN forums for some while now, so I’ll be interested to hear what he has to say on this new Oracle feature.

Update — My attention has been drawn to this follow-up. Had I read the follow-up before writing this blog entry I’d have worded it somewhat differently, or not published at all.

There’s a rant against the document-oriented database MongoDB doing the rounds. You can find it on PasteBin. You can also find a somewhat official response on YCombinator. I’ll leave you to assess which is the more reasonable and professional. Having read both the original rant and the response, it seems to me that the root cause is captured in this early paragraph from the rant, emphasis mine.

Databases must be right, or as-right-as-possible, b/c database mistakes are so much more severe than almost every other variation of mistake. Not only does it have the largest impact on uptime, performance, expense, and value (the inherit value of the data), but data has *inertia*. Migrating TBs of data on-the-fly is a massive undertaking compared to changing drcses or fixing the average logic error in your code. Recovering TBs of data while down, limited by what spindles can do for you, is a helpless feeling. Databases are also complex systems that are effectively black boxes to the end developer. By adopting a database system, you place absolute trust in their ability to do the right thing with your data to keep it consistent and available.

In computing in general, a black box is a program whose inner workings the user cannot see (perhaps because it is closed source), or one which has no side effects and whose function need not be examined: a routine suitable for re-use.

It seems likely that the author of the rant had the second definition in mind. It was certainly this that rang warning bells for me. The author seems, up front, to be saying both:

that databases are absolutely critical to the success of application development and migration; and

that databases’ internal workings are not understood by the developers.

If both of these statements are true then all sorts of problems are almost bound to follow. Take complaint 1 and the response from the Mongo guys.

**1. MongoDB issues writes in unsafe ways *by default* in order to win benchmarks** If you don’t issue getLastError(), MongoDB doesn’t wait for any confirmation from the database that the command was processed.

================

The reason for this has absolutely nothing to do with benchmarks, and everything to do with the original API design and what we were trying to do with it. To be fair, the uses of MongoDB have shifted a great deal since then, so perhaps the defaults could change. The philosophy is to give the driver and the user fine grained control over acknowledgement of write completions. Not all writes are created equal, and it makes sense to be able to check on writes in different ways. For example with replica sets, you can do things like “don’t acknowledge this write until it’s on nodes in at least 2 data centers.”

It goes without saying that this behaviour is documented; the only way you’d run into problems (other than blindly relying on defaults) would be if you assumed that the database would do what you wanted without checking its actual behaviour. Similarly, the author rants against several more well-understood characteristics of both the product and the problem set that MongoDB addresses, including at least:

changing the sharding scheme under load;

recovery model implications;

replication operational implications.

Well, I’m afraid it seems to me that if you select, as a core part of your project, a complex, configurable and documented distributed infrastructure component, and then proceed to treat it as a black box, you really are asking for trouble.

Possibly Related Posts:

It’s time to start booking hotel rooms and planning your agenda for the UKOUG conference in Birmingham (UK), running from the 4th to the 7th of December. Yep, all the cool sessions from Oracle OpenWorld, minus an awful lot of the pure marketing. So just to whet your appetite, here are my highlights in advance.

Oak Table Sunday: a whole day of wit, wisdom, argument and education from world-renowned database experts. Probably beer as well.

I’ll be speaking too: an entry-level “things really, really not to get wrong” session, and a kindly scheduled advanced session on statistics, data lifecycle, obscure optimizer issues and, er, getting decent execution plans, at 16:15 on the last afternoon. That last one might end up being a pub discussion if I’ve anticipated numbers correctly (0).

Anyway, especially if your employer is a UKOUG member, you should be there. If you want a flavour of OakTable discussions, arrive on Sunday and book.


One of the major announcements at Oracle OpenWorld last week was the launch of Oracle Enterprise Manager 12c, though I’m going to refer to the product as EM for the rest of this blog. EM is a product that I both love and that completely infuriates me from time to time. It’s worth understanding my reasons for this attitude before we look at the new release. First up, then: why I love it.

Database Performance Pages
The database performance pages, also available in database control of course, by and large focus on exactly the right things, namely response time and throughput. Moreover they provide a clear picture of database performance that is straightforward for humans to understand. A good picture almost always gives clarity and understanding more quickly and to a wider audience than text, as Florence Nightingale so eloquently understood about 150 years ago. The problem with many data visualisations is that they hide the core message, for example here. The em performance pages nearly always show the right thing clearly.

Central Repository. So many infrastructure management tasks benefit from a central repository of observations. I’d hazard a guess though that most database infrastructure management is still done via scripts. This approach means that items like capacity planning, comparison of time periods and so on are not readily available or rely on the knowledge of the individual administrator both of the product and the environment that they are administering.

Management templates. In my experience many environments end up with either different versions of common scripts monitoring their infrastructure or else different subsets of scripts monitoring the infrastructure.

In short, then, I love EM because it enables me to roll out clear, consistent, sensible management across a client site, and moreover to use nice graphics to communicate deep technical issues clearly to non-technical staff. What’s not to like? Well, actually, this list of pet peeves:

Navigation. The navigation in EM10 and 11 is, frankly, appalling: multiple lists of links, the same list in different orders on different pages, the same page having two identically labelled links going to different locations. Then there’s the use of the back button, or rather the lack of a reliable back operation. Combined with the MOS experience, someone, somewhere needs to tell Oracle that breadcrumbs are not the only navigation aid.

UI. I love the database performance pages as I said, yet when you navigate to the performance pages for non-oracle targets you get a completely different experience with different graphics, often focussing on different things. Oh and using a different technology and with a different look and feel. UIs really do need to be consistent.

Support. Historically a new release of the database, or a new patch version of SOA has resulted in your Enterprise Monitoring solution being uncertified against your enterprise technology stack – and in Oracle support refusing to take calls.

Security Configuration. A lot of the Oracle-inspired articles out there seem to imagine that the infrastructure of your typical enterprise has no firewalls, shares common passwords, grants ubiquitous sudo access and so on. So, for example, you are expected to be able to log on remotely to database servers from the management server or servers as SYS, the firewalls are expected to allow network traffic through on all listener ports from the OMS and back to the OMS on various upload ports, and so on. Deployment assumes that sudo is available, passwordless ssh is permitted and so forth.

This release brings a number of significant architecture and UI changes. I thought it would be useful to evaluate the new release, and especially to compare it against the lists above. My usual approach when looking at a new Oracle product is to fire up a new CentOS VM via VirtualBox and work through the documentation available at the OTN docs site. For EM10 this is a perfectly reasonable approach, and with EM11 I was also able to get away with it. For EM12 the minimum specs haven’t changed much from 11, but you really do need them. That means you will want:

A database server with at least 2GB of RAM available for the database.

An application server with at least 4GB of RAM available.

The poor old laptop that I have here wasn’t going to cut it. I did manage to install the product successfully on a single server with 2GB of RAM, but the install itself took 5 hours.

For the purposes of this exercise, therefore, I set up an Amazon AWS VPC environment as follows.

db target server – also type m1.large, though I could have got away with a small instance here.

Overall I left this setup running for 3 days, and with the CPU usage and data uploads it ended up costing me ~$70 including taxes. It’s likely that most test instances of EM12 will have to end up on production hardware/VMs. It’s also worth noting that the installation still took 1.5 hours. I’ll not cover the installation here because

The main points to note are that the installer is much, much simpler to use, and that WebLogic 10.3.5 is not separately required but a certified database is. Unlike in previous releases, the installer will correct several of the common prerequisite failures for you and allow you to correct others after the event. When the product is installed, however, you discover the major architecture change in EM12: it now effectively provides a monitoring and alerting framework, while separately developed and maintained plugins provide the actual target management functionality. Even MOS integration is now a plugin rather than a core feature.

How do we do against my pain points then?

Navigation.

EM12 provides a menu driven application navigation style rather than the sea of links management style. Moreover, unlike the sparse adoption of this in em11, this navigation is pervasive. I at least won’t be left scanning the columns of links at the bottom of the page thinking “I know the all metrics link is here someplace”.

In addition, EM12 introduces the concept of job-related home pages, so a DBA can set up a Databases home page, an IT manager can have a management overview, and so on. This functionality was available via groups in earlier versions, but required manual setup and careful thought.

UI:

I think the best way to illustrate this is with a couple of screenshots. The first is the new database home page.

I hope you’ll agree it’s clear, straightforward and well laid out. The lists of links are banished, replaced by a neat menu structure shown in action below.

Overall, then, the ease-of-use experience compared to prior versions is a fantastic improvement.

Support:

Here I can only give a qualified thumbs up, since we won’t know for a while how the new plugin architecture works out. In principle, requiring product development teams to update the plugin with each new patchset/release should result in a much better ownership experience, especially for those customers who have been told in the past to upgrade their enterprise monitoring system to stay certified when they apply a patchset or point release to the technology product they really care about. In practice it wouldn’t surprise me to see plugins trail product updates by some considerable distance.

Security:

Here things are much as they were in EM11. I still think that in many organisations the required configuration will mean much rewriting of security policies in order to allow the management host to connect through corporate firewalls, run privileged O/S commands and so on. Bear in mind also that to make the best use of the MOS integration, many organisations are likely to end up exposing the host that has these privileges to the internet.

In conclusion, I am very much impressed with EM12; the architecture and UI decisions look to be broadly the correct ones, and the areas of weakness appear to be being addressed.

Well here’s a blog entry I didn’t think I’d write. One that comes as a direct result of a request from Oracle Corp, specifically Oracle Education. The email I received is reproduced below.

Today we are releasing an Oracle Database Job Task Analysis Survey to determine what tasks are important and relevant to Oracle Database Administrators as we look to define future Oracle Database Certification and Curriculum Offerings.

We would really appreciate it if you could help us by posting this information on your Database related Blog sites.

Take the Oracle DBA Job Task Survey!…
Are you an Oracle Database Administrator? Would you like to help define the depth and scope of future Oracle Database training and certification? Join with other Oracle experts to take an online survey and tell us what tasks are important to you. Learn More

I took the survey which you can find here last night. Overall I’m happy to help publicize the survey for the following reasons.

Practitioners who actually do the job are probably a better judge of task importance than educators.

Practitioners who actually do the job are probably a better judge of task importance than consultants like me.

The OCP needs improvement and focus to remain of value.

The OCP needs in my opinion anyway to provide proof of practical expertise and not book learning.

All of these issues can be addressed, to some extent, by actually asking practitioners what they do and don’t find useful in the real world. One thing I wasn’t expecting, but only because I didn’t read the welcome page given how I’d got there, was that you get a download of the 11g Interactive Reference guide at the end of the survey. Anyway, please do take the survey; it’s short, focussed and worthwhile. If, like me, you occasionally whine about the OCP, here’s your chance to contribute at least something.


It’s been a few months since I did an install of Oracle Enterprise Manager 11g. I am, however, speaking at the UKOUG’s Management and Infrastructure SIG on the 27th about some experiences from a real-world implementation that I performed a while back (you can book for this event here). I thought therefore that it made sense to revisit the install.

For those that don’t know, EM 11.1 is built upon WebLogic Server 10.3.2. This is a new technology for Oracle DBA staff and is patched by means of the Smart Update utility. This utility has to be invoked during the installation of a base WebLogic server for Grid Control because there are some mandatory patches. The installation guide, and various guides you’ll find on the internet, talk about using Smart Update to download those patches from Oracle Support. This isn’t too much of a hassle in my view, since you’ll want to allow internet communications with Oracle Support from your OMS server anyway to get the most out of the product.

I duly downloaded WebLogic Server, installed the base product and then ran Smart Update. The very first dialog box presented asked me to update the updater (something that happens far too frequently these days, in my opinion). This I did. I then reran the updater, but found there is no longer an option for obtaining patches from Oracle Support. It turns out that this is a change in Smart Update 3.3.0: rather than downloading patches through the updater, all patching is essentially offline. The process is now:

log into MOS via a web browser

find and download the patches you require

copy them to the patch cache directory for each instance of Weblogic Server you have (this can be on shared storage)

manually unzip the patches

run the patch updater and it will detect the patches and allow application.

It’s fair to say that I don’t regard this as an improvement for Grid Control customers, though I can see how it makes sense in the context of WebLogic server farms, which you generally don’t expose to the internet.

A somewhat recursive post here. There are a number of forums around the internet which Oracle professionals increasingly use to research various issues, discover new features and diagnose problems. One such is the relatively new Database Administrators stack exchange site. For various reasons I came across this thread and in particular a piece of advice on logging long running queries using an autonomous transaction.

Personally, I suggest using autonomous transactions to enable this feature — not on the transaction itself, but as a logging mechanism letting you know what is going on. For example, you could have PROCEDURE LONG_ACTION call PROCEDURE WRITE_LOG_ENTRY (defined as an autonomous transaction) that would write a VARCHAR2 to another table. Autonomous transactions do NOT interfere with your current transaction and so you can see what’s going on via your logging entries regardless of a COMMIT or ROLLBACK in your current transaction. That said, you can’t do that with one massive DML statement; you’d have to use a loop.

Now it seems to me, at least, that the caveats I have highlighted above are somewhat underplayed. There are two issues here:

The original transaction is affected – in fact it is rewritten entirely as a series of single-row updates in a loop (a.k.a. slow-by-slow processing) rather than a single set-based update.

The result is that you will get logs, but the runtime can grow by orders of magnitude. This is unlikely to be what you want.

So there you are: two pieces of advice on the internet. How to choose between them? I suggest testing them out. The script below tests the scenario above. I have a ‘hairy calculation’, namely a function to tell whether a given positive integer is prime; I want to run it over the first ten thousand positive integers and update a table of primes accordingly. As sieving through primes is computationally intensive I expect this to take a measurable amount of time. Let’s see, shall we?
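A minimal sketch of the pattern being compared, with hypothetical names of my own (primes_t, is_prime_f, write_log_entry):

```sql
-- Sketch only: illustrative names, not the actual test script.
create table log_t (msg varchar2(200), ts timestamp default systimestamp);

create or replace procedure write_log_entry(p_msg varchar2) as
  pragma autonomous_transaction;  -- commits independently of the caller
begin
  insert into log_t(msg) values (p_msg);
  commit;
end;
/

-- Set-based version: one UPDATE, no mid-statement logging possible.
update primes_t
   set is_prime = case when is_prime_f(n) = 1 then 'Y' else 'N' end;
commit;

-- Logged version: slow-by-slow, one update per row so progress can be logged.
begin
  for r in (select n from primes_t) loop
    update primes_t
       set is_prime = case when is_prime_f(r.n) = 1 then 'Y' else 'N' end
     where n = r.n;
    if mod(r.n, 1000) = 0 then
      write_log_entry('processed up to ' || r.n);
    end if;
  end loop;
  commit;
end;
/
```

The second version buys observability at the cost of a context switch and an individual UPDATE per row, which is where the slowdown measured below comes from.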

In other words, what took just a second without my autonomous transaction took 8 seconds with it. The test is easily reconfigurable, though; running with 100,000 rows in the table produced the following results.

This post is by way of reply to Richard Foote’s latest quiz, available here. Richard wants to know how many columns a histogram will be collected on given certain conditions. I believe that the answer depends on the database version (so 9i will behave differently from 10.2 and above, for example). For my tests I’m using the script below:

The aim of the script is to run the suggested gather stats routine with no workload, with workloads where each query only queries one column (as per Richard’s example where clause) and where different queries query each of the columns, as per Richard’s description. I then add a single row and see what happens to the histograms.

Update

It is worth comparing the script below with a script that populates data using the mod function.

/*
File : Histograms.sql
Date : Aug 2011 (just)
Purpose: Demo the quiz in http://richardfoote.wordpress.com/2011/08/31/method_opt-size-auto-quiz-automatic-for-the-people/
*/
-- create the table
drop table afp purge;
create table afp(
c1 number
, c2 number
, c3 number);
-- now populate
begin
for i in 1..1000000 loop
insert into afp(c1,c2,c3) values (i,mod(i,254),mod(i,254));
end loop;
commit;
end;
/
prompt 'Print Version'
select banner from v$version;
exec dbms_stats.gather_table_stats(user,'AFP',method_opt => 'FOR ALL COLUMNS SIZE AUTO');
col column_name format a30
prompt 'Histogram Info with no queries'
select
column_name
, count(*)
from user_histograms
where
table_name = 'AFP'
group by
column_name
order by
column_name
;
-- run my sample single-column queries
-- use a pl/sql routine so as I get bind vars
-- doesn't have to be
declare
q1count number := 10;
q2count number := 10;
q3count number := 10;
retval number;
begin
-- query 1
for i in 1..q1count loop
select count(*) into retval
from afp where
c1 = round(dbms_random.value(1,100000));
end loop;
for i in 1..q2count loop
select count(*) into retval
from afp where
c2 = round(dbms_random.value(1,100000));
end loop;
for i in 1..q3count loop
select count(*) into retval
from afp where
c3 = round(dbms_random.value(1,100000));
end loop;
end;
/
exec dbms_stats.gather_table_stats(user,'AFP',method_opt => 'FOR ALL COLUMNS SIZE AUTO');
prompt 'Histogram Info with single col queries'
select
column_name
, count(*)
from user_histograms
where
table_name = 'AFP'
group by
column_name
order by
column_name
;
-- run my sample multicolumn queries
-- use a pl/sql routine so as I get bind vars
-- doesn't have to be
declare
q1count number := 10;
q2count number := 10;
q3count number := 10;
retval number;
begin
-- query 1
for i in 1..q1count loop
select count(*) into retval
from afp where
c1 = round(dbms_random.value(1,100000)) and c2 = i*i and c3 = round(dbms_random.normal()*i,0);
end loop;
for i in 1..q2count loop
select count(*) into retval
from afp where
c3 = round(dbms_random.value(1,100000)) and c1 = i*i and c2 = round(dbms_random.normal()*i,0);
end loop;
for i in 1..q3count loop
select count(*) into retval
from afp where
c2 = round(dbms_random.value(1,100000)) and c3 = i*i and c1 = round(dbms_random.normal()*i,0);
end loop;
end;
/
exec dbms_stats.gather_table_stats(user,'AFP',method_opt => 'FOR ALL COLUMNS SIZE AUTO');
prompt 'Histogram Info with queries on all columns'
select
column_name
, count(*)
from user_histograms
where
table_name = 'AFP'
group by
column_name
order by
column_name
;
prompt 'add data '
insert into afp VALUES (1000001, 42, 99999999);
commit;
exec dbms_stats.gather_table_stats(user,'AFP',method_opt => 'FOR ALL COLUMNS SIZE AUTO');
col column_name format a30
prompt 'Histogram Info after single row insert, before queries'
select
column_name
, count(*)
from user_histograms
where
table_name = 'AFP'
group by
column_name
order by
column_name
;
-- run my sample single-column queries
-- use a pl/sql routine so as I get bind vars
-- doesn't have to be
declare
q1count number := 10;
q2count number := 10;
q3count number := 10;
retval number;
begin
-- query 1
for i in 1..q1count loop
select count(*) into retval
from afp where
c1 = round(dbms_random.value(1,100000));
end loop;
for i in 1..q2count loop
select count(*) into retval
from afp where
c2 = round(dbms_random.value(1,100000));
end loop;
for i in 1..q3count loop
select count(*) into retval
from afp where
c3 = round(dbms_random.value(1,100000));
end loop;
end;
/
exec dbms_stats.gather_table_stats(user,'AFP',method_opt => 'FOR ALL COLUMNS SIZE AUTO');
prompt 'Histogram Info with single col queries'
select
column_name
, count(*)
from user_histograms
where
table_name = 'AFP'
group by
column_name
order by
column_name
;
-- run my sample multicolumn queries
-- use a pl/sql routine so as I get bind vars
-- doesn't have to be
declare
q1count number := 10;
q2count number := 10;
q3count number := 10;
retval number;
begin
-- query 1
for i in 1..q1count loop
select count(*) into retval
from afp where
c1 = round(dbms_random.value(1,100000)) and c2 = i*i and c3 = round(dbms_random.normal()*i,0);
end loop;
for i in 1..q2count loop
select count(*) into retval
from afp where
c3 = round(dbms_random.value(1,100000)) and c1 = i*i and c2 = round(dbms_random.normal()*i,0);
end loop;
for i in 1..q3count loop
select count(*) into retval
from afp where
c2 = round(dbms_random.value(1,100000)) and c3 = i*i and c1 = round(dbms_random.normal()*i,0);
end loop;
end;
/
exec dbms_stats.gather_table_stats(user,'AFP',method_opt => 'FOR ALL COLUMNS SIZE AUTO');
prompt 'Histogram Info with queries on all columns'
select
column_name
, count(*)
from user_histograms
where
table_name = 'AFP'
group by
column_name
order by
column_name
;

So in 11.2.0.2 the addition of that single row tips the optimizer over from a frequency histogram to a height-balanced histogram, just by virtue of performing the insert. For 10.2.0.4 no difference is seen, even after re-running my sample queries.
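The histogram type itself can be confirmed from the data dictionary rather than inferred from bucket counts; for example:

```sql
-- HISTOGRAM reports NONE, FREQUENCY or HEIGHT BALANCED for each column
select column_name, histogram, num_buckets
from   user_tab_col_statistics
where  table_name = 'AFP'
order  by column_name;
```

Running this after each gather in the script above makes the frequency-to-height-balanced transition explicit.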