
Monday, December 29, 2008

For all Dutch readers interested in the SQL model clause (I wonder how selective those two predicates are ...): the last part of the SQL Model Clause Tutorial is published in the winter edition 2008 of OGh Visie. The subject is "Practicalities of the SQL Model Clause" and it tries to answer the question for which kind of problems a model clause query is best applied.

For all non-Dutch readers interested in the SQL model clause: a translation of this article will appear on this blog very soon.

"I have a table that contains a list of items that are to be discussed in a meeting, all taking place in the same room. For some meetings a guest speaker is invited, those meetings have a fixed start time. All meetings have a fixed duration in minutes. The days all start with opening and end with close. How can we generate the start times of the items that have no fixed start time set?

e can start at 09:00 to fill the gap between opening and a.
c can start at 09:40 to fill the gap between e and a.
f can start at 11:00 to fill the gap between a and lunch.
d can start at 11:40 to fill the gap between f and lunch.

More combinations are very legal, as long as the fixed_start_times are honoured. How can this list be generated in 1 sql? When a day becomes overbooked the items that fall must get something like 'does not fit' to signal it's too busy that day."

In real life I would never implement a SQL only solution for such a problem, because the resulting SQL will be quite hard to maintain, even when properly documented. But it sure is fun to do it using only SQL.

If you clicked the link in the beginning of the post, you might have seen a solution already. In this post I will try to explain how I solved this problem which may look very difficult at first. I think it demonstrates the power of the SQL model clause very well.

The first thing to do here is to normalize the table by splitting the comma-separated string into separate rows, one row for each day. Without such a normalized set, a solution would be much harder. To split the string into several rows, I use a technique that resembles the one described in this post, particularly the SQL model clause variant. This query does this part of the job:
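The original query is not reproduced in this extract; a sketch of the technique it describes, assuming a table t with columns item, days (the comma-separated day list), start_time and duration, could look like this:

```sql
-- Sketch only: table and column names are assumed, not taken from the post.
-- Cell [0] holds the original row; RETURN UPDATED ROWS keeps only the newly
-- created cells 1 .. number-of-commas + 1.
select item
     , days
     , start_time
     , duration
  from t
 model
       return updated rows
       partition by (item)
       dimension by (0 as i)
       measures (days, start_time, duration)
       rules
       ( days[for i from 1 to regexp_count(days[0],',') + 1 increment 1]
           = regexp_substr(days[0],'[^,]+',1,cv(i))
       , start_time[i > 0] = start_time[0]
       , duration[i > 0] = duration[0]
       )
```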

This query puts every row in its own partition, and indexes this row with dimension i set to 0. These rows are the original ones, which we won't return, because of the use of the keyword RETURN UPDATED ROWS. The model creates new cells with dimension values from 1 and upwards. In each partition the number of commas is calculated with regexp_count(days[0],','). The number of commas plus one is the number of rows that have to be generated. For each new row, regexp_substr(days[0],'[^,]+',1,cv(i)) gives the cv(i)-th element in the string.

The less tricky part of the query is to change some datatypes: days is set to varchar2(3) for a prettier layout only, and the start_time and end_time should be dates for easier calculation. You see that a default date is chosen, being the first day of the current month. This part of the date is irrelevant. Using intervals (numtodsinterval) I can now easily add the number of minutes to the start_time to calculate the end_time.
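As a small illustration of this step (column names assumed): to_date with only a time mask defaults the missing date part to the first day of the current month, which gives exactly the behaviour described above.

```sql
-- Sketch: convert the 'HH24:MI' start time string to a date and derive the
-- end time with interval arithmetic. normalized_items is a placeholder name.
select cast(days as varchar2(3)) as days
     , to_date(start_time,'hh24:mi') as start_time
     , to_date(start_time,'hh24:mi')
       + numtodsinterval(duration,'minute') as end_time
  from normalized_items
```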

This was the easy part :-). The next challenge is to process all rows with an empty start_time and allocate a time slot to them. The question does not describe an algorithm for this, so I chose an easy one: process all rows with an empty start time, starting with the longest one. For each of these rows, assign the row to the largest gap in time, at the beginning of the interval. A few challenges here: determining the gaps, adjusting the gaps after an item has been assigned in a gap, and detecting when an item doesn't fit in any gap.

For these challenges I put up another model, partitioned by day. The rows inside each partition are indexed by a number with the row_number analytic function, the rows with an empty start_time first. This way I have a nice dimension I can iterate over AND I can use the UNTIL clause to iterate only over the ones with an empty start_time. For this I have to introduce the cnt measure, which counts the number of empty start_times in a partition. So in each partition there will be as many iterations as there are cells with empty start_times.

Let's first show the resulting query and then explain what's going on:
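The resulting query itself is not reproduced in this extract. The following sketch shows the shape it could take: the measure names (cnt, next_start_time, rn_with_largest_gap, fit) follow the text above, but the rule bodies are my reconstruction and may well differ from the original.

```sql
-- Reconstruction, not the original query; normalized_items is assumed to be
-- the subquery factoring clause that holds the normalized set.
select days, item, start_time, end_time, fit
  from normalized_items
 model
       partition by (days)
       dimension by
       ( row_number() over
         ( partition by days
           order by case when start_time is null then 1 else 2 end
                  , duration desc
         ) as rn
       )
       measures
       ( item
       , start_time
       , start_time + numtodsinterval(duration,'minute') as end_time
       , duration
       , count(case when start_time is null then 1 end)
           over (partition by days) as cnt
       , lead(start_time) over (partition by days order by start_time)
           as next_start_time
       , 0 as rn_with_largest_gap
       , cast(null as varchar2(11)) as fit
       )
       rules iterate (999) until (iteration_number + 1 >= cnt[1])
       ( -- rule 1: which row is followed by the largest open time slot?
         rn_with_largest_gap[1]
           = max(rn) keep
             (dense_rank last order by nvl(next_start_time - end_time,-1))[any]
       , -- rule 2: flag the item if even that slot is too small
         fit[iteration_number + 1]
           = case
             when ( next_start_time[rn_with_largest_gap[1]]
                  - end_time[rn_with_largest_gap[1]] ) * 24 * 60
                < duration[cv()]
             then 'doesn''t fit'
             end
       , -- rules 3 and 4: allocate the item at the start of the slot
         start_time[iteration_number + 1]
           = case when fit[cv()] is null
                  then end_time[rn_with_largest_gap[1]]
             end
       , end_time[iteration_number + 1]
           = start_time[cv()] + numtodsinterval(duration[cv()],'minute')
       , -- rules 5 and 6: the remainder of the slot now follows the new item
         next_start_time[iteration_number + 1]
           = case when fit[cv()] is null
                  then next_start_time[rn_with_largest_gap[1]]
             end
       , next_start_time[rn_with_largest_gap[1]]
           = case when fit[iteration_number + 1] is not null
                  then next_start_time[cv()]
             end
       )
 order by days
        , start_time
```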

In the with clause you see the normalized query discussed earlier. The part I'm discussing here starts at line 18. For each iteration I have to choose which of the rows has the largest open time slot attached at the end. For this the auxiliary measure next_start_time is introduced, calculated with the lead analytic function. The index of the row with the largest open time slot is stored in the auxiliary measure rn_with_largest_gap. Each iteration starts with calculating this value again. All subsequent rules of the model use this rn_with_largest_gap measure. Rules 3 and 4 calculate the start_time and end_time of the rows that started out with an empty start_time. The last two rules adjust the next_start_time measures to the new situation: since the newly allocated time slot is adjacent to the existing time slot, the next_start_time of the original one is set to null, and the next_start_time of the new time slot is the original next_start_time.

The last part of the solution is the addition of the fit measure. With this measure and the second rule, I can check whether the largest open time slot is large enough for the item. If it is not large enough, I put the text "doesn't fit" in this cell. All subsequent rules effectively do nothing when a non-null value is encountered in this cell. With the current data, everything fits, but if you click on the link to the original question, you'll see a situation where two rows don't fit.

If you have not given up and have read all the way through to here, you may agree with me that this is another nice example of a complex algorithm that can be solved with only SQL. Although in this category, nothing beats this one of course. But remember kids: don't try this at work ;-)

Friday, December 5, 2008

In this AMIS-post, Lucas Jellema was looking for a way to duplicate certain grouping sets for his ADF tree structure. Using ROLLUP this is not completely possible, but with GROUPING SETS it is, as I will show below. It was a small challenge however to distinguish the duplicate sets from each other in the select list and this is where I learned something new.

But there is no way to specify the empty grouping set, as it is implicit with the rollup operator. And so it's also impossible to duplicate this empty grouping set with rollup. It is possible though with the grouping sets notation. First, let's rewrite the original rollup expression to the more tedious but clearer grouping sets notation, like this:
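The original expressions are not shown in this extract, but the equivalence can be sketched as follows: a rollup over two columns is shorthand for three grouping sets, so it can be spelled out explicitly.

```sql
-- Sketch on the emp table: group by rollup (deptno, job) produces the
-- grouping sets (deptno,job), (deptno) and (), so it is equivalent to:
select deptno
     , job
     , count(*) as cnt
  from emp
 group by grouping sets ((deptno, job), deptno, ())
```

With the grouping sets notation, any of these sets, including the empty one, can simply be listed twice to duplicate it.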

Now my question was: how can I distinguish between the two equal grouping sets in my select list? Not with GROUPING_ID, because I'd have to mention the columns that were grouped on, not the grouping set. A grouping_id(deptno) would yield the same value in both deptno grouping sets.

My next thought was to use the row_number analytic function, including all columns present in a grouping set in the partition by clause. This yields a "1" for all singular grouping sets, and an arbitrary 1 and 2 within the duplicated grouping sets, like this:
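A minimal illustration of the idea, on just a duplicated deptno set (the full query follows further down): the analytic function runs after the group by, so row_number partitioned by the grouped column numbers the two copies of each deptno row 1 and 2 in arbitrary order.

```sql
-- Sketch: each deptno appears twice (once per duplicated grouping set);
-- row_number labels the two copies 1 and 2.
select deptno
     , row_number() over (partition by deptno order by null) as set_nr
  from emp
 group by grouping sets (deptno, deptno)
 order by deptno
        , set_nr
```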

The problem is the ordering. There is nothing to order by, so I chose NULL. Any other constant value would have sufficed as well. But this ordering is arbitrary, meaning that either of the two rows can end up as 1 or as 2, and some implementation detail of Oracle has to decide which one it's going to be. And if I use the row_number function twice, would both expressions give the same results? Both depend on the same implementation detail, and as the output below shows, they do indeed give the same results:

rwijk@ORA11GR1> select case grouping_id(deptno,job,empno)
  2           when 0 then 'grouped by deptno,job,empno,ename'
  3           when 1 then 'grouped by deptno,job'
  4           when 3 then 'grouped by deptno, grouping set ' ||
  5                to_char(2+row_number() over (partition by deptno,job,empno order by null))
  6           when 7 then 'grouped by (), grouping set ' ||
  7                to_char(4+row_number() over (partition by deptno,job,empno order by null))
  8         end gr_text
  9       , case grouping_id(deptno,job,empno)
 10           when 0 then 'grouped by deptno,job,empno,ename'
 11           when 1 then 'grouped by deptno,job'
 12           when 3 then 'grouped by deptno, grouping set ' ||
 13                to_char(2+row_number() over (partition by deptno,job,empno order by null))
 14           when 7 then 'grouped by (), grouping set ' ||
 15                to_char(4+row_number() over (partition by deptno,job,empno order by null))
 16         end gr_text2
 17    from emp
 18   group by grouping sets
 19         ( (deptno,job,empno,ename)
 20         , (deptno,job)
 21         , deptno
 22         , deptno
 23         , ()
 24         , ()
 25         )
 26   order by deptno
 27        , job
 28        , empno
 29  /

GR_TEXT                           GR_TEXT2
--------------------------------- ---------------------------------
grouped by deptno,job,empno,ename grouped by deptno,job,empno,ename
grouped by deptno,job             grouped by deptno,job
grouped by deptno,job,empno,ename grouped by deptno,job,empno,ename
grouped by deptno,job             grouped by deptno,job
grouped by deptno,job,empno,ename grouped by deptno,job,empno,ename
grouped by deptno,job             grouped by deptno,job
grouped by deptno, grouping set 3 grouped by deptno, grouping set 3
grouped by deptno, grouping set 4 grouped by deptno, grouping set 4
grouped by deptno,job,empno,ename grouped by deptno,job,empno,ename
grouped by deptno,job,empno,ename grouped by deptno,job,empno,ename
grouped by deptno,job             grouped by deptno,job
grouped by deptno,job,empno,ename grouped by deptno,job,empno,ename
grouped by deptno,job,empno,ename grouped by deptno,job,empno,ename
grouped by deptno,job             grouped by deptno,job
grouped by deptno,job,empno,ename grouped by deptno,job,empno,ename
grouped by deptno,job             grouped by deptno,job
grouped by deptno, grouping set 3 grouped by deptno, grouping set 3
grouped by deptno, grouping set 4 grouped by deptno, grouping set 4
grouped by deptno,job,empno,ename grouped by deptno,job,empno,ename
grouped by deptno,job             grouped by deptno,job
grouped by deptno,job,empno,ename grouped by deptno,job,empno,ename
grouped by deptno,job             grouped by deptno,job
grouped by deptno,job,empno,ename grouped by deptno,job,empno,ename
grouped by deptno,job,empno,ename grouped by deptno,job,empno,ename
grouped by deptno,job,empno,ename grouped by deptno,job,empno,ename
grouped by deptno,job,empno,ename grouped by deptno,job,empno,ename
grouped by deptno,job             grouped by deptno,job
grouped by deptno, grouping set 3 grouped by deptno, grouping set 3
grouped by deptno, grouping set 4 grouped by deptno, grouping set 4
grouped by (), grouping set 5     grouped by (), grouping set 5
grouped by (), grouping set 6     grouped by (), grouping set 6

31 rows selected.

But still I felt uncomfortable with this solution. I was pressing the Previous and Next buttons in the SQL Reference Manual to find any related functions and saw the GROUP_ID function. "This has got to be the most useless function ever" must have been my thoughts when I read about this function years ago. In fact, I had completely forgotten about it. But its entire reason for being is a case like this with duplicate grouping sets, to be able to distinguish between the sets:
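A sketch of the experiment described below: the empty grouping set listed five times, with GROUP_ID() numbering the duplicates.

```sql
-- GROUP_ID() returns 0 for the first occurrence of a duplicate grouping set
-- and increments for each further duplicate, so the five empty sets are
-- numbered 0 through 4.
select group_id() as gid
     , count(*) as cnt
  from emp
 group by grouping sets ((), (), (), (), ())
 order by gid
```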

Here I have duplicated the empty grouping set 4 times and you see the outcome of the GROUP_ID() function for those five sets ranging from 0 to 4. This is exactly what is needed here, so now the initial query is quite easy, like this:
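The final query is not reproduced in this extract, but one possible shape of it, reusing the grouping sets from the transcript above with GROUP_ID() instead of row_number, is:

```sql
-- Sketch: GROUP_ID() distinguishes the duplicated deptno and () sets without
-- any analytic function; the labels match the earlier output (3/4 and 5/6).
select case grouping_id(deptno, job, empno)
       when 0 then 'grouped by deptno,job,empno,ename'
       when 1 then 'grouped by deptno,job'
       when 3 then 'grouped by deptno, grouping set ' || to_char(3 + group_id())
       when 7 then 'grouped by (), grouping set ' || to_char(5 + group_id())
       end as gr_text
  from emp
 group by grouping sets
       ( (deptno, job, empno, ename)
       , (deptno, job)
       , deptno
       , deptno
       , ()
       , ()
       )
 order by deptno
        , job
        , empno
```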