
Analytic Functions by Example

This article builds a clear, thorough concept of analytic functions and their various options through a series of simple yet concept-building examples. It is intended for SQL coders who might not be using analytic functions due to unfamiliarity with their cryptic syntax or uncertainty about their logic of operation. Often I see people reinvent the features provided by analytic functions with native join and sub-query SQL. The article assumes the reader is familiar with basic Oracle SQL, sub-queries, joins and group functions. Building on that familiarity, it develops the concept of analytic functions through a series of examples.

It is true that whatever an analytic function does can also be done with native SQL, using joins and sub-queries. But the same routine done with analytic functions is always faster, or at least as fast, as the native SQL. Moreover, I am not even counting the time spent coding, testing, debugging and tuning those native SQLs.

The general syntax of an analytic function is:

Function(arg1, ..., argn) OVER ( [PARTITION BY <...>] [ORDER BY <...>] [<window_clause>] )

<window_clause> takes forms such as "ROWS <expr>" or "RANGE <expr>" (including the BETWEEN variants covered later).
All the keywords will be dealt with in detail as we walk through the examples. The script for creating the schema (SCOTT) on which the example queries of this article run can be found in ORACLE_HOME/sqlplus/demo/demobld.sql of any standard Oracle installation.

Consider Query-1 and its result. Query-1 returns departments and their employee counts. Most importantly, it groups the records into departments in accordance with the GROUP BY clause, so no non-"group by" column is allowed in the select clause.
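The listing for Query-1 does not survive in this copy; as a hedged reconstruction against the standard SCOTT schema (the alias DEPT_COUNT is an assumption), it would look something like:

```sql
-- Query-1 (reconstruction): employee count per department with GROUP BY;
-- only "group by" columns may appear in the select list
SELECT deptno, COUNT(*) dept_count
FROM emp
GROUP BY deptno;
```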

Now consider the analytic-function query (Query-2) and its result. Note the repeating values of the DEPT_COUNT column.
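The listing for Query-2 is likewise missing; a sketch consistent with the description (repeating DEPT_COUNT values, EMPNO in the select list) would be:

```sql
-- Query-2 (reconstruction): the same count as an analytic function;
-- the ungrouped column EMPNO can now appear alongside it
SELECT empno, deptno,
       COUNT(*) OVER (PARTITION BY deptno) dept_count
FROM emp;
```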

This brings out the main difference between aggregate and analytic functions. Though analytic functions give aggregate results, they do not group the result set: they return the group value with every record. Consequently, any other non-"group by" column or expression can be present in the select clause, for example the column EMPNO in Query-2.

Analytic functions are computed after all joins, the WHERE clause, GROUP BY and HAVING have been applied to the query. The main ORDER BY clause of the query operates after the analytic functions, so analytic functions can appear only in the select list and in the main ORDER BY clause of the query.

In the absence of any PARTITION BY or <window_clause> inside OVER( ), the function acts on the entire record set returned by the WHERE clause. Note the result of Query-3 and compare it with the result of the aggregate-function query Query-4.
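The listings for Query-3 and Query-4 are not reproduced here; a plausible pair matching the description is:

```sql
-- Query-3 (reconstruction): empty OVER( ) makes COUNT act on the
-- entire record set returned by the WHERE clause
SELECT empno, deptno, COUNT(*) OVER ( ) cnt
FROM emp
WHERE deptno IN (10, 20);

-- Query-4 (reconstruction): the equivalent aggregate returns a single row
SELECT COUNT(*) cnt
FROM emp
WHERE deptno IN (10, 20);
```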

It might be obvious from the previous example that the clause PARTITION BY is used to break the result set into groups. PARTITION BY can take any non-analytic SQL expression.

Some functions support a <window_clause> inside the partition to further limit the records they act on. In the absence of any <window_clause>, analytic functions are computed on all records of the partition.

The common analytic functions SUM, COUNT, AVG, MIN and MAX give results that do not depend on the order of the records.

Functions like LEAD, LAG, RANK, DENSE_RANK, ROW_NUMBER, FIRST, FIRST_VALUE, LAST and LAST_VALUE depend on the order of records. In the next example we will see how to specify that.

The answer is simple: by the ORDER BY clause inside the OVER( ) clause. This is different from the ORDER BY clause of the main query, which comes after WHERE. In this section we introduce each of the very useful functions LEAD, LAG, RANK, DENSE_RANK, ROW_NUMBER, FIRST, FIRST_VALUE, LAST and LAST_VALUE, and show how each depends on the order of the records.

The general syntax for specifying the ORDER BY clause in an analytic function is:
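The syntax line itself appears to have been lost from this copy; in standard Oracle form it is:

```sql
Function( ) OVER (PARTITION BY <expr1> ORDER BY <expr2> [ASC | DESC] [NULLS FIRST | NULLS LAST])
```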

ROW_NUMBER, RANK and DENSE_RANK all assign integer values to the rows depending on their order, which is why they are treated together.

ROW_NUMBER( ) gives a running serial number within a partition of records. It is very useful in reporting, especially where different partitions need their own serial numbers. In Query-5, ROW_NUMBER( ) gives separate sets of running serial numbers to the employees of departments 10 and 20, based on their HIREDATE.
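Query-5 is not reproduced here; a reconstruction consistent with the description is:

```sql
-- Query-5 (reconstruction): separate running serial numbers (SRLNO)
-- per department, ordered by hire date
SELECT empno, deptno, hiredate,
       ROW_NUMBER( ) OVER (PARTITION BY deptno
                           ORDER BY hiredate NULLS LAST) srlno
FROM emp
WHERE deptno IN (10, 20)
ORDER BY deptno, srlno;
```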

RANK and DENSE_RANK both rank the records based on a column value or expression. In case of a tie of two records at position N, RANK gives both records position N, skips position N+1 and gives the next record position N+2, while DENSE_RANK gives both records position N but does not skip position N+1.

Query-6 shows the usage of both RANK and DENSE_RANK. For DEPTNO 20 there are two contenders for the first position (EMPNO 7788 and 7902), and both RANK and DENSE_RANK declare them joint toppers. RANK then skips the next value, 2, and gives the next employee (EMPNO 7566) position 3. DENSE_RANK leaves no such gaps.
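Query-6 is not reproduced here; a sketch that produces the tie described above (EMPNO 7788 and 7902 share SAL 3000 in DEPTNO 20 of the SCOTT schema) is:

```sql
-- Query-6 (reconstruction): RANK vs DENSE_RANK on salary, highest first
SELECT empno, deptno, sal,
       RANK( )       OVER (PARTITION BY deptno ORDER BY sal DESC NULLS LAST) rnk,
       DENSE_RANK( ) OVER (PARTITION BY deptno ORDER BY sal DESC NULLS LAST) dense_rnk
FROM emp
WHERE deptno IN (10, 20)
ORDER BY deptno, sal DESC;
```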

LEAD can compute an expression on the next rows (rows that come after the current row) and return the value to the current row. The general syntax of LEAD is shown below:

LEAD (<sql_expr>, <offset>, <default>) OVER (<analytic_clause>)

<sql_expr> is the expression to compute from the leading row.
<offset> is the index of the leading row relative to the current row.
<offset> is a positive integer with default 1.
<default> is the value to return if the <offset> points to a row outside the partition range.

The syntax of LAG is similar, except that the offset for LAG goes into the previous rows. Query-7 and its result show simple usage of the LAG and LEAD functions.
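Query-7 is not reproduced here; a sketch of a simple LAG/LEAD usage (the offset 1 and default 0 follow the defaults discussed above; the choice of SAL and hire-date ordering is an assumption) is:

```sql
-- Query-7 (reconstruction): salary of the previous and next employee
-- in hire-date order within each department; 0 when no such row exists
SELECT deptno, empno, hiredate, sal,
       LAG(sal, 1, 0)  OVER (PARTITION BY deptno ORDER BY hiredate) prev_sal,
       LEAD(sal, 1, 0) OVER (PARTITION BY deptno ORDER BY hiredate) next_sal
FROM emp
WHERE deptno IN (10, 20)
ORDER BY deptno, hiredate;
```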

The FIRST_VALUE analytic function picks the first record from the partition after doing the ORDER BY. The <sql_expr> is computed on the columns of this first record and results are returned. The LAST_VALUE function is used in similar context except that it acts on the last record of the partition.
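The FIRST_VALUE example query is missing from this copy; a hedged reconstruction computing each employee's gap from the department's first hire is:

```sql
-- Reconstruction: days between each hire and the department's first hire
SELECT empno, deptno, hiredate,
       hiredate - FIRST_VALUE(hiredate)
         OVER (PARTITION BY deptno ORDER BY hiredate) day_gap
FROM emp
WHERE deptno IN (20, 30)
ORDER BY deptno, day_gap;
```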

The FIRST function (more properly, the KEEP FIRST function) is used in a special situation. Suppose we rank a group of records and find several records tied for the first rank. Now we want to apply an aggregate function over the records of that first rank. KEEP FIRST enables that.

The general syntax is:

Function( ) KEEP (DENSE_RANK FIRST ORDER BY <expr>) OVER (<partitioning_clause>)

Please note that FIRST and LAST are the only functions that deviate from the general syntax of analytic functions: they do not have the ORDER BY inside the OVER clause, nor do they support any <window_clause>. The ranking done by FIRST and LAST is always a DENSE_RANK. The query below shows the usage of the FIRST function; the LAST function is used in a similar context to perform computations on the last-ranked records.
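The FIRST example query referred to above is not reproduced; a sketch in the stated syntax (the AVG aggregate and the HIREDATE ranking are assumptions) is:

```sql
-- Reconstruction: average salary of each department's earliest-hired
-- employees (all employees tied at DENSE_RANK FIRST by HIREDATE)
SELECT empno, deptno, hiredate, sal,
       AVG(sal) KEEP (DENSE_RANK FIRST ORDER BY hiredate)
                OVER (PARTITION BY deptno) avg_sal_first_hired
FROM emp
ORDER BY deptno, hiredate;
```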

Some analytic functions (AVG, COUNT, FIRST_VALUE, LAST_VALUE, MAX, MIN and SUM among the ones we discussed) can take a window clause to further sub-partition the result and apply the analytic function. An important feature of the windowing clause is that it is dynamic in nature.

The general syntax of the <window_clause> is

[ROWS or RANGE] BETWEEN <start_expr> AND <end_expr>

<start_expr> can be any one of the following

UNBOUNDED PRECEDING

CURRENT ROW

<sql_expr> PRECEDING or FOLLOWING.

<end_expr> can be any one of the following

UNBOUNDED FOLLOWING or

CURRENT ROW or

<sql_expr> PRECEDING or FOLLOWING.

For ROWS-type windows the definition is in terms of row numbers before or after the current row, so <sql_expr> must evaluate to a positive integer.

For RANGE-type windows the definition is in terms of ORDER BY values before or after the current row. We will take this up in detail later.

ROWS and RANGE cannot appear together in one OVER clause. The window is defined relative to the current row but may or may not include it: both the start point and the end point of the window can fall before or after the current row; only the start point cannot come after the end point. If either point of the window is left unspecified, the default is UNBOUNDED PRECEDING for <start_expr> and UNBOUNDED FOLLOWING for <end_expr>.

If the end point is the current row, the syntax can be given in terms of the start point alone:

[ROWS or RANGE] [<start_expr> PRECEDING or UNBOUNDED PRECEDING]

[ROWS or RANGE] CURRENT ROW is also allowed, but it is redundant: the function then behaves as a single-row function and acts only on the current row.

Function( ) OVER (PARTITION BY <expr1> ORDER BY <expr2,..> ROWS BETWEEN <start_expr> AND <end_expr>)
or Function( ) OVER (PARTITION BY <expr1> ORDER BY <expr2,..> ROWS [<start_expr> PRECEDING or UNBOUNDED PRECEDING])

For ROWS-type windows the windowing clause is in terms of record numbers.

Query-10 has no obvious real-life interpretation (except for the column FROM_PU_TO_CURR), but it illustrates the various windowing clauses through a COUNT(*) function. The count simply shows the number of rows inside each window definition. Note the build-up of the count in each column for the year 1981.

The column FROM_P3_TO_F1 shows a window whose start point is before the current row and whose end point is after it. This is a 5-row window; it shows values less than 5 at the beginning and end of the partition.

The column FROM_PU_TO_CURR shows a window whose start point is before the current row and whose end point is the current row. This is the only column with real-world significance: it can be read as the yearly employee build-up of the organization as each employee is hired.

The column FROM_P2_TO_P1 shows a window whose start point and end point are both before the current row. The count remains constant once the window has its full complement of previous rows.

The column FROM_F1_TO_F3 shows a window whose start point and end point are both after the current row. This is the reverse of the previous column. Note how the count declines toward the end.
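Query-10 itself does not survive in this copy; the following sketch reconstructs it, with the exact offsets inferred (as assumptions) from the column names:

```sql
-- Query-10 (reconstruction): ROWS windows of various shapes, each
-- counted with COUNT(*), partitioned by hire year
SELECT empno, deptno, hiredate, TO_CHAR(hiredate, 'YYYY') yr,
       COUNT(*) OVER (PARTITION BY TO_CHAR(hiredate, 'YYYY') ORDER BY hiredate
                      ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) from_pu_to_curr,
       COUNT(*) OVER (PARTITION BY TO_CHAR(hiredate, 'YYYY') ORDER BY hiredate
                      ROWS BETWEEN 3 PRECEDING AND 1 FOLLOWING) from_p3_to_f1,
       COUNT(*) OVER (PARTITION BY TO_CHAR(hiredate, 'YYYY') ORDER BY hiredate
                      ROWS BETWEEN 2 PRECEDING AND 1 PRECEDING) from_p2_to_p1,
       COUNT(*) OVER (PARTITION BY TO_CHAR(hiredate, 'YYYY') ORDER BY hiredate
                      ROWS BETWEEN 1 FOLLOWING AND 3 FOLLOWING) from_f1_to_f3
FROM emp
ORDER BY hiredate;
```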

Function( ) OVER (PARTITION BY <expr1> ORDER BY <expr2> RANGE BETWEEN <start_expr> AND <end_expr>)
or
Function( ) OVER (PARTITION BY <expr1> ORDER BY <expr2> RANGE [<start_expr> PRECEDING or UNBOUNDED PRECEDING])

For <start_expr> or <end_expr> we can use UNBOUNDED PRECEDING, CURRENT ROW, or <sql_expr> PRECEDING or FOLLOWING. However, for RANGE-type windows <sql_expr> must evaluate to a value compatible with the ORDER BY expression <expr2>.

<sql_expr> is a logical offset. It must be a constant or expression that evaluates to a positive numeric value or an interval literal. Only one ORDER BY expression is allowed.

If <sql_expr> evaluates to a numeric value, then the ORDER BY expr must be a NUMBER or DATE datatype. If <sql_expr> evaluates to an interval value, then the ORDER BY expr must be a DATE datatype.

Note the example below (Query-11), which uses RANGE windowing. The important thing here is that the size of the window, in terms of the number of records, can vary.
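Query-11 is not reproduced here; a hedged sketch of a RANGE window (the 90-day interval is an assumption) is:

```sql
-- Query-11 (reconstruction): for each employee, count colleagues hired
-- in the 90 days up to and including their own hire date; the window
-- spans a varying number of rows
SELECT empno, deptno, hiredate,
       COUNT(*) OVER (PARTITION BY deptno ORDER BY hiredate
                      RANGE 90 PRECEDING) cnt_hired_90_days
FROM emp
WHERE deptno IN (10, 20)
ORDER BY deptno, hiredate;
```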

Defining the PARTITION BY and ORDER BY clauses on indexed columns (ordered in accordance with the PARTITION BY clause and then the ORDER BY clause of the analytic function) will give optimum performance. For Query-5, for example, a composite index on the (deptno, hiredate) columns will prove effective.

It is advisable to always use the CBO for queries using analytic functions. The tables and indexes should be analyzed, and the optimizer mode should be CHOOSE.

Even in the absence of indexes, analytic functions provide acceptable performance, but they need to sort the data to compute the PARTITION BY and ORDER BY clauses. If the query contains multiple analytic functions, sorting and partitioning on two different columns should be avoided if neither is indexed.

The aim of this article is not to push the reader into forcing analytic functions into every complex SQL. It is meant for the SQL coder who has been avoiding analytic functions until now, even in complex analytic queries, painstakingly reinventing the same features with native SQL and joins. Its job is done if such a person finds analytic functions clear, understandable and usable after going through the article, and starts using them.

Comments

Most of the examples that I have come across deal with simple dates. What about when you have a data set that contains records with timestamps and you would like to roll them up to the second and then look at a sliding window to find, say the busiest 5 minutes of the day? It is trivial to create the query to aggregate the data into per-second blocks but I cannot figure out the syntax to put a window around it.

SELECT TO_CHAR(timepoint,'DD.MM.YYYY HH24:MI:SS.FF3') "Start_bussiest_5sec_interval"
FROM
(SELECT timepoint,
DENSE_RANK() over (order by forw_dens_per_5sec DESC,end_time_5sec-timepoint,timepoint) rank
FROM
(SELECT timepoint,
COUNT(*) over (order by timepoint range BETWEEN CURRENT row AND interval '5' second following ) forw_dens_per_5sec,
MAX(timepoint) over (order by timepoint range BETWEEN CURRENT row AND interval '5' second following ) end_time_5sec
FROM trt
)
)
WHERE rank=1;

The output of your query is Start_bussiest_5sec_interval = 16.09.2010 07:09:35.796. But I guess this is not the start of the busiest 5-second interval; I guess it should be 16.09.2010 07:09:30.921. If I am correct, then your query should be modified here:

SELECT *
FROM
(SELECT
DENSE_RANK() over (order by forw_dens_per_5sec DESC,end_time_5sec-timepoint,timepoint) rank,
s.*,
end_time_5sec-timepoint dff
FROM
(SELECT timepoint,
COUNT(*) over (order by timepoint range BETWEEN CURRENT row AND interval '5' second following ) forw_dens_per_5sec,
MAX(timepoint) over (order by timepoint range BETWEEN CURRENT row AND interval '5' second following ) end_time_5sec
FROM trt
) s
)ss
--WHERE rank=1
;

The density is 5 records per interval for both solutions (mine and yours), but mine has a minor difference between the first member and the last one (order by forw_dens_per_5sec DESC, end_time_5sec-timepoint, timepoint).

When I was stuck reading the Oracle documentation about analytic functions, which describes them highly formally, I found this excellent article describing complex things in really simple terms with examples.

-- How many days after the first hire of each department were the next employees hired?
SELECT empno, deptno, hiredate - FIRST_VALUE(hiredate)
OVER (PARTITION BY deptno ORDER BY hiredate) DAY_GAP
FROM emp
WHERE deptno IN (20, 30)
ORDER BY deptno, DAY_GAP;

I think there is an error in this query: it does not return the same results as shown in the example.
Thanks in advance.

I was trying to use the dense_rank function to tune a query but really couldn't understand the Oracle doco - your page helped me make the breakthrough and my query runs instantly now whereas it took an hour before.

SELECT empno, deptno, sal,
last_value(sal)
OVER (PARTITION BY deptno order by sal desc) col1,
first_value(sal)
OVER (PARTITION BY deptno order by sal asc) col2,
first_value(sal)
OVER (PARTITION BY deptno order by sal desc) col3,
last_value(sal)
OVER (PARTITION BY deptno order by sal asc) col4
FROM emp

Broadly speaking, all analytic functions with ORDER BY operate on a default window: a subset of contiguous rows (according to the analytic ORDER BY clause) within the partition. The default windowing clause is "RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW". This is what affects LAST_VALUE.

To get the results you want, you need to use:
(PARTITION BY deptno order by sal desc ROWS
BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING).

I am finding it hard to understand this; can someone explain how the last column is counted?
Based on what I understand, "CNT_LT_HALF" halves the current row's SAL and compares it with the preceding SAL values, and that looks fine (note the count does not include the current row, so it is "less than", not "less than or equal to").
However, when I try to apply the same logic to the "CNT_MT_HALF" column:
say we take the current row's SAL/2, e.g. 800/2 = 400;
now within DEPTNO 20, if you compare this value with all the following SAL values, then for the first row the column should have a value of "4" and not "3". I ran the query in an Oracle database and the result matches the article's output ("3"). I know I am understanding it wrong; can someone explain?

I had a really slow SQL query with columns for effective date, amount and running total to date.
By using an analytic function I was able to turn a 90-second query into a < 2-second query with the following syntax.

SELECT effectivedate, amount, SUM(amount) OVER (PARTITION BY sysdate ORDER BY effectivedate)
FROM mytable
ORDER BY effectivedate DESC

I think this little snippet will be useful in many applications.
Truly enjoyed your article, which gave me the idea to try it this way! Thanks so much!

Am I doing something wrong?
I have table a and table b, and the relation between them is one-to-many. In table a I have a column tax; in table b I have unit price and qty. What I need to get from my query is the avg tax, the total (sum) tax and the sum of price*qty. I have a problem with avg_tax_chg and doc_$_tot when joining the tables. Can somebody help?
select a.doc_num, a.tax_chg,
avg(a.tax_chg) over ( ) avg_tax_chg,
sum(a.tax_chg) over (partition by a.ord_tp_cd,a.final_dt,a.stat_cd) tot_tax_chg,
sum(b.qty*b.unit_prc) over (partition by s.del_doc_num) doc_$_tot
from a, b
where a.doc_num = b.doc_num

Mr. Swainprafullaranjan, the only obscurity with the text on this page lies within the content of your comment. You provide incomplete sentences with the majority of words misspelled, and absolutely no reference to what part of the article you find unclear or improper.

Had you written your thoughts truly as just a comment, I would have refrained from mine. But since you felt obliged to lecture the author on improvements before he writes further topics, I feel absolutely obliged to offer you the same suggestion before you critique any future article, post, or comment on this site.

Shouvic Basu effectively conveyed in his article, completely, what his topic statement intended: to provide ".. a clear, thorough concept of analytic functions and its various options by a series of simple yet concept building examples".

The flow of his presentation demonstrates a unique ability that I would coin as "precision in detail" of the subject matter. The quantity of detail and selection of organization in my opinion produced an article of many abilities - readability, learnability, capability... and enjoyability - a rare treat in reference material work.

I believe this view is the shared consensus amongst his readers. I can relate exactly to the response by Luke Curran: "Excellent article. I personally learn best with simple examples to demonstrate potentially complex concepts ... and this article was perfect. Thanks. "

Shouvic, I appreciate with great admiration your knowledge sharing and look forward to all future topics that you chose to cover. I have been working extensively with analytic functions on my current work projects and this article filled in a lot of background for me as well as new areas to further this study.

As the author of a chess novel myself, there were periods, painstaking times, in my writing when, in retrospect, had I possessed the same ability as you to convey detail with such precision, I believe I would not be dyeing as much grey hair today!

I decided to learn analytic functions conceptually, and luckily I found this article. It's very useful, and its way of treating the subject is very effective. While reading, I tried the examples practically, and now I am quite confident about their usage.

Here in this example, we have columns named gl, curr, day, amount, month and year, taken from one transaction table.
Now, to calculate the YTD (year-to-date), I have to calculate the month first and then the year.

Step 1: Month-wise summation:
I have to calculate the sum for each day of the month.

Step 2: Year-wise summation (YTD calculation):
We use the output of Step 1 to perform this Step 2 process, i.e., the YTD calculation.
So we have to do the year-wise summation over the Step 1 month-wise data (the output above).

So for the year-to-date calculation, we have to sum all the dates of a month and then sum all the months to get the YTD.
How can I do this YTD using analytic functions (i.e., I have to sum up for every row)?

With 16 years of Oracle experience, I would just stay away from the "partition by" and "over" clauses, because I never really understood what they did. Your article is an eye-opener for me. Yeah, I promise I will buy your book if you write one!!

I came across this article back in 2009 and printed it out (I wanted to make sure that if the post was ever removed, I would still have this vital information). I have referred back to these examples many times, and they have helped me numerous times in solving a problem. Just wanted to let you know that an article you wrote in 2004 is still being used today. Thank you.

It is very easy to follow your article, but I am not familiar with the concept of ROWS and RANGE.
In your example, what does "3 PRECEDING AND 1 FOLLOWING" refer to, and how does it work in the syntax below? I have tried a lot to get it, but in vain.
Kindly clarify it for me; it will be very helpful.

Thanks for the great blog :-) I want to ask if it is possible to do some kind of partition ordering. I know how to order records within partitions created by the PARTITION BY and ORDER BY clauses, and I can assign row numbering specific to a partition. What I want is to order the partitions themselves according to the first record in each partition.

I can't find any documentation on creating two ranks based on a single field, like rank() over (partition by field order by date where date > sysdate) rank_1, rank() over (partition by field order by date where date <= sysdate) rank_2.

One of my reports has a requirement to derive statistics by the last of something, where the conditions differ case by case. The example on PARTITION BY has helped me a lot. I thought I would have to do that report entirely in PL/SQL :) This page has definitely saved me a lot of time.

You can just have a look at the Query-5 example!
The SRLNO field is what you ask for.

SELECT empno, deptno, hiredate,
ROW_NUMBER( ) OVER (PARTITION BY deptno ORDER BY hiredate NULLS LAST) SRLNO
FROM emp
WHERE deptno IN (10, 20)
ORDER BY deptno, SRLNO;