I want to get the UserId, Value for the max(Date) for each UserId. That is, the Value for each UserId that has the latest date. Is there a way to do this simply in SQL? (Preferably Oracle)

Update: Apologies for any ambiguity: I need to get ALL the UserIds. But for each UserId, only that row where that user has the latest date.

30 comments

@kiruba 2020-05-14 10:58:07

The query below can work. Note that the alias rn cannot be referenced in the WHERE clause of the same query level, so the filter has to go in an outer query:

SELECT user_id, value, date
FROM (SELECT user_id, value, date,
             row_number() OVER (PARTITION BY user_id ORDER BY date DESC) AS rn
      FROM table_name)
WHERE rn = 1

@David Aldridge 2008-09-23 14:41:11

This will retrieve all rows for which the my_date column value is equal to the maximum value of my_date for that userid. This may retrieve multiple rows for the userid where the maximum date is on multiple rows.

"using analytic queries and a self-join defeats the purpose of analytic queries"

There is no self-join in this code. There is instead a predicate placed on the result of the inline view that contains the analytic function -- a very different matter, and completely standard practice.

"The default window in Oracle is from the first row in the partition to the current one"

The windowing clause is only applicable in the presence of the order by clause. With no order by clause, no windowing clause is applied by default and none can be explicitly specified.

The code works.
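
As an illustration of the pattern being defended here (an inline view containing the analytic function, with a predicate placed on its result), here is a minimal sketch run through SQLite via Python. The table and column names are invented, and SQLite 3.25+ is assumed for window-function support:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mytable (userid INTEGER, value TEXT, my_date TEXT);
    INSERT INTO mytable VALUES
        (1, 'a', '2008-01-01'),
        (1, 'b', '2008-03-01'),
        (2, 'c', '2008-02-01'),
        (2, 'd', '2008-02-15');
""")

# The inline view computes the per-user maximum date alongside each row;
# the outer predicate keeps only rows holding that maximum.
rows = conn.execute("""
    SELECT userid, my_date, value
    FROM (SELECT userid, my_date, value,
                 MAX(my_date) OVER (PARTITION BY userid) AS max_my_date
          FROM mytable)
    WHERE my_date = max_my_date
    ORDER BY userid
""").fetchall()
print(rows)  # [(1, '2008-03-01', 'b'), (2, '2008-02-15', 'd')]
```

As discussed above, this can return several rows for a user if the maximum date appears on multiple rows.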

@Derek Mahar 2011-04-15 23:59:23

When applied to a table having 8.8 million rows, this query took half the time of the queries in some of the other highly voted answers.

@Cory Kendall 2012-03-18 00:38:02

What indexes should I use to make this query (specifically this query) go faster? I'm a bit over my head and using Oracle right now. In a table with 5.5 million rows, this query doesn't return in under 30 seconds, and I was hoping for ~100ms or less.

@Falco 2014-05-06 14:14:00

I think you'd have to use a combined index over userid and my_date, so the database can answer the query entirely from the index and only read the relevant rows.

@redolent 2015-01-10 02:35:19

Anyone care to post a link to the MySQL equivalent of this, if there is one?

@jastr 2016-06-15 19:30:35

Couldn't this return duplicates? E.g. if two rows have the same user_id and the same date (which happens to be the max).

@David Aldridge 2016-06-17 15:47:20

@jastr I think that was acknowledged in the question

@jastr 2016-06-20 17:21:46

@DavidAldridge Are you referring to "That column is likely unique"?

@MT0 2016-06-27 08:13:48

Instead of MAX(...) OVER (...) you can also use ROW_NUMBER() OVER (...) (for the top-n-per-group) or RANK() OVER (...) (for the greatest-n-per-group).

@Mat M 2018-02-14 13:56:01

Is there a way to run the inner query without having to display the max(value)? I am in the case where I don't have a where clause (I want all matching rows, and no duplicates can exist), but I would prefer not to display the max value.

@praveen 2018-10-16 07:08:07

SELECT a.userid, a.values1, b.mm
FROM table_name a
JOIN (SELECT userid, MAX(date1) AS mm FROM table_name GROUP BY userid) b
  ON a.userid = b.userid AND a.date1 = b.mm;

@einverne 2018-10-16 09:45:01

While this might answer the author's question, it lacks some explanation and links to documentation. Raw code snippets are not very helpful without some phrases around them. You may also find how to write a good answer very helpful. Please edit your answer.

@Bill Karwin 2008-09-23 20:01:21

I see many people use subqueries or else vendor-specific features to do this, but I often do this kind of query without subqueries in the following way. It uses plain, standard SQL so it should work in any brand of RDBMS.

An outer join attempts to join t1 with t2. By default, all results of t1 are returned, and if there is a match in t2, it is also returned. If there is no match in t2 for a given row of t1, then the query still returns the row of t1, and uses NULL as a placeholder for all of t2's columns. That's just how outer joins work in general.

The trick in this query is to design the join's matching condition such that t2 must match the same userid, and a greater date. The idea being if a row exists in t2 that has a greater date, then the row in t1 it's compared against can't be the greatest date for that userid. But if there is no match -- i.e. if no row exists in t2 with a greater date than the row in t1 -- we know that the row in t1 was the row with the greatest date for the given userid.

In those cases (when there's no match), the columns of t2 will be NULL -- even the columns specified in the join condition. So that's why we use WHERE t2.UserId IS NULL, because we're searching for the cases where no row was found with a greater date for the given userid.
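
The exclusion-join technique described above can be sketched like this, run through SQLite via Python with invented table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mytable (UserId INTEGER, Value TEXT, Date TEXT);
    INSERT INTO mytable VALUES
        (1, 'a', '2008-01-01'),
        (1, 'b', '2008-03-01'),
        (2, 'c', '2008-02-01'),
        (2, 'd', '2008-02-15');
""")

# t2 matches any row for the same user with a greater date; where no such
# row exists, t2's columns are NULL and t1 holds that user's latest date.
rows = conn.execute("""
    SELECT t1.UserId, t1.Value, t1.Date
    FROM mytable t1
    LEFT OUTER JOIN mytable t2
      ON t1.UserId = t2.UserId AND t1.Date < t2.Date
    WHERE t2.UserId IS NULL
    ORDER BY t1.UserId
""").fetchall()
print(rows)  # [(1, 'b', '2008-03-01'), (2, 'd', '2008-02-15')]
```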

@Justin Noel 2011-01-13 02:07:26

Wow Bill. This is the most creative solution to this problem I've seen. It is pretty performant too on my fairly large data set. This sure beats many of the other solutions I've seen or my own attempts at solving this quandary.

@Derek Mahar 2011-04-15 23:11:02

When applied to a table having 8.8 million rows, this query took almost twice as long as that in the accepted answer.

@Bill Karwin 2011-04-19 17:30:15

@Derek: Optimizations depend on the brand and version of RDBMS, as well as presence of appropriate indexes, data types, etc.

@Derek Mahar 2011-04-19 17:56:02

Bill, I ran my test on an Oracle 10 database server (the tag on the question assumes Oracle) with an index on a column analogous to UserId and a compound index that includes a column analogous to Date. Perhaps the query would take less time with an index that includes only Date.

@Jesse 2012-02-22 06:22:10

On MySQL, this kind of query appears to actually cause it to loop over the result of a Cartesian join between the tables, resulting in O(n^2) time. Using the subquery method instead reduced the query time from 2.0s to 0.003s. YMMV.

@Bill Karwin 2012-02-28 17:36:06

@Jesse: on MySQL, all joins are nested-loop joins. If you have an index on (UserId,Date) in this case, you should be able to achieve an index-only join and speed it up a great deal.

@Cory Kendall 2012-03-17 08:18:04

Is there a way to adapt this to match rows where date is the greatest date less than or equal to a user given date? For example if the user gives the date "23-OCT-2011", and the table includes rows for "24-OCT-2011", "22-OCT-2011", "20-OCT-2011", then I want to get "22-OCT-2011". Been scratching my head and reading this snippet for a while now...

@Bill Karwin 2012-03-17 16:59:26

@CoryKendall, add conditions for both t1 and t2 to the join condition: AND t1.Date <= '2011-10-23' AND t2.Date <= '2011-10-23' in addition to the other join conditions I have shown above.

@Axel Fontaine 2013-01-15 13:51:25

Replace table AS t1 by table t1 to make it work on all DBMSs, including Oracle (fails with AS).

@ADTC 2014-01-16 06:46:55

@BillKarwin "add conditions for both t1 and t2 to the join condition" -- This doesn't seem to work (incorrect results)! What I did instead was use the subquery modularization: WITH subq AS (SELECT * FROM mytable WHERE "Date" <= '2011-10-23') SELECT t1.* FROM subq t1 LEFT OUTER JOIN subq t2 ON ( [...] This works because only filtered data is provided as input to the left outer join. It also has the added advantage of providing the condition only once.
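
A runnable sketch of this WITH-based variant, through SQLite via Python (table and column names and the cutoff date are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mytable (UserId INTEGER, Value TEXT, Date TEXT);
    INSERT INTO mytable VALUES
        (1, 'a', '2008-01-01'),
        (1, 'b', '2008-03-01'),
        (2, 'c', '2008-02-01'),
        (2, 'd', '2008-02-15');
""")

# Filter to the cutoff once in the CTE, then run the same exclusion join
# against the filtered set only: latest row per user on or before the cutoff.
rows = conn.execute("""
    WITH subq AS (SELECT * FROM mytable WHERE Date <= '2008-02-10')
    SELECT t1.UserId, t1.Value, t1.Date
    FROM subq t1
    LEFT OUTER JOIN subq t2
      ON t1.UserId = t2.UserId AND t1.Date < t2.Date
    WHERE t2.UserId IS NULL
    ORDER BY t1.UserId
""").fetchall()
print(rows)  # [(1, 'a', '2008-01-01'), (2, 'c', '2008-02-01')]
```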

@Bill Karwin 2014-01-16 16:44:41

@ADTC, good solution! I work with MySQL more frequently, and MySQL doesn't support WITH expressions yet.

@ADTC 2014-01-16 17:45:58

That's really sad because the main problem with SQL is the lack of modularization, but the WITH construct somehow eases the pain by providing a basic layer of modularization. It should really be standard SQL (if it's not already). Btw, your original proposal did not seem to give the correct results in Postgres. Does it give the correct results in MySQL?

@Bill Karwin 2014-01-16 17:50:51

@ADTC, yes, the WITH construct is part of SQL:2003. MySQL development has focused for the last ~5 years on improving performance and scalability by changing code deep in the storage engines, but they have done less work on adding SQL features.

@Bill Karwin 2014-06-25 17:48:43

@David Mann 2014-06-25 18:57:02

@BillKarwin Ah sure, the outer join is an exclusion join. I guess I meant to ask if there was a name for the approach of using an exclusion join with some condition that lets one solve a 'greatest-n-per-group' problem

@Bill Karwin 2014-06-25 18:57:49

@DavidMann, oh, I don't know if this has a particular pattern name.

@danihodovic 2015-02-21 18:02:56

@Bill Karwin 2015-02-21 18:35:34

@dani-h, if t1.date > t2.date, and there are only two rows, then yes of course t2.* would return NULL. But t2 could be any row with the same userid. If t2 matches even one row with a greater date, then t2.* will return non-NULL. Only if t1 has a greater date than all rows matched by t2, does t2.* return NULL. Does that help?

@danihodovic 2015-02-21 21:14:24

@BillKarwin Thanks for attempting to explain this, but I think you've confused me even more :]. A left join is similar to a Cartesian join, yes? Meaning that all rows in t1 are mixed with all rows in t2, where the id matches. If t2.date > t1.date it returns the row in t1 joined by the row in t2. If t1.date > t2.date then there is no match on the right hand side, shouldn't it return NULL for these values as well?

@Bill Karwin 2015-02-22 09:29:34

@dani-h, Suppose you have three rows: January 1, February 1, and March 1. Suppose t1 points to February 1. You join t1 to the set of rows with a greater date, and call it t2. The first row (January 1) is not greater, so it is not in that set. Does the join therefore return NULL? No -- because the third row (March 1) is greater than t1 and is in the set of t2. Therefore t1 referencing February 1 is not the row with the greatest date. Only when t1 references March 1, and no row is found that is greater, does t2 return NULLs, and t1 is the greatest.

@frank 2015-09-07 16:06:03

@BillKarwin. I am a newbie to SQL, trying to understand the solution. I was wondering why we need a WHERE clause. Can't we put the condition directly in the ON clause, i.e. ON (t1.UserId = t2.UserId AND t1."Date" < t2."Date" AND t2.UserId IS NULL)? Can you please explain?

@Bill Karwin 2015-09-07 18:01:30

@frank, because t2.UserId is not null until after the outer join has been evaluated. Please study about outer joins.

@Jon Kloske 2016-06-06 01:53:17

This performs terribly on some RDBMSs, but I upvoted it anyway because it's a fresh and awesome way to think about the problem!

@Bill Karwin 2016-06-06 03:37:13

@JonKloske since answering this question in 2008, I have found the performance has a lot to do with the data. I.e. how many rows per distinct UserId. Anyway, it's almost always a better solution than correlated subqueries.

@Jon Kloske 2016-06-07 21:53:56

yep, very much depends on how easy it is to join with an index, too. If for example you have datetime log data and you're grouping by date(datetime), in MySQL at least that's not indexable, so it's O(n^2), which is worse than some subquery approaches; but as they're all terrible for large rowcounts anyway it doesn't matter much practically. And obviously that's not Oracle; I haven't tested that, maybe that case is bad there too.

@Jon Kloske 2016-06-07 21:57:13

(I found a very quick O(n) solution for that case in mysql that I haven't seen anywhere on SO for those type of questions that also works generally for any type of 'select max or min row' query that also makes it easy to pluck out both in the same row at no extra cost, but er, to paraphrase Fermat, the details are too big to fit in this margin!!!)

@a_horse_with_no_name 2016-08-30 08:43:44

"It uses plain, standard SQL" - window functions are standard SQL and are not "vendor specific". They have been part of the SQL standard since 2003

@Cito 2011-11-01 13:22:23

With PostgreSQL 8.4 or later, you can use this:

select user_id, user_value_1, user_value_2
from (select user_id, user_value_1, user_value_2,
             row_number() over (partition by user_id order by user_date desc) as rn
      from users) as r
where r.rn = 1

@markusk 2017-10-31 13:07:35

Use ROW_NUMBER() to assign a unique ranking on descending Date for each UserId, then filter to the first row for each UserId (i.e., ROW_NUMBER = 1).
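
A minimal sketch of that ROW_NUMBER approach, run through SQLite (3.25+) via Python with invented names. Note that ties on the maximum date are broken arbitrarily, since exactly one row per user gets rn = 1:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mytable (UserId INTEGER, Value TEXT, Date TEXT);
    INSERT INTO mytable VALUES
        (1, 'a', '2008-01-01'),
        (1, 'b', '2008-03-01'),
        (2, 'c', '2008-02-01'),
        (2, 'd', '2008-02-15');
""")

# Rank each user's rows by descending date, then keep only rank 1.
rows = conn.execute("""
    SELECT UserId, Value, Date
    FROM (SELECT UserId, Value, Date,
                 ROW_NUMBER() OVER (PARTITION BY UserId
                                    ORDER BY Date DESC) AS rn
          FROM mytable)
    WHERE rn = 1
    ORDER BY UserId
""").fetchall()
print(rows)  # [(1, 'b', '2008-03-01'), (2, 'd', '2008-02-15')]
```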

@praveen 2017-09-08 09:50:37

SELECT a.*
FROM user1 a
INNER JOIN (SELECT userid, MAX(date) AS date12 FROM user1 GROUP BY userid) b
  ON a.date = b.date12 AND a.userid = b.userid
ORDER BY a.userid;

@Natty 2017-05-11 11:18:22

Use the code:

select T.UserId, T.dt
from (select UserId, dt,
             max(dt) over (partition by UserId) as max_dt
      from t_users) T
where T.dt = T.max_dt;

This will retrieve the correct rows even if there are duplicate values for UserId. If your UserId is unique, it becomes simpler:

select UserId,max(dt) from t_users group by UserId;

@Gurwinder Singh 2017-03-26 12:07:40

In Oracle 12c+, you can use Top n queries along with analytic function rank to achieve this very concisely without subqueries:

select *
from your_table
order by rank() over (partition by user_id order by my_date desc)
fetch first 1 row with ties;

The above returns all the rows with max my_date per user.

If you want only one row with max date, then replace the rank with row_number:

select *
from your_table
order by row_number() over (partition by user_id order by my_date desc)
fetch first 1 row with ties;

@Smart003 2015-06-17 10:09:08

Check this link. If your question is similar to the one on that page, then I would suggest the following query, which gives the solution for that case:

select distinct sno, item_name,
       max(start_date) over (partition by sno),
       max(end_date) over (partition by sno),
       max(creation_date) over (partition by sno),
       max(last_modified_date) over (partition by sno)
from uniq_select_records
order by sno, item_name asc;

It will give accurate results for that case.

@Bruno Calza 2014-11-11 18:53:58

If you're using Postgres, you can use array_agg like

SELECT userid,MAX(adate),(array_agg(value ORDER BY adate DESC))[1] as value
FROM YOURTABLE
GROUP BY userid

I'm not familiar with Oracle. This is what I came up with

SELECT
userid,
MAX(adate),
SUBSTR(
(LISTAGG(value, ',') WITHIN GROUP (ORDER BY adate DESC)),
0,
INSTR((LISTAGG(value, ',') WITHIN GROUP (ORDER BY adate DESC)), ',')-1
) as value
FROM YOURTABLE
GROUP BY userid

Both queries return the same results as the accepted answer. See SQLFiddles:

@aLevelOfIndirection 2014-07-21 08:27:20

I'm quite late to the party but the following hack will outperform both correlated subqueries and any analytics function but has one restriction: values must convert to strings. So it works for dates, numbers and other strings. The code does not look good but the execution profile is great.

select
userid,
to_number(substr(max(to_char(date,'yyyymmdd') || to_char(value)), 9)) as value,
max(date) as date
from
users
group by
userid

The reason why this code works so well is that it only needs to scan the table once. It does not require any indexes and most importantly it does not need to sort the table, which most analytics functions do. Indexes will help though if you need to filter the result for a single userid.
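
The concatenation trick can be sketched through SQLite via Python (invented names; here the ISO date strings are already fixed-width and sort lexically, playing the role of to_char(date,'yyyymmdd') in the Oracle version):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (userid INTEGER, value TEXT, my_date TEXT);
    INSERT INTO users VALUES
        (1, 'a', '2008-01-01'),
        (1, 'b', '2008-03-01'),
        (2, 'c', '2008-02-01'),
        (2, 'd', '2008-02-15');
""")

# MAX over 'date || value' picks the lexically greatest string, which is the
# one with the latest date; substr then strips the 10-character date prefix.
rows = conn.execute("""
    SELECT userid,
           substr(max(my_date || value), 11) AS value,
           max(my_date) AS my_date
    FROM users
    GROUP BY userid
    ORDER BY userid
""").fetchall()
print(rows)  # [(1, 'b', '2008-03-01'), (2, 'd', '2008-02-15')]
```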

@Used_By_Already 2014-08-07 07:11:02

It is a good execution plan compared to most, but applying all those tricks to more than a few fields would be tedious and may work against it. But very interesting - thanks. See sqlfiddle.com/#!4/2749b5/23

@aLevelOfIndirection 2014-08-13 15:07:58

You are right it can become tedious, which is why this should be done only when the performance of the query requires it. Such is often the case with ETL scripts.

@Bruno Calza 2014-11-13 13:26:58

This is very nice. I did something similar using LISTAGG, but it looks ugly. Postgres has a better alternative using array_agg. See my answer :)

@Ben Lin 2013-08-30 18:36:07

Solution for MySQL, which doesn't have the concepts of partitioning, KEEP, or DENSE_RANK.

select userid,
       my_date,
       ...
from
(
  select @sno := case when @pid = userid then @sno + 1
                      else 0
                 end as serialnumber,
         @pid := userid as userid,
         my_date,
         ...
  from users
  cross join (select @sno := 0, @pid := null) init
  order by userid, my_date desc
) a
where a.serialnumber = 0

(The variables are initialized in the derived init table, and the rows are ordered by my_date desc so that serialnumber = 0 marks each user's latest row.)

@a_horse_with_no_name 2013-08-30 18:55:03

This does not work "on other DBs too". This only works on MySQL, and possibly on SQL Server because it has a similar concept of variables. It will definitely not work on Oracle, Postgres, DB2, Derby, H2, HSQLDB, Vertica, or Greenplum. Additionally, the accepted answer is standard ANSI SQL (which by now only MySQL doesn't support)

@Ben Lin 2013-09-05 16:28:38

horse, I guess you are right. I don't have knowledge about other DBs, or ANSI. My solution solves the issue in MySQL, which doesn't have proper support for ANSI SQL to solve it in the standard way.

@user11318 2008-09-23 15:47:54

I don't have Oracle to test it, but the most efficient solution is to use analytic queries. It should look something like this:

Under the hood, analytic queries sort the whole dataset, then process it sequentially. As you process it you partition the dataset according to certain criteria, and then for each row you look at some window (which defaults to the range from the first row in the partition to the current row - that default is also the most efficient) and can compute values using a number of analytic functions (the list of which is very similar to the aggregate functions).

In this case here is what the inner query does. The whole dataset is sorted by UserId then Date DESC. Then it processes it in one pass. For each row you return the UserId and the first Date seen for that UserId (since dates are sorted DESC, that's the max date). This gives you your answer with duplicated rows. Then the outer DISTINCT squashes duplicates.
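
That description corresponds to something like the following sketch, using FIRST_VALUE with DISTINCT, run through SQLite (3.25+) via Python. The table and column names are invented; this is a reconstruction of the described approach, not the original answer's code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE mytable (UserId INTEGER, Value TEXT, Date TEXT);
    INSERT INTO mytable VALUES
        (1, 'a', '2008-01-01'),
        (1, 'b', '2008-03-01'),
        (2, 'c', '2008-02-01'),
        (2, 'd', '2008-02-15');
""")

# Every row of a partition carries the same first (i.e. latest) date and
# value, so DISTINCT collapses each user to a single row.
rows = conn.execute("""
    SELECT DISTINCT UserId,
           FIRST_VALUE(Date)  OVER (PARTITION BY UserId ORDER BY Date DESC) AS Date,
           FIRST_VALUE(Value) OVER (PARTITION BY UserId ORDER BY Date DESC) AS Value
    FROM mytable
    ORDER BY UserId
""").fetchall()
print(rows)  # [(1, '2008-03-01', 'b'), (2, '2008-02-15', 'd')]
```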

This is not a particularly spectacular example of analytic queries. For a much bigger win consider taking a table of financial receipts and calculating for each user and receipt, a running total of what they paid. Analytic queries solve that efficiently. Other solutions are less efficient. Which is why they are part of the 2003 SQL standard. (Unfortunately Postgres doesn't have them yet. Grrr...)

@David Aldridge 2008-09-23 18:01:48

You also need to return the date value to answer the question completely. If that means another first_value clause then I'd suggest that the solution is more complex than it ought to be, and the analytic method based on max(date) reads better.

@user11318 2008-09-23 18:11:51

The question statement says nothing about returning the date. You can do that either by adding another FIRST(Date) or else just by querying the Date and changing the outer query to a GROUP BY. I'd use the first and expect the optimizer to calculate both in one pass.

@David Aldridge 2008-09-23 18:18:14

"The question statement says nothing about returning the date" ... yes, you're right. Sorry. But adding more FIRST_VALUE clauses would become messy pretty quickly. It's a single window sort, but if you had 20 columns to return for that row then you've written a lot of code to wade through.

@David Aldridge 2008-09-23 18:22:22

It also occurs to me that this solution is non-deterministic for data where a single userid has multiple rows that have the maximum date and different VALUEs. More a fault in the question than the answer though.

@user11318 2008-09-23 19:51:21

I agree it is painfully verbose. However isn't that generally the case with SQL? And you're right that the solution is non-deterministic. There are multiple ways to deal with ties, and sometimes each is what you want.

@Amitābha 2013-04-21 02:36:36

select UserId,max(Date) over (partition by UserId) value from users;

@Jon Heller 2013-04-21 04:05:09

This will return all rows, not just one row per user.

@nouky 2011-11-23 13:47:44

select VALUE from TABLE1 where TIME =
(select max(TIME) from TABLE1 where DATE=
(select max(DATE) from TABLE1 where CRITERIA=CRITERIA))

@wcw 2011-10-19 16:17:08

For context, on Teradata, a decent-size test of this runs in 17s with this QUALIFY version and in 23s with the 'inline view'/Aldridge solution #1.

@cartbeforehorse 2012-05-26 13:18:16

This is the best answer in my opinion. However, be careful with the rank() function in situations where there are ties. You could end up with more than one rank=1. Better to use row_number() if you really do want just one record returned.

@cartbeforehorse 2012-05-26 13:40:06

Also, be aware that the QUALIFY clause is specific to Teradata. In Oracle (at least) you have to nest your query and filter using a WHERE clause on the wrapping select statement (which probably hits performance a touch, I'd imagine).

@Truper 2010-06-29 13:45:48

Just had to write a "live" example at work :)

This one supports multiple values for UserId on the same date.

Columns:
UserId, Value, Date

SELECT DISTINCT
    UserId,
    MAX(Date) OVER (PARTITION BY UserId) AS Date,
    FIRST_VALUE(Values) OVER (PARTITION BY UserId ORDER BY Date DESC) AS Values
FROM
(
    SELECT UserId, Date, SUM(Value) As Values
    FROM <<table_name>>
    GROUP BY UserId, Date
)

You can use FIRST_VALUE instead of MAX and look it up in the explain plan. I didn't have the time to play with it.

Of course, if searching through huge tables, it's probably better if you use FULL hints in your query.

@Mauro 2010-05-02 15:12:43

Just tested this and it seems to work on a logging table

select ColumnNames, max(DateColumn) from log group by ColumnNames order by 1 desc

@Guus 2010-04-28 17:04:23

The answer here is Oracle only. Here's a somewhat more sophisticated answer in standard SQL:

Who has the best overall homework result (maximum sum of homework points)?

SELECT FIRST, LAST, SUM(POINTS) AS TOTAL
FROM STUDENTS S, RESULTS R
WHERE S.SID = R.SID AND R.CAT = 'H'
GROUP BY S.SID, FIRST, LAST
HAVING SUM(POINTS) >= ALL (SELECT SUM (POINTS)
FROM RESULTS
WHERE CAT = 'H'
GROUP BY SID)

And a more difficult example, which needs some explanation, for which I don't have time atm:

Give the book (ISBN and title) that is most popular in 2008, i.e., which is borrowed most often in 2008.

SELECT X.ISBN, X.title, X.loans
FROM (SELECT Book.ISBN, Book.title, count(Loan.dateTimeOut) AS loans
FROM CatalogEntry Book
LEFT JOIN BookOnShelf Copy
ON Book.bookId = Copy.bookId
LEFT JOIN (SELECT * FROM Loan WHERE YEAR(Loan.dateTimeOut) = 2008) Loan
ON Copy.copyId = Loan.copyId
GROUP BY Book.ISBN, Book.title) X
HAVING loans >= ALL (SELECT count(Loan.dateTimeOut) AS loans
FROM CatalogEntry Book
LEFT JOIN BookOnShelf Copy
ON Book.bookId = Copy.bookId
LEFT JOIN (SELECT * FROM Loan WHERE YEAR(Loan.dateTimeOut) = 2008) Loan
ON Copy.copyId = Loan.copyId
GROUP BY Book.title);

Hope this helps (anyone).. :)

Regards,
Guus

@a_horse_with_no_name 2014-12-07 08:08:08

The accepted answer is not "Oracle only" - it's standard SQL (supported by many DBMS)

@na43251 2010-02-24 17:07:28

This will also take care of duplicates (return one row for each user_id):

@Mike Woodhouse 2008-09-23 20:06:29

Not being at work, I don't have Oracle to hand, but I seem to recall that Oracle allows multiple columns to be matched in an IN clause, which should at least avoid the options that use a correlated subquery, which is seldom a good idea.

Something like this, perhaps (can't remember if the column list should be parenthesised or not):

SELECT *
FROM MyTable
WHERE (User, Date) IN
( SELECT User, MAX(Date) FROM MyTable GROUP BY User)
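
SQLite (3.15+) also accepts row values in an IN clause, so the idea can be checked through Python with invented names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE MyTable (User INTEGER, Value TEXT, Date TEXT);
    INSERT INTO MyTable VALUES
        (1, 'a', '2008-01-01'),
        (1, 'b', '2008-03-01'),
        (2, 'c', '2008-02-01'),
        (2, 'd', '2008-02-15');
""")

# The (User, Date) pair must match a (User, MAX(Date)) pair from the
# grouped subquery -- i.e. the row holding each user's latest date.
rows = conn.execute("""
    SELECT *
    FROM MyTable
    WHERE (User, Date) IN
          (SELECT User, MAX(Date) FROM MyTable GROUP BY User)
    ORDER BY User
""").fetchall()
print(rows)  # [(1, 'b', '2008-03-01'), (2, 'd', '2008-02-15')]
```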

@Dave Costa 2008-09-23 15:18:24

SELECT userid, MAX(value) KEEP (DENSE_RANK FIRST ORDER BY date DESC)
FROM table
GROUP BY userid

@Derek Mahar 2011-04-15 23:16:26

In my tests using a table having a large number of rows, this solution took about twice as long as that in the accepted answer.

@Rob van Wijk 2012-04-09 07:32:32

Show your test, please

@tamersalama 2012-09-12 01:02:07

I confirm it's much faster than other solutions

@Used_By_Already 2014-08-07 07:03:23

The trouble is it does not return the full record.

@Dave Costa 2014-08-07 19:54:33

@user2067753 No, it doesn't return the full record. You can use the same MAX()..KEEP.. expression on multiple columns, so you can select all the columns you need. But it is inconvenient if you want a large number of columns and would prefer to use SELECT *.

@KyleLanser 2008-09-23 15:17:59

On my first try I misread the question; following the top answer, here is a complete example with correct results: