Tuesday, September 15, 2009

Shlomi's trick reminds me somewhat of the trick I came across a little over a year ago to calculate percentiles. At that time, several people pointed out to me that using user-defined variables in this way can be unreliable.

The problem with user-defined variables

So what is the problem exactly? Well, whenever a query assigns to a variable, and that same variable is read in another part of the query, you're on thin ice. That's because the result of the read is likely to differ depending on whether the assignment took place before or after the read. Not surprising when you think about it - the whole point of variable assignment is to change its value, which by definition causes a different result when subsequently reading the variable (unless you assign the value it already had, of course, duh...).

Now read that previous statement carefully - the word subsequently is all-important.

See, that's the problem. The semantics of a SQL SELECT statement are to describe a (tabular) result set - not to specify an algorithm for constructing that result set. It is the job of the RDBMS to figure out an algorithm, and thus you can't be sure in what order individual expressions (including variable evaluation and assignment) are executed.

The MySQL manual states it like this:

The order of evaluation for user variables is undefined and may change based on the elements contained within a given query. In SELECT @a, @a := @a+1 ..., you might think that MySQL will evaluate @a first and then do an assignment second, but changing the query (for example, by adding a GROUP BY, HAVING, or ORDER BY clause) may change the order of evaluation.

The general rule is never to assign a value to a user variable in one part of a statement and use the same variable in some other part of the same statement. You might get the results you expect, but this is not guaranteed.

So what good are these variables anyway?

On the one hand, this looks really lame: can't MySQL just figure out the correct order of doing the calculations? Well, that is one way of looking at it. But there is an equally valid reason not to do that. If the calculations were allowed to influence execution order, it would drastically reduce the number of ways available to optimize the statement.

This raises the question: Why is it possible at all to assign values to user-defined variables? The answer is quite simple: you can use them to pass values between statements. My hunch is the variables were created in the olden days to overcome some limitations resulting from the lack of support for subqueries. Having variables at least enables you to execute a query and store the result temporarily for use in a subsequent statement. For example, to find the student with the highest score, you can do:
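The original listing is missing from this copy; a minimal sketch of the two-statement pattern, assuming a hypothetical students table with student_id, name, and score columns:

```sql
-- Statement 1: compute the highest score and store it in a variable.
SET @max_score := (SELECT MAX(score) FROM students);

-- Statement 2: read the variable in a separate statement.
-- No single statement both assigns and reads @max_score.
SELECT student_id, name, score
FROM students
WHERE score = @max_score;
```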

There is nothing wrong with this approach - problems start arising only when reading and writing the same variable in one and the same statement.

Another way - serializing the set with GROUP_CONCAT

Anyway, the percentile post I just linked to contains another solution for that problem that relies on GROUP_CONCAT. It turns out we can use the same trick here.

(Some people may like to point out that using GROUP_CONCAT is not without issues either, because it may truncate the list in case the pre-assigned string buffer is not large enough. I wrote about dealing with that limitation in several places, and I still recommend setting the group_concat_max_len server variable to the value of the max_allowed_packet server variable, like so:

SET @@group_concat_max_len := @@max_allowed_packet;

)

The best way to understand how it works is to think of the problem in a few steps. First, we make an ordered list of all the values we want to rank. We can do this with GROUP_CONCAT like this:
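The listing is missing from this copy; a reconstruction along the lines described, assuming a scores table with a score column:

```sql
-- Serialize all distinct score values into one comma-separated
-- string, ordered from high to low. DISTINCT removes duplicates;
-- ORDER BY ... DESC makes each value's position equal its rank.
SELECT GROUP_CONCAT(DISTINCT score ORDER BY score DESC) AS score_list
FROM scores;
```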

Now that we have this list, we can use the FIND_IN_SET function to look up the position of any particular value contained in the list. Because the list is ordered in descending order (due to the ORDER BY ... DESC), and contains only unique values (due to the DISTINCT), this position is in fact the rank number. For example, if we want to know the rank of all scores with the value 92, we can do:
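The listing is missing here; a sketch with a made-up list of scores:

```sql
-- FIND_IN_SET returns the 1-based position of '92' in the
-- descending, duplicate-free list. With this hypothetical list,
-- 92 is the third-highest distinct score, so the result is 3.
SELECT FIND_IN_SET(92, '98,95,92,91,90') AS rank;
```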

(If you're wondering how it's possible that we can pass the integer 92 as first argument for FIND_IN_SET: the function expects string arguments, and automatically converts whichever non-string typed value we pass to a string. In the case of the integer 92, it is silently converted to the string '92')

Of course, we aren't really interested in looking up ranks for individual numbers one at a time; rather, we'd like to combine this with a query on the scores table that does it for us. Likewise, we don't really want to manually supply the list of values as a string constant; we want to substitute that with the query we wrote to generate that list. So we get:
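The combined query is missing from this copy; a reconstruction along the lines described (scores table assumed):

```sql
-- Rank every row by looking up its score in the serialized list.
-- Note: MySQL may re-evaluate the scalar subquery per row, which
-- is exactly what the performance discussion in the comments is about.
SELECT score
     , FIND_IN_SET(
         score
       , (SELECT GROUP_CONCAT(DISTINCT score ORDER BY score DESC)
          FROM scores)
       ) AS rank
FROM scores;
```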

(In case you're wondering why the results are different: the result set for Shlomi's solution is necessarily ordered by ascending rank (or descending amount - same difference). To obtain the identical result, you need to add an ORDER BY clause to my query. But since the point was to calculate the ranks, I didn't bother. Of course, adding an ORDER BY could slow things down even more.)

Quite a bit slower, bummer. But at least we can't run into nasties with the user variables anymore. For this data set, I get about the same performance with the CROSS JOIN, but I should warn that I did not do a real benchmark.

Conclusion

Don't fall into the trap of reading and writing the same user-defined variable in the same statement. Although it seems like a great device and can give you very good performance, you cannot really control the order of reads and writes. Even if you can, you must check it again whenever you have reason to believe the query will be executed differently by the server. This is of course the case whenever you upgrade the server. But seemingly harmless changes, like adding an index to a table, may also change the order of execution.

In almost all cases where people want to read and write the same user variable within a single query, they are dealing with a kind of serialization problem: they are trying to maintain state in a variable in order to use it across rows. In many cases, the right way to do that is to use a self-join. But this may not always be feasible, as pointed out in Shlomi's original post. For example, rewriting the payment rank query using a self-join is not going to make you happy.
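For concreteness, the classic self-join ranking of the payment example might look like this (a sketch, assuming the sakila.payment table from the comments):

```sql
-- A row's rank is 1 + the number of distinct higher amounts.
-- Correct, but the join is semi-cartesian: low amounts pair with
-- many higher rows, which is what makes this slow on big tables.
SELECT p.payment_id
     , p.amount
     , COUNT(DISTINCT higher.amount) + 1 AS rank
FROM sakila.payment p
LEFT JOIN sakila.payment higher
       ON higher.amount > p.amount
GROUP BY p.payment_id, p.amount;
```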

Often, there is a way out. You can use GROUP_CONCAT to serialize a set of rows. Granted, you need at least one pass for that, and another one to do something useful with the result, but this is still a lot better than dealing with semi-cartesian self-join issues.

43 comments:

Oooh! That's the problem I had with user variables. I had written a query that used user variables to do a version of limit, but for a calculated value rather than a number of rows. It seemed to work great. I enthusiastically benchmarked it against a big table and it stopped working correctly. This is why.

Great work! Indeed, after having the issue with user variables pointed out to me, I realized I had already run into it a few weeks ago while doing my SQL charting code, which made me write a lot of sub-sub-sub queries to manage the order.

However, I've noticed a place for improvement in your solution: please note that you re-evaluate the GROUP_CONCAT per row; this can be evaluated just once for the entire query:[pre]SELECT payment_id, amount, FIND_IN_SET(amount, @values_set) as rank
FROM sakila.payment,
  (SELECT @values_set := GROUP_CONCAT(DISTINCT amount ORDER BY amount DESC)
   FROM sakila.payment) AS sel1;[/pre]

I've found this to reduce query time from approx. 0.10-0.11 seconds to 0.05-0.08 seconds, which is up to 50% time savings. I'm surprised (for the better) that FIND_IN_SET works so quickly. I'll remember that.

@water outbreaks: you aren't the first one to get bitten. It is actually pretty counterintuitive, and I wish we'd get a warning from MySQL. Anyway, now you know why.

@Shlomi: thanks! Well, thank you for the inspiration. It turns out one can hack up quite a good deal of these analytic-ish queries.

Anyway, nice you found out that the JOIN is indeed faster. Perhaps it will be the same in 5.4.

I would like to point out one thing concerning your solution. I know that in this case, it is probably safe to do the assignment and the reading in the same query. But if you really want to use the user variable and be on the safe side, you really should use two statements.

Now, the nice thing is, you do not have to suffer the cost of an extra roundtrip, because you can batch the statements. It works like this:
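The batch itself is missing from this copy; it would look something along these lines (assuming the sakila.payment example from the surrounding comments):

```sql
-- Sent together as one batch (with multi-statement support enabled
-- in the client). The assignment finishes before the SELECT starts,
-- so no single statement both reads and writes @values_set.
SET @values_set := (
  SELECT GROUP_CONCAT(DISTINCT amount ORDER BY amount DESC)
  FROM sakila.payment
);
SELECT payment_id
     , amount
     , FIND_IN_SET(amount, @values_set) AS rank
FROM sakila.payment;
```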

The JOIN is faster because the subquery needs to be recalculated over and over. As I understand it, this is perfectly correct; so I'm not sure 5.4 or 6.0 should or will change this. I may be wrong on this.

The subquery is scalar and in no way dependent on the outer query. The optimizer does not have to be particularly smart to deduce that the result simply cannot change. It is kind of like a deterministic function with static arguments. It should be evaluated just once, and the cached result should be recycled.

I thought MySQL 5.4 was introducing more clever subquery handling, and I was hoping this was on the menu. I could be wrong about that, but I am pretty sure there is no logical reason why the subquery should evaluate the query for each row of the outer query.

Roland: How would you go about ranking 5 different fields simultaneously? More specifically, I want to rank sales by rep across 5 different periods (last 60 days, last 30, last 14, last 7, and last 1). Can this method be used to do such a thing efficiently?

Sean, please post a few lines of table data, and a few lines of desired output. I am having trouble understanding what you mean exactly, but perhaps I can give it a go if you give me a concrete example. TIA, roland.

The idea is to see which reps rank highest in sales ("amount") over different time windows. If a rep with a high 60 day rank has a low 14 day rank, they are "slipping", whereas a low 60 day rank and a high 14 day rank points to a "rising star".

I am currently calculating the sales amounts for each window in a possibly ham-fisted way, like so:

[pre]SELECT user_id,
  SUM(CASE WHEN created_at between date_sub(curdate(),interval 60 day) and curdate() THEN 1 ELSE 0 END * amount) AS amount_L60D,
  SUM(CASE WHEN created_at between date_sub(curdate(),interval 30 day) and curdate() THEN 1 ELSE 0 END * amount) AS amount_L30D,
  SUM(CASE WHEN created_at between date_sub(curdate(),interval 14 day) and curdate() THEN 1 ELSE 0 END * amount) AS amount_L14D,
  SUM(CASE WHEN created_at between date_sub(curdate(),interval 7 day) and curdate() THEN 1 ELSE 0 END * amount) AS amount_L7D,
  SUM(CASE WHEN created_at between date_sub(curdate(),interval 1 day) and curdate() THEN 1 ELSE 0 END * amount) AS amount_L1D
FROM transactions
WHERE created_at between date_sub(curdate(),interval 60 day) and curdate()
GROUP BY user_id[/pre]

I can stuff the result into a temp table and then run your ranking code (iteratively) over it, but would love a perspective on a better way to do it. Analytical queries like this seem to be your power alley, after all.

However, my team wants the Rank to work more like it does in Golf or the Olympics, where you actually skip rank values for people who are tied. In your version above, two folks are tied with Rank=3 and another person follows with Rank=4. What we need is for those 2 people to be tied with Rank=3 and the next person to have Rank=5. (There would be no Rank=4.) Make sense? I'm thinking the only way to pull this off might be those user variables... Any thoughts?
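One possible variation of the article's query that produces this Olympic-style ranking (an untested sketch, scores table assumed): drop the DISTINCT so duplicates stay in the list. FIND_IN_SET returns the position of the first match, so two tied scores both get the position of the first occurrence, and the next score lands past the gap.

```sql
-- No DISTINCT: duplicates remain in the list. For scores 10,9,9,8
-- the list is '10,9,9,8', so both 9s rank 2 and 8 ranks 4.
SELECT score
     , FIND_IN_SET(
         score
       , (SELECT GROUP_CONCAT(score ORDER BY score DESC)
          FROM scores)
       ) AS rank
FROM scores;
```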

My solution outperforms all of yours: I'm using a variable to compare with the previous score in the dataset and then increase the rank or not (in case of the same score)

SET @rank=0;
SET @previous_score=0;
SELECT CASE WHEN @previous_score<>score THEN @rank:=@rank+1 END AS temp1,
CASE WHEN @previous_score<>score THEN @previous_score:=score ELSE @previous_score:=score END AS temp2,

thanks for sharing this. If you read the first part of the article, the whole point of this article was to develop a method that does not rely on user-defined variables.

I am well aware of the performance gains that can be achieved if you do use user-defined variables, and I have written several articles on this blog that do use this device. However, as I outlined in this article, and as is also documented in the MySQL Reference Manual, user-defined variables are not reliable if you read and assign them within the same statement.

Thank you Roland for your answer. My bad, I didn't read the whole article... I did some testing and the results I get are valid. Do you think that this method isn't reliable just because the MySQL documentation states it? Or do you have proof that it is really not reliable?

The problem with user-defined variables is that there is no way to explicitly control the order in which the expressions that change the values of the user-defined variables are executed.

In your particular example, you define an explicit ORDER BY clause, and you may think this takes care of the problem. However, the ORDER BY clause only requires that the result is returned in order, not that the actual operations are executed in order too. If you add/drop indexes that change the access pattern, you may find unexpected results.

Along the same lines... Here's my problem you may be able to help me with...

The premise: I manage a web site, http://SaveOregonWrestling.org, and I'm adding a page (Fight Ladder) to list donors who specify their donation for a specific weight class. NCAA wrestling has 10 weight classes, and I want the top 3 donors listed in each weight class, ranked by their donation amount. I am trying to implement a MySQL query on a single table:

table = FightLadder
columns = ID, PublishName, WeightClass, Total

Below is the current PHP code I'm using, which works as long as there are only 30 rows in the table. The moment I add row 31, the results are wack. E.g., a donor who donated in weight class 149 gets moved up to weight class 141 in the list!! : ( I also get a stream of MySQL warning messages about moving to the next row, unless I add the @ in front of my var code, e.g. $champ1a = @mysql_result($result, 0, 0);

Good day sir, I have a problem regarding my ranking script. I already applied your tutorial, but unfortunately, if there is a tie on the average, it should look at other columns to compare as a tie breaker. Thank you very much in advance.

I implemented this strategy for ranking today and it works great! Unfortunately, once I tried it with our live dataset I noticed some warnings during execution. As mentioned by yourself and Shlomi, there is a memory limit, and I seem to be hitting that limit even after setting group_concat_max_len to max_allowed_packet, which is 16M.

Warning | 1260 | 9 line(s) were cut by GROUP_CONCAT()

It makes sense this would happen, because we are creating a giant string in memory that is very dependent on the size of the dataset. I'm afraid this solution may not be very scalable :/ As a gauge, our dataset was a little over 250,000 rows.

The solution we ended up with uses a rownum technique. The query takes 1.5-2 seconds on our dataset of 250,000.
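The comment doesn't show the query; a typical rownum-style ranking looks roughly like this (a hypothetical sketch - note it still assigns and reads @rownum inside one statement, the very pattern discussed in the article, which is also what the reply below warns about):

```sql
-- Number the distinct amounts once in a derived table, then join
-- the numbered list back to the detail rows.
SELECT p.payment_id
     , p.amount
     , ranked.rank
FROM sakila.payment p
JOIN (SELECT amount
           , @rownum := @rownum + 1 AS rank
      FROM (SELECT DISTINCT amount
            FROM sakila.payment
            ORDER BY amount DESC) AS distinct_amounts
         , (SELECT @rownum := 0) AS init
     ) AS ranked
  ON ranked.amount = p.amount;
```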

@Keith, yes, when you're in a situation where you can use this trick, then that is the fastest solution. But you always have to be aware of this documented limitation:

"As a general rule, you should never assign a value to a user variable and read the value within the same statement. You might get the results you expect, but this is not guaranteed. The order of evaluation for expressions involving user variables is undefined and may change based on the elements contained within a given statement; in addition, this order is not guaranteed to be the same between releases of the MySQL Server. In SELECT @a, @a:=@a+1, ..., you might think that MySQL will evaluate @a first and then do an assignment second. However, changing the statement (for example, by adding a GROUP BY, HAVING, or ORDER BY clause) may cause MySQL to select an execution plan with a different order of evaluation."

http://dev.mysql.com/doc/refman/5.5/en/user-variables.html

In my experience, the ranking result can become unstable after adding an index, joining to other tables, adding an order by or adding a specific group by clause. Technically it's possible that a server upgrade could change the results as well.

I cannot get the INNER JOIN to work as it tells me I have a mysterious syntax error at the INNER JOIN. I can, however, set the @rank variable before the ranking query. Any issues with doing it this way?

I'm trying to wrap my head around this. I'm not sure if I'm misunderstanding it, or a few different problems are getting mixed up. I think the problem that the MySQL docs refer to is not the same as the problem that you run into with ranking.

@a:=@a+1 is a single statement and not vulnerable to execution order changes. However, if you have that, and then do an if() comparison, you might not get what you expect, and that is what the MySQL doc is referring to.

The problem that you run into when ranking, is that the field selection (SELECT @a:=@a+1) does not need to happen row-by-row after it's been ordered. In one example I have, if I order a specific column it works correctly, but if I try to use a function it fails (probably doing the calculation before the ordering).

As Roland points out, if you use a subquery for the order statement, and do the count in an outer query, I think you should be safe. So far I haven't been able to see a problem, but I would welcome further testing or info about what might mess it up.


About Me

I'm @rolandbouman, a Web and BI Developer and Information Analyst. I have worked for MySQL AB and Sun Microsystems, and I'm currently working as a software engineer for Pentaho (a Hitachi Data Systems company).

Together with Jos van Dongen I wrote a book called "Pentaho Solutions" (Wiley, ISBN: 978-0-470-48432-6, 630+ pages). This book is intended for people that want to get started with Business Intelligence and provides lots of practical examples to work with the open source Pentaho Business Intelligence Suite.

Together with Matt Casters and Jos van Dongen, I authored another book for Wiley called "Pentaho Kettle Solutions" (750+ pages, Wiley, ISBN: 978-0-470-63517-9). This book is more specialized and focuses on Pentaho data integration (Kettle) and ETL.