Comment

It's not a mystery. You get that message because the value 2007 is missing from the primary index. Since there is no entry in the primary index, the optimizer knows the query can never match a row, and it reports "Impossible WHERE noticed after reading const tables".

However, it is legitimate for this primary-key value to be absent. Furthermore, it's really not a good error, because a row could be inserted later that would make the same query succeed.
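For concreteness, here is a minimal sketch of the situation being described. The table and column names are made up, since the actual schema isn't shown in the thread:

```sql
-- Hypothetical schema; the real table isn't shown in the thread.
CREATE TABLE events (
  year INT NOT NULL PRIMARY KEY,
  name VARCHAR(64)
) ENGINE=InnoDB;

-- No row with year = 2007 exists, so the optimizer can prove during
-- planning that the WHERE clause can never match:
EXPLAIN SELECT name FROM events WHERE year = 2007;
-- Extra: "Impossible WHERE noticed after reading const tables"
```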

Comment

Well, it appears that you are misinformed, because this is an InnoDB table, although in rare cases it may have only one row in it. I still believe what I read is true: it's based on the primary key, as I said before. Other unrelated queries in the slow log support that as well.

As for the rest, if these legitimate queries can't be kept out of the log in the first place, then I might as well turn off all logging except for queries that really take a long time. The log just isn't very useful if it flags good queries as bad ones.

Comment

You really have to run a test matrix covering all the outcomes to be certain of something.

A unique index (like a primary key) will always give the error you are describing.
My guess is that if you have a normalized database and you are asking for something by its primary key, it _should_ be there.
And if it is not, then something is wrong, hence the error.

But if you use a _normal_ (non-unique) index, things are a bit different:
* With fewer than 2 rows in the table:
MyISAM will always report "Impossible WHERE noticed ..."
InnoDB will only report "Using where".
(Which I actually think is a bug, because it also reports 1 row even though the table is empty.)

* With 2 rows or more, neither engine reports anything.
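The matrix above can be sketched in SQL like this. Table names are hypothetical; the Extra-column values are the ones reported in this thread:

```sql
-- Unique (primary) key: a const lookup that finds nothing always
-- produces the note, per the observations above.
CREATE TABLE t_pk (id INT PRIMARY KEY) ENGINE=InnoDB;
EXPLAIN SELECT * FROM t_pk WHERE id = 2007;
-- Extra: "Impossible WHERE noticed after reading const tables"

-- Normal (non-unique) index, fewer than 2 rows:
CREATE TABLE t_idx (id INT, KEY (id)) ENGINE=MyISAM;
EXPLAIN SELECT * FROM t_idx WHERE id = 2007;
-- MyISAM: "Impossible WHERE noticed ..."
-- The same table with ENGINE=InnoDB reports only "Using where".

-- With 2 rows or more, neither engine adds a note:
INSERT INTO t_idx VALUES (1), (2);
EXPLAIN SELECT * FROM t_idx WHERE id = 2007;
```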

Regarding what is useful in the log file and what is not, it is of course up to you to decide.

But I think more information is better than no information.
It is easy to filter things out afterwards, while it is impossible to find out what happened if it wasn't logged in the first place.

Try running mysqldumpslow on your current log file with the "Impossible WHERE noticed" entries still in it, and see how much they affect what you see.
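As a sketch of how cheap after-the-fact filtering is: the snippet below fabricates a two-entry excerpt in (roughly) MySQL's slow-log format and keeps only entries that actually examined rows. The file path, queries, and numbers are all made up for illustration.

```shell
# Fabricated slow-log excerpt, roughly in MySQL's slow-log format.
cat > /tmp/slow_excerpt.log <<'EOF'
# Query_time: 0.000123  Lock_time: 0.000010 Rows_sent: 0  Rows_examined: 0
SELECT * FROM t WHERE id = 2007;
# Query_time: 12.345678  Lock_time: 0.000020 Rows_sent: 1  Rows_examined: 500000
SELECT * FROM big_table WHERE col LIKE '%x%';
EOF

# Keep only entries that examined at least one row; the "impossible
# WHERE" lookups examine none, so they drop out along with their header.
grep -A1 'Rows_examined: [1-9]' /tmp/slow_excerpt.log
```

mysqldumpslow itself goes further by abstracting and grouping similar statements, but even a one-line grep shows how little work the filtering takes.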

Comment

My guess is that if you have a normalized database and you are asking for something using the primary key it _should_ be there.

The database is normalized, but there is no guarantee this row of data exists. The whole point of the query is to test if this data exists and react accordingly. The non-existence of this row does not indicate an error.

While it may not be up to me to say what should or should not be logged, I certainly believe there should be more granularity in the configuration file: I should be able to turn off logging of this type of error.

I want the slow-query log to log problems, not grow to gigabyte size with normal queries or require filtering. As a user, I'm certainly entitled to judge whether something is useful, since it's my point of view, and the current implementation is not very useful to me.