Granted, you have 11 million rows in the table, but on average, how many entries are actually updated in any given day? Or is 11 million the number of rows that ARE being updated? If it's 11 million updates per day, I would go with a summary table.
–
DRappDec 14 '10 at 16:15

The engine is MyISAM. A summary table won't work for this: the data is changing and the queries could be anything (down to the second).
–
andersonbd1Dec 16 '10 at 17:25

Counting 1,222,222.2 rows per second. Perhaps I'm just a poor, disadvantaged webpage scripter, but I'd take those results in a heartbeat.
–
bob-the-destroyerDec 17 '10 at 5:24

1

Can you share some context for this query with us? It seems like you are looking for too big an improvement to achieve via simple "tuning" techniques. You need to exploit the table's usage patterns in order to go sub-second with this.
–
RonnisDec 18 '10 at 12:58

10 Answers
10

If MySQL has to count 11M rows, there really isn't much of a way to speed up a simple count, at least not to sub-second speed. You should rethink how you do your count. A few ideas:

Add an auto-increment field to the table. It looks like you don't delete from the table, so you can use simple math to find the record count: select the minimum auto-increment value at the earlier date and the maximum at the later date, and subtract one from the other to get the record count. For example:
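A minimal sketch of the auto-increment arithmetic. The table and column names (`records`, `id`, `date_updated`) are assumptions for illustration, not taken from the original question, and the result is only accurate if rows are never deleted (and, as a commenter notes below, auto-increment values are not guaranteed to be consecutive):

```sql
-- Approximate the row count in a date range via auto-increment arithmetic.
-- Assumes `id` is AUTO_INCREMENT and no rows are ever deleted; any gaps
-- in the id sequence will skew the result.
SELECT (SELECT MAX(id) FROM records)
     - (SELECT MIN(id) FROM records
         WHERE date_updated > '2009-10-11 15:33:22')
     + 1 AS approx_count;
```

This reads only two index endpoints instead of scanning millions of rows, which is why it is fast.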

Create another table summarizing the record count for each day. Then you can query that table for the total records; there would only be 365 rows per year. If you need finer-grained times, query the summary table for the full days and the base table for just the record counts of the start and end days, then add them all together.
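A sketch of the daily summary approach, using hypothetical names (`daily_counts`, `records`, `date_updated`) that are not from the original question:

```sql
-- One row per day; populated once per day (e.g. from a cron job).
CREATE TABLE daily_counts (
    day       DATE NOT NULL PRIMARY KEY,
    row_count INT  NOT NULL
);

-- Insert yesterday's count:
INSERT INTO daily_counts (day, row_count)
SELECT DATE(date_updated), COUNT(*)
  FROM records
 WHERE date_updated >= CURDATE() - INTERVAL 1 DAY
   AND date_updated <  CURDATE()
 GROUP BY DATE(date_updated);

-- Range count: full days from the summary, plus the partial first day
-- scanned from the base table.
SELECT (SELECT COALESCE(SUM(row_count), 0)
          FROM daily_counts
         WHERE day > '2009-10-11')
     + (SELECT COUNT(*)
          FROM records
         WHERE date_updated > '2009-10-11 15:33:22'
           AND date_updated < '2009-10-12') AS total;
```

The expensive scan is now limited to at most one day's worth of rows at each edge of the range.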

If the data isn't changing (and it doesn't seem to be), summary tables will be easy to maintain and update, and they will significantly speed things up.

+1 for the summary table suggestion. In this case, you may choose to denormalize some smaller amount of information that can be used to generate the number you need. Just be very careful about properly maintaining the redundant data.
–
RandyDec 12 '10 at 3:17

summary is OK, auto-increment is not: it is not guaranteed to be consecutive
–
UnreasonDec 14 '10 at 16:41

sorry it's taken me a while to respond, I've been out of town. Those are good suggestions, but they won't work for me, as the data is updated and the queries could span any date to any other date (by the second). I'm currently investigating loading the index into memory.
–
andersonbd1Dec 16 '10 at 16:06

summary tables only work if the data set itself and the desired set of fields to calculate against never change. And as everyone knows, requirements always change.
–
bob-the-destroyerDec 17 '10 at 5:27

1

@anderson can you update the summaries when you update the base data? The 'by the second' queries are no problem if you can keep the summaries in sync (see my answer below)
–
user533832Dec 18 '10 at 14:48

Another thing you can do, if you don't mind changing the structure of the table, is to store the timestamp as an INT instead of DATETIME; it might be even faster.
If you decide to do so, the query would be:
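A hedged sketch of what that query might look like, assuming the column is converted to an INT Unix timestamp (table and column names are illustrative):

```sql
-- With `date_updated` stored as an INT Unix timestamp, the range predicate
-- becomes a plain integer comparison:
SELECT COUNT(*)
  FROM records
 WHERE date_updated > UNIX_TIMESTAMP('2009-10-11 15:33:22');
```

Comparing 4-byte integers is cheaper than comparing DATETIME values, and the index entries are smaller, so more of the index fits in memory.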

If the historical data is not volatile, create a summary table. There are various approaches; which one to choose will depend on how your table is updated, and how often.

For example, assuming old data is rarely/never changed, but recent data is, create a monthly summary table, populated for the previous month at the end of each month (eg insert January's count at the end of February). Once you have your summary table, you can add up the full months and the part months at the beginning and end of the range:
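A sketch of the monthly variant described above, again with illustrative names (`monthly_counts`, `records`, `date_updated`) that are assumptions:

```sql
-- One row per month, populated for the previous month at month end.
CREATE TABLE monthly_counts (
    month     DATE NOT NULL PRIMARY KEY,  -- first day of the month
    row_count INT  NOT NULL
);

-- Range count: full months from the summary table, plus the partial
-- months at each end scanned from the base table.
SELECT (SELECT COALESCE(SUM(row_count), 0)
          FROM monthly_counts
         WHERE month >= '2009-11-01' AND month < '2009-12-01')
     + (SELECT COUNT(*)
          FROM records
         WHERE date_updated > '2009-10-11 15:33:22'
           AND date_updated < '2009-11-01') AS total;
```

Only the ragged edges of the range touch the big table; everything else is a handful of summary rows.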

MySQL doesn't "optimize" COUNT(*) queries in InnoDB because of versioning. Every item in the index has to be iterated over and checked to make sure that its version is visible (e.g., not part of an open, uncommitted transaction). Since any of your data can be modified across the database, ranged selects and caching won't work. However, you can possibly get by using triggers. There are two methods to this madness.

The first method risks slowing down your transactions, since none of them can truly run in parallel: use AFTER INSERT and AFTER DELETE triggers to increment/decrement a counter table. The second trick: use those insert/delete triggers to call a stored procedure that feeds an external program, which similarly adjusts the value up and down, or acts on a non-transactional table. Beware that in the event of a rollback, this will result in inaccurate numbers.
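The first (trigger-maintained counter) method can be sketched as follows; `records` and `row_counter` are hypothetical names, and single-statement triggers like these need no DELIMITER change:

```sql
-- Counter table, seeded once with the current count.
CREATE TABLE row_counter (total INT NOT NULL);
INSERT INTO row_counter SELECT COUNT(*) FROM records;

-- Keep it in sync on every insert and delete.
CREATE TRIGGER records_after_insert AFTER INSERT ON records
FOR EACH ROW UPDATE row_counter SET total = total + 1;

CREATE TRIGGER records_after_delete AFTER DELETE ON records
FOR EACH ROW UPDATE row_counter SET total = total - 1;

-- The count is now a single-row read:
SELECT total FROM row_counter;
```

Note the serialization cost mentioned above: every writing transaction must lock the single `row_counter` row, so writers can no longer run fully in parallel.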

If you need to return the table's total row count, there is an alternative to the SELECT COUNT(*) statement. SELECT COUNT(*) performs a full table scan to return the row count, so it can take a long time. You can use the sysindexes system table (SQL Server) instead in this case. There is a ROWS column in the sysindexes table; it contains the total row count for each table in your database. So you can use the following SELECT statement instead of SELECT COUNT(*):
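A sketch of the classic sysindexes lookup on SQL Server; `records` is an illustrative table name, and as the comments below note, the value is an estimate that depends on how current the statistics are:

```sql
-- SQL Server: read the cached row count instead of scanning the table.
-- indid < 2 selects the heap (0) or clustered index (1) row for the table.
SELECT rows
  FROM sysindexes
 WHERE id = OBJECT_ID('records')
   AND indid < 2;
```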

@zedo How would this handle where date_updated > '2009-10-11 15:33:22' even on SQL Server?
–
user533832Dec 13 '10 at 14:20

I would just add another AND to the WHERE clause. However, I am not certain that the timestamp in sysindexes would coincide with the actual table you are looking at.
–
THE DOCTORDec 13 '10 at 21:34

@zedo how could it? There is only 1 row in sysindexes for each index. Also the value of the rows field is an estimate
–
user533832Dec 13 '10 at 21:49

The value of the rows field is not merely an estimate; it depends on the last time STATISTICS were updated for the db. If the DBCC SHOW_STATISTICS (table_name, index_name) command indicates that they are outdated, then one may run the following command to update statistics: USE <database_name>; EXEC sp_updatestats
–
THE DOCTORDec 13 '10 at 22:12

@zedo it still should be considered an estimate unless you are the only user on that database - but this is an aside really as sysindexes can only give you an estimate of the total number of rows in the table, which is not what anderson wants
–
user533832Dec 14 '10 at 15:10

There are a few details I'd like you to clarify (I would put these into comments on the question, but it is actually easier to remove them from here when you update your question).

What is the intended usage of the data: insert once and get the counts many times, or are your inserts and selects approximately on par?

Do you care about insert/update performance?

What is the engine used for the table? (heck you can do SHOW CREATE TABLE ...)

Do you need the counts to be exact, or only approximately exact (say, correct to within 0.1%)?

Can you use triggers, summary tables, change schema, change RDBMS, etc.. or just add/remove indexes?

Maybe you should also explain what this table is supposed to be. You have record_id with cardinality that matches the number of rows, so is it a PK, an FK, or what? Also, the cardinality of date_updated suggests (though not necessarily correctly) that it has the same value for ~5,000 records on average, so what is that? It is OK to ask a SQL tuning question with no context, but it is also nice to have some context, especially if redesigning is an option.

In the meantime, I suggest you get this tuning script and check the recommendations it gives you (it's just a general tuning script, but it will inspect your data and statistics).

I took a DB2 class a while back, and I remember the instructor mentioning count(1) for when we just want to count the number of rows in the table regardless of the data, because it is technically faster than count(*). Let me know if it makes a difference.

this has been proven NOT to be the case in Oracle. The performance is the same with count(*) and count(1).
–
RandyDec 12 '10 at 3:15

@Randy: Is it the case in this situation -> MySQL as stated in OP?
–
DanosaureDec 12 '10 at 16:23

@Danosaure - I'm not sure, just trying to add some info that I know about Oracle. I do not have a MySQL instance to do the tests myself either... :(
–
RandyDec 16 '10 at 2:23

1

actually, MySQL (MyISAM, not InnoDB) is supposed to have optimizations for some COUNT(*) queries, like those with no conditions, or on a primary key. So this statement by limc should be inapplicable. Feel free to test, though.
–
zanlokDec 17 '10 at 21:14