
MySQL Query Optimization

I heard a comment from a developer the other day:

You don’t need indexes on small tables.

So I asked what his definition of a small table was. He said anything with a few hundred rows. So I asked: 2300 rows? Well….. 24000 rows? Well….. 292000 rows? That’s large. I then showed him unindexed queries in his own application hitting tables with 2300, 24000, and 292000 rows.

Avoid tablescans

When MySQL handles an unindexed query, it does a full tablescan, checking every record in the table against the criteria. On a small table that is queried frequently, the MySQL query cache might be able to serve the result. On a larger table, or a table with large rows, it must read every row, check the fields, possibly create a temporary table in RAM or on disk, and then return the results. On a small site you might not notice it, but on a busy system, forcing tablescans on tables with even a few thousand rows will slow things down considerably.

Use the slow-query log to find potential issues

Enabling log-queries-not-using-indexes in the my.cnf file and restarting MySQL will log the unindexed queries to the slow query log.
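
For reference, a minimal my.cnf sketch (option names as of MySQL 5.0/5.1; the log path and the two-second threshold are just examples):

    [mysqld]
    log-slow-queries = /var/log/mysql/mysql-slow.log
    long_query_time = 2
    log-queries-not-using-indexes

After the restart, every query slower than long_query_time, plus every query that uses no index, lands in that log, and mysqldumpslow will summarize it for you.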

What can be indexed?

The rule of thumb when creating indexes is to write your query in such a way that you reduce the result set as quickly as possible, which means putting the highest-cardinality column first. What does this mean?

If you are collecting the IP address and the date of each hit, an index on date,ip will actually be worse than ip,date. Imagine receiving 40000 hits to your site on the same date. If you were looking for the number of hits a particular IP had made, an ip,date index would scan the 41 hits that IP has made over time and then narrow to the 8 from today. With date,ip, you would scan 40000 rows before arriving at the same 8.

Every index you add carries extra overhead, so an index file should be as small as possible. An IP address can be stored in an unsigned int, which takes much less space than the varchar(15) usually used. Remember that when you index a varchar field, the key is space-padded to the full length. If you have a variable-length field you want indexed, you may be able to find the significant portion of that field by taking the average length, adding a few characters for good measure, and indexing fieldname(15) rather than the entire field. Even if a value is longer than 15 characters, you have still dramatically reduced the number of rows that must be checked.
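
As a sketch of both ideas (the table and column names here are hypothetical), put the higher-cardinality column first and pack the IP into an unsigned int with MySQL’s INET_ATON():

    CREATE TABLE hits (
        ip       INT UNSIGNED NOT NULL,   -- INET_ATON('192.0.2.15') = 3221225999
        hit_date DATE NOT NULL,
        referrer VARCHAR(255),
        KEY idx_ip_date (ip, hit_date),   -- ip first: far more selective than date
        KEY idx_ref (referrer(15))        -- prefix index: only the first 15 chars
    );

    -- Narrows to one visitor's ~41 rows first, then to today's 8:
    SELECT COUNT(*) FROM hits
    WHERE ip = INET_ATON('192.0.2.15') AND hit_date = '2009-08-28';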

Cardinality refers to the uniqueness of the data. The more unique the data, the lower the chance that thousands of records will match the first criterion. When the data is very similar, the index as built on disk can become imbalanced, resulting in slower queries. Since MyISAM and InnoDB use a B-tree index (or an R-tree if you use a spatial index), data that is similar when inserted can create a very imbalanced tree, which leads to slower lookups. An OPTIMIZE TABLE can re-sort and rebuild the index to eliminate this, but you can’t do that on an extremely large, active table without impacting response times.
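
Both of these are easy to check from the MySQL prompt (using the hypothetical hits table from above):

    -- The Cardinality column estimates the distinct values behind each index
    SHOW INDEX FROM hits;

    -- Re-sorts the data and rebuilds the indexes; it locks the table while
    -- running, which is why you avoid it on a huge, active table
    OPTIMIZE TABLE hits;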

These two queries show two different issues but deal with the same fundamental problem. First, id is not indexed, though an index there would have limited the result set to 9 records rather than 2548; the status check isn’t able to use an index at all. In the second query, status is checked followed by traffic, and other queries in the application check status,traffic,clicks_high. Looking at status (which should be an enum or char(1) rather than varchar(1)), we find that only 4 values are used. By indexing on id,status and status,traffic,clicks_high, we could alter the queries to take advantage of those indexes.
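
The original queries aren’t reproduced here, but the index additions themselves would look something like this (the table name is hypothetical; the column names come from the discussion above):

    ALTER TABLE campaigns
        ADD KEY idx_id_status (id, status),
        ADD KEY idx_status_traffic (status, traffic, clicks_high);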

Based on this, we might decide to set the key length to 22, as it is a relatively small number and leaves room to grow. Personally, I would have made the id an unsigned int, which would be much smaller, but the application developer uses alphanumeric IDs that are exposed externally. With sharding, you could use that id throughout the various tables, or you could map the text id to a numeric id internally for all of them.
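
The check behind “based on this” was presumably something along these lines (a guess at the measurement, continuing with the hypothetical table name):

    -- Measure the ids before committing to a prefix length
    SELECT AVG(LENGTH(id)) AS avg_len, MAX(LENGTH(id)) AS max_len FROM campaigns;

    -- With 22 chosen, the key from the earlier sketch becomes a prefix key:
    ALTER TABLE campaigns DROP KEY idx_id_status,
                          ADD KEY idx_id_status (id(22), status);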

There are a number of possible ways to help any SQL engine perform better, and your data set will dictate some of the things you can do to make data access quicker.

Helping MySQL Help You

If you run select * from table where condition_a=1 and condition_b=2 in one place and select * from table where condition_b=2 and condition_a=1 in another, set up a single index on condition_a,condition_b and rewrite the second query so its conditions appear in the same order as the columns in the index; this will increase performance.
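
A minimal sketch, assuming a table named mytable (all names hypothetical):

    ALTER TABLE mytable ADD KEY idx_a_b (condition_a, condition_b);

    -- Both call sites now issue the same shape of query, matching the index order:
    SELECT * FROM mytable WHERE condition_a = 1 AND condition_b = 2;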

Limit your results

Another thing that helps considerably is a limit clause. Many times a programmer will do select * from table where condition_a=1, which returns 2300 rows, even though only the first few are used. A limit clause keeps MySQL from fetching and buffering a lot of data that nobody will read: select * from table where condition_a=1 limit 20 hands you just the first 20 records.

Avoid reading the data file; do all your work from the index

Additionally, if you only need three of the columns from the result, select fielda,fieldb,fieldc from table where condition_a=1 will return only those three fields. As an added boost, if everything the query needs can be answered from the index, the query never hits the actual data file and is answered from the index alone. Many times I’ve added a field that wasn’t strictly needed to an index just to eliminate the key lookup in the index followed by the corresponding read of the data file.
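
A sketch of such a covering index (names hypothetical again); EXPLAIN shows “Using index” in its Extra column when this kicks in:

    ALTER TABLE mytable ADD KEY idx_cover (condition_a, fielda, fieldb, fieldc);

    -- Every referenced column lives in the index, so MySQL answers this
    -- without reading a single row from the data file:
    SELECT fielda, fieldb, fieldc FROM mytable WHERE condition_a = 1;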

Let MySQL do the work

MySQL reads tables, filters results, and can do calculations and sorting. Going through 40000 records to pick the best 100 is still faster in MySQL than letting PHP fetch all 40000 rows and do the calculations and sorts itself to come up with those 100. Index, optimize, and let MySQL do the database work.
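
In other words, push the sort and the cut-off into the query itself (column and table names hypothetical):

    -- MySQL scans, sorts, and returns only the top 100;
    -- PHP never sees the other 39900 rows.
    SELECT id, score FROM results
    ORDER BY score DESC
    LIMIT 100;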

Summary

Making MySQL work more efficiently goes a long way towards making your database-driven site work better. Adding six indexes to this system resulted in quicker response times and an increase in transactions per second.

Previously, MySQL was generating 3.26 slow queries per second. Now we’re just below 2 slow queries per second, and the system is processing 55 more transactions per second. There is still more analysis to do to identify the slow queries that remain and to alter the queries to reverse the inequality checks, but even just adding indexes to a few tables has helped noticeably. Once the developer is able to make some changes to the application, I’m sure we’ll see an additional speedup.
