This takes about 60 seconds to execute. The ORDER BY uses two columns that are each a varchar(50). If I change the ORDER BY column to pID, the SELECT takes about 3 seconds to execute. Any suggestions on optimizing this? Thanks.
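To illustrate the difference (this is only the shape of the query, not the actual statement; column list and table names are placeholders):

```sql
EXPLAIN SELECT ...                   -- actual column list omitted
FROM artdistinct
ORDER BY ArtistLN, ArtistFN;         -- Extra typically shows "Using filesort"

EXPLAIN SELECT ...
FROM artdistinct
ORDER BY pID;                        -- can follow the primary key, no filesort
```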

Variable | Value | Description
Bytes_received | 137 | Bytes sent from the client to the server
Bytes_sent | 4469 | Bytes sent from the server to the client
Com_select | 1 | Number of SELECT statements that have been executed
Handler_commit | 1 | Number of internal commit statements
Handler_read_first | 1 | Number of times the first entry was read from an index; a high value indicates that full index scans were done
Handler_read_key | 152038 | Number of requests to read a row based on a key
Handler_read_next | 152035 | Number of requests to read the next row in key order, as happens in range scans and index scans
Innodb_buffer_pool_pages_data* | 501 | Number of buffer pool pages containing data, both dirty and clean
Innodb_buffer_pool_pages_misc# | 10 | Number of buffer pool pages allocated for administrative overhead
Innodb_buffer_pool_read_ahead_rnd# | 41 | Number of random read-aheads by InnoDB; the query scanned a large portion of a table in random order
Innodb_buffer_pool_read_ahead_seq# | 969 | Number of sequential read-aheads by InnoDB
Innodb_buffer_pool_read_requests# | 898039 | Number of logical read requests InnoDB has done
Innodb_buffer_pool_reads# | 24807 | Number of logical reads that could not be satisfied from the buffer pool and had to be read from disk
Innodb_data_read# | 586055680 | Amount of data read by InnoDB, in bytes
Innodb_data_reads# | 35770 | Total number of data reads by InnoDB
Innodb_rows_read# | 152157 | Number of rows read from InnoDB tables
Last_query_cost* | 182953 | Total cost of this query as computed by the query optimizer
Questions | 1 | Number of statements executed by the server
Select_scan | 1 | Number of full table scans of the first table in the query
Slow_queries | 1 | This query took longer than long_query_time; it probably requires optimization
Table_locks_immediate | 2 | Number of requests for table locks that could be granted immediately
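For anyone wanting to reproduce numbers like these, per-query counters of this kind can be captured in a session with the standard MySQL technique below; the comment stands in for the slow query itself:

```sql
FLUSH STATUS;                                -- zero the session counters
-- ... run the slow query here ...
SHOW SESSION STATUS LIKE 'Handler%';         -- row-access counters
SHOW SESSION STATUS LIKE 'Innodb_buffer%';   -- buffer pool activity
SHOW SESSION STATUS LIKE 'Last_query_cost';  -- optimizer's cost estimate
```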

Comment

only 122 rows, but the result sets this query has to handle range from 2 to 11,178 rows.

I just reran the query using the largest Category, and suddenly some of my queries take about 1 second each. Others take 3 to 6 seconds, and some are occasionally still about 60 seconds, but most are acceptable. I'm really confused now; I have run this many times with no consistency.

Yesterday, while I was trying to figure this out, I added an index on (ArtistLN, ArtistFN) to the artdistinct table, but the performance didn't change. Since yesterday, the only other change I have made is a reboot.

I don't know if this makes a difference, but the tables used in the query are largely static: they only change when I make an occasional update as new art becomes available. During normal use they don't change.

Comment

I am thinking you need different indexes. The output you pasted is hard for me to read because some numbers run together in the EXPLAIN, but I think you might try an index on (CategoryID, ArtistLN, ArtistFN).
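A sketch of what I mean, assuming these columns live in one table (the table and index names below are guesses; adjust to your schema):

```sql
ALTER TABLE artdistinct
  ADD INDEX idx_cat_artist (CategoryID, ArtistLN, ArtistFN);
-- With this, a query that filters on CategoryID and orders by
-- ArtistLN, ArtistFN can read rows already in sorted order,
-- avoiding the filesort entirely.
```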

This general type of query looks like a suboptimal design anyway, and you might consider other approaches. It looks like a typical pagination query; search our blog for "pagination" for some ideas.
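For example, a seek-style ("keyset") pagination query avoids re-scanning the rows a large OFFSET would skip. This is only a sketch with guessed column names and values; the idea is to remember the last row of the previous page and continue from it:

```sql
SELECT pID, ArtistLN, ArtistFN
FROM   artdistinct
WHERE  (ArtistLN, ArtistFN, pID) > ('Smith', 'John', 1234)  -- last row shown
ORDER  BY ArtistLN, ArtistFN, pID    -- pID added as a unique tiebreaker
LIMIT  20;
```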

Comment

It is a pagination query, so I will take a look at your blog for some ideas.

I thought about using an index on (CategoryID, ArtistLN, ArtistFN), but since CategoryID is in a different table than ArtistLN and ArtistFN, I didn't think a single index could span a join. Am I wrong about this?
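If I understand the limitation correctly, would the closest equivalent be one index per table, each covering its side of the join? Something like this (categoryart is just my stand-in name for the table that holds CategoryID):

```sql
-- on the table that maps categories to art:
ALTER TABLE categoryart ADD INDEX idx_cat (CategoryID, pID);
-- on artdistinct, to supply the sort order:
ALTER TABLE artdistinct ADD INDEX idx_artist (ArtistLN, ArtistFN);
```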

I also thought about de-normalizing this into a single table, but in addition to this query for Art Style, I also have similar queries and paginations for Primary Color, Subjects, and Artists, so de-normalization is not an option.