My table has about 30 columns. Some columns (e.g. a and b) hold values from 0-200, while others (e.g. c and d) hold only 5 distinct values (0, 1, 2, 3, 4). The table has approximately 120k rows.

To get the number of rows per value, I run one query per column:

SELECT a, COUNT(*) FROM test GROUP BY a;
SELECT b, COUNT(*) FROM test GROUP BY b;
SELECT c, COUNT(*) FROM test GROUP BY c;
SELECT d, COUNT(*) FROM test GROUP BY d;
SELECT e, COUNT(*) FROM test GROUP BY e;

The problem is that this fires 30 queries (one per column), each of which scans the same set of data.

Is there a better way to do this?

I have tried GROUP BY ... WITH ROLLUP, but that produces a massive result set which is slower to process than the individual queries.
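For reference, a guess at what that attempt may have looked like (a sketch only; the original query is not shown in the question):

```sql
-- WITH ROLLUP adds a super-aggregate row for every prefix of the
-- GROUP BY list, so grouping several columns at once yields the
-- cross-product of value combinations plus rollup rows -- far more
-- rows than the simple per-column counts actually needed.
SELECT a, b, c, COUNT(*)
FROM test
GROUP BY a, b, c WITH ROLLUP;
```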

-- UNION ALL, not UNION: the label column already keeps the branches
-- distinct, so the duplicate-elimination pass of plain UNION is wasted
-- work. Grouping by the constant label is also unnecessary.
SELECT 'a' AS `column`, a AS data, COUNT(*)
FROM test
GROUP BY a
UNION ALL
SELECT 'b', b, COUNT(*)
FROM test
GROUP BY b
UNION ALL
SELECT 'c', c, COUNT(*)
FROM test
GROUP BY c
UNION ALL
SELECT 'd', d, COUNT(*)
FROM test
GROUP BY d
UNION ALL
SELECT 'e', e, COUNT(*)
FROM test
GROUP BY e;

I don't know whether it is any better, but at least the planner will have a chance to optimize it.

Depending on the workload here, it might be more efficient to build a table of aggregate data once and then keep it up to date whenever the base table is modified. The aggregate table would have one row for each (present) value and 30 columns of counts. You can then add triggers to the original table that update the counts. Naturally this slows down write operations on the original table, but so would adding 30 indexes.
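A minimal sketch of that idea, assuming MySQL; the table name `test_counts`, the column names, and the trigger shown are all made up, and only three of the 30 count columns are included:

```sql
-- Hypothetical aggregate table: one row per value seen anywhere,
-- one count column per column of the original table.
CREATE TABLE test_counts (
  val     INT PRIMARY KEY,
  a_count INT NOT NULL DEFAULT 0,
  b_count INT NOT NULL DEFAULT 0,
  c_count INT NOT NULL DEFAULT 0
);

DELIMITER //
CREATE TRIGGER test_after_insert AFTER INSERT ON test
FOR EACH ROW
BEGIN
  -- Bump the count for the inserted row's value in each column;
  -- ON DUPLICATE KEY UPDATE creates the value row on first sight.
  INSERT INTO test_counts (val, a_count) VALUES (NEW.a, 1)
    ON DUPLICATE KEY UPDATE a_count = a_count + 1;
  INSERT INTO test_counts (val, b_count) VALUES (NEW.b, 1)
    ON DUPLICATE KEY UPDATE b_count = b_count + 1;
  INSERT INTO test_counts (val, c_count) VALUES (NEW.c, 1)
    ON DUPLICATE KEY UPDATE c_count = c_count + 1;
END//
DELIMITER ;
```

Matching AFTER UPDATE and AFTER DELETE triggers that decrement the old value's count would be needed to keep the table consistent; reading the counts then becomes a single cheap SELECT from `test_counts`.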

Adding an index on a column with only 5 distinct values doesn't help, since MySQL will probably ignore it. An aggregate table sounds good, but in my case I already select a subset of this table based on some columns, so a precomputed aggregate table won't work here.
– Nin, Oct 1 '12 at 7:30