Question asked by a student: is there a way to limit a table's quota on disk? Say, limit a table to 2GB, after which it will refuse to grow? Note that the requirement is that rows are never DELETEd. The table must simply refuse INSERTs and UPDATEs once it reaches a certain size.

There is no built-in way to limit a table's quota on disk. The first thing to observe is that MySQL itself has nothing to do with this: it is entirely up to the storage engine to provide such functionality. The storage engine is the one that handles data storage: how table rows and keys are stored on disk. Just consider the difference between MyISAM's .MYD & .MYI files, InnoDB's shared tablespace ibdata1, and InnoDB's file-per-table .ibd files.

The only engine I know of that has a quota is the MEMORY engine: it honors max_heap_table_size, which limits the size of a single table in memory. Hrmmm... In memory...
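As a quick sketch of that behavior (the table definition here is just for illustration), a MEMORY table refuses to grow past the limit:

```sql
SET SESSION max_heap_table_size = 16*1024*1024;  -- 16MB cap, taken at CREATE time

CREATE TABLE limited_logs (
  id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
  msg VARCHAR(255)
) ENGINE=MEMORY;

-- Once the table reaches max_heap_table_size, further INSERTs fail with:
-- ERROR 1114 (HY000): The table 'limited_logs' is full
```

This is exactly the kind of hard quota the question asks for, only in memory rather than on disk.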

Why limit?

I'm not yet aware of the specific requirements of said company, but this is not the first time I've heard this question.

The fact is: when MySQL runs out of disk space, it goes down with a BOOM. It crashes ungracefully, with binary logs out of sync and replication out of sync. To date, in the cases I've seen, InnoDB merely crashes and manages to recover once disk space is salvaged, but I am not certain this is guaranteed to be the case. Anyone?

And, with MyISAM..., who knows?

Rule #1 of MySQL disk usage: don't run out of disk space.

Workarounds

I can think of two workarounds, neither of which is pretty. The first involves triggers (actually, a few variations on this one); the second involves privileges.
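One trigger variation is to check the table's reported size from within a BEFORE INSERT trigger. A sketch (schema and table names follow the examples in this post; note that DATA_LENGTH and INDEX_LENGTH are only estimates for InnoDB):

```sql
DELIMITER $$
DROP TRIGGER IF EXISTS logs_bi $$
CREATE TRIGGER logs_bi BEFORE INSERT ON logs
FOR EACH ROW
BEGIN
  SELECT DATA_LENGTH + INDEX_LENGTH INTO @table_size
    FROM INFORMATION_SCHEMA.TABLES
    WHERE TABLE_SCHEMA = 'world' AND TABLE_NAME = 'logs';
  IF (@table_size > 2*1024*1024*1024) THEN
    -- Selecting from a nonexistent table forces an error, aborting the INSERT
    SELECT 0 FROM `logs table is full` INTO @error;
  END IF;
END $$
DELIMITER ;
```

Querying INFORMATION_SCHEMA on every single row insert is expensive, and the size figures are approximations.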

Or, you could write your own UDF, e.g. get_table_file_size(fully_qualified_table_name) and be more accurate:

DELIMITER $$
DROP TRIGGER IF EXISTS logs_bi $$
CREATE TRIGGER logs_bi BEFORE INSERT ON logs
FOR EACH ROW
BEGIN
  SELECT get_table_file_size('world.logs') INTO @table_size;
  IF (@table_size > 25*1024) THEN
    -- Selecting from a nonexistent table forces an error, aborting the INSERT
    SELECT 0 FROM `logs table is full` INTO @error;
  END IF;
END $$
DELIMITER ;

(Same should be done for UPDATE operations)

In the trigger-based workarounds above, the triggers are pre-defined. But triggers are performance killers.

How about preventing writes to the table only when it's truly on the edge? A simple shell script, spawned by a cron job, could do this well: get the file size of a specific table, and test whether it's larger than n bytes. If not, the script exits. If the file is indeed too large, the script invokes the following on MySQL:
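A minimal sketch of such a script (file path, threshold, and the trigger/table names are assumptions carried over from the earlier example):

```shell
#!/bin/sh
# Hypothetical quota check: emit trigger-creating SQL only when the
# table's datafile exceeds a byte limit. Intended to run from cron,
# with the output piped into the mysql client.

check_quota() {
  datafile="$1"    # e.g. /var/lib/mysql/world/logs.ibd
  max_bytes="$2"   # e.g. 2147483648 for a 2GB quota

  # GNU stat; on BSD/macOS use: stat -f%z
  size=$(stat -c%s "$datafile") || return 1

  # Under quota: nothing to do, emit nothing.
  [ "$size" -le "$max_bytes" ] && return 0

  # Over quota: emit SQL that installs an always-failing BEFORE INSERT
  # trigger; selecting from a nonexistent table forces an error.
  cat <<'EOF'
CREATE TRIGGER logs_bi BEFORE INSERT ON logs
FOR EACH ROW SELECT 0 FROM `logs table is full` INTO @error;
EOF
}

# Example cron usage:
#   check_quota /var/lib/mysql/world/logs.ibd 2147483648 | mysql world
```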

So, most of the time, there is no trigger at all. Only when the external script detects that the table is too large does it create one. The trigger has no logic: it simply raises an error (PS: use SIGNAL in MySQL 5.5).

Privileges

Another way to work around the problem is to use security features. Instead of creating a trigger on the table, REVOKE the INSERT & UPDATE privileges from the appropriate user on that table.

This may turn out to be a difficult task, since MySQL has no notion of fine-grained privilege changes: a privilege granted at the database level cannot be revoked for a single table. That is, suppose we have:
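For illustration (the database and account names here are assumptions, following the discussion), a database-level grant, and the table-level revoke we would like to issue:

```sql
GRANT SELECT, INSERT, UPDATE, DELETE ON world.* TO 'webuser'@'%.webdomain';

-- Later, when the table hits its quota, attempt a table-level revoke:
REVOKE INSERT, UPDATE ON world.logs FROM 'webuser'@'%.webdomain';
```

The REVOKE is refused, because the privilege was never granted at the table level.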

MySQL refuses with an error: "There is no such grant defined for user 'webuser' on host '%.webdomain' on table 'logs'".

So this requires setting up privileges at the table level in the first place. Plus, note that as long as the database-level grants allow for INSERTs, you cannot override them at the table level.
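A sketch of the setup this implies (names are again assumptions): grant per-table from the start, so that write access to just this one table can later be revoked:

```sql
-- Initial setup: no database-level write privileges, only table-level ones
GRANT SELECT, INSERT, UPDATE ON world.logs TO 'webuser'@'%.webdomain';

-- When the quota is exceeded, the external script revokes write access;
-- GRANT/REVOKE statements take effect immediately, no FLUSH PRIVILEGES needed
REVOKE INSERT, UPDATE ON world.logs FROM 'webuser'@'%.webdomain';
```

The upside is that there is no trigger overhead at all; the downside is a more rigid privilege scheme.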

Other ideas?

I never actually implemented table disk quotas. I'm not sure this is a viable solution; but I haven't heard all the arguments in favor yet, so I don't want to rule it out.

Please share below if you are using other means of table size control, other than the trivial cleanup of old records.