Hello, comments inline. Regards, Karen.
>
>
> I checked in the meantime, and it does not make a truly consistent backup of
> MyISAM - it locks all tables - yes, ALL tables - and then copies the files. Given that
> MyISAM doesn't support transactions, that means that any "transactions" (that is, sets of
> related queries) in progress will be copied in the state they were, and the backup may
> contain inconsistent data like unreferenced rows.
This, however, means that the resulting snapshot will be consistent. Taking a backup
really cannot create more consistency than what the application has by design. If the
application inserts related rows into, say, two related tables without transactions, it
implicitly assumes it is prepared to handle the case where updates make it to one table
and not to the other. If it is not prepared to handle that, then any client crash will
create inconsistent data, not to mention a server crash.
>
>> I am not using xtrabackup, but I think the --single-transaction and -q
>> options may solve this issue
> --single-transaction, as you say, only affects InnoDB tables.
>
> --lock-all-tables will give you the same behaviour for MyISAM as xtrabackup, but is
> likely to be incompatible with --single-transaction.
>
> Neither of those solves your consistency issue.
>
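For reference, the two invocations discussed above look roughly like this on the command
line (a sketch only; the database name and any credentials are placeholders):

```shell
# Consistent snapshot of InnoDB tables: runs the dump inside a single
# transaction, so writers are not blocked -- but MyISAM tables get no
# such guarantee.
mysqldump --single-transaction --quick mydb > dump_innodb.sql

# MyISAM-safe variant: acquires a global read lock for the duration of
# the whole dump, blocking all writes to all tables.
mysqldump --lock-all-tables mydb > dump_all.sql
```

As far as I remember, the manual documents --lock-all-tables as automatically turning
--single-transaction off, so combining the two buys nothing.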
Somebody mentioned xtrabackup to me. How different is it from yet another wrapper program
with ibbackup at the core? I will be very curious to know if there exists a product
that does NOT do the following at the basic technical level:
- FLUSH TABLES WITH READ LOCK; <-- locks all tables
- take backup
With products of this kind, all that can be done is to accept that they do what they do.
That does not exclude options like reducing the downtime with, say, split-mirror
snapshots, or creating a replica and taking the downtime there.
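Spelled out with the plain mysql client, that basic technical level amounts to something
like this (a sketch; the paths are hypothetical, and the `system` client command stands
in for whatever copy mechanism the tool actually ships):

```shell
# Hold the global read lock in one session for the whole copy;
# the lock is released when the session ends.
mysql <<'SQL'
FLUSH TABLES WITH READ LOCK;  -- locks ALL tables against writes
SHOW MASTER STATUS;           -- optionally record the binlog position
-- copy the data files while the lock is still held:
system cp -r /var/lib/mysql/mydb /backup/mydb
UNLOCK TABLES;
SQL
```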
If I let my fantasy run wild about what an alternative could be, it would be something
like this:
- loop over all MyISAM tables
- for each table, take a read lock on it, blocking writers
- record the current binary log position for this table
- read the table into the backup
- release the lock
To recover this, do the following:
- restore the table backups
- start reading the binlog from the earliest recorded position, record by record
- determine which table the record is a change for
- if the record's position is greater than the position recorded for that table, apply
the change; otherwise skip it.
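Written down as a script, that fantasy might look something like the sketch below.
This is entirely hypothetical: it assumes SELECT ... INTO OUTFILE is usable for the dump
(ignoring secure_file_priv and such), and it glosses over the genuinely hard part, which
is filtering binlog records per table on replay.

```shell
#!/bin/sh
DB=mydb           # hypothetical database and backup location
BACKUP_DIR=/backup

for t in $(mysql -N -B -e "SHOW TABLES" "$DB"); do
    # One mysql session per table, so the lock is held across all
    # three steps and released when the session ends.
    mysql -B "$DB" > "$BACKUP_DIR/$t.pos" <<SQL
LOCK TABLES $t READ;
SHOW MASTER STATUS;
SELECT * FROM $t INTO OUTFILE '$BACKUP_DIR/$t.txt';
UNLOCK TABLES;
SQL
done

# Recovery: restore the per-table dumps, then replay the binlog from the
# earliest position found in the .pos files, applying a record to a table
# only if its position is greater than the one recorded for that table.
# Note that mysqlbinlog can filter by database (--database) but not by
# table, so the per-table decision needs a custom consumer of its output.
```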
The result is that locks are taken per table, which is no worse than the normal access
pattern for MyISAM tables.
> The answer to "is there a way to take consistent backups of MyISAM tables without
> stopping the application" is "no, there is not". The binlog backup strategy I roughly
> outlined earlier can dramatically decrease your application's downtime, however.
>
If we think about it, a MyISAM table is, by definition, a table whose consistency is
based on whole-table locks. Considering that a backup is really a request to read the
whole table while locking out everybody else, the question about backup can well be
rephrased as "is a MyISAM table a MyISAM table?" The answer to this is a firm yes :-).
