Chapter 46. Disaster Prevention and Recovery

Table of Contents

Database Backups
Using myisamchk for Table Maintenance and Crash Recovery
myisamchk Invocation Syntax
General Options for myisamchk
Check Options for myisamchk
Repair Options for myisamchk
Other Options for myisamchk
myisamchk Memory Usage
Using myisamchk for Crash Recovery
How to Repair Tables
Table Optimization
Setting Up a Table Maintenance Regimen
Getting Information About a Table

This section discusses how to make database backups and how to perform table maintenance. The syntax of the SQL statements described here is given in Database Administration.

Database Backups

Because MySQL tables are stored as files, it is easy to do a backup. To get a consistent backup, issue a LOCK TABLES statement for the relevant tables, followed by FLUSH TABLES for those tables. See LOCK TABLES. See FLUSH. You need only a read lock; this allows other threads to continue to query the tables while you are making a copy of the files in the database directory. The FLUSH TABLES statement is needed to ensure that all active index pages are written to disk before you start the backup.
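For example, a minimal locked copy might look like this (the table name, database name, and paths are placeholders; keep the connection that holds the lock open until you issue UNLOCK TABLES):

     mysql> LOCK TABLES tbl_name READ;
     mysql> FLUSH TABLES tbl_name;

     shell> cp /path/to/datadir/db_name/tbl_name.* /path/to/backup/dir/

     mysql> UNLOCK TABLES;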

Starting from MySQL 3.23.56 and 4.0.12, BACKUP TABLE does not allow you to overwrite existing files, because that would be a security risk.

If you want to make an SQL level backup of a table, you can use SELECT INTO OUTFILE or BACKUP TABLE. See SELECT. See BACKUP TABLE.
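For example, assuming tbl_name is a MyISAM table and the target directory does not already contain backup files for it:

     mysql> BACKUP TABLE tbl_name TO '/path/to/backup/dir';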

Another way to back up a database is to use the mysqldump program or the mysqlhotcopy script. See mysqldump. See mysqlhotcopy.

  1. Do a full backup of your database:

     shell> mysqldump --tab=/path/to/some/dir --opt db_name
     

    or:

     shell> mysqlhotcopy db_name /path/to/some/dir
     

    You can also simply copy all table files (*.frm, *.MYD, and *.MYI files) as long as the server isn't updating anything. The mysqlhotcopy script uses this method. (Note that these methods do not work if your database contains InnoDB tables. InnoDB does not store table contents in the database directory, and mysqlhotcopy works only for MyISAM and ISAM tables.)

  2. Stop mysqld if it is running, then restart it with the --log-bin[=file_name] option. See Binary log. The binary log files provide the information you need to replicate changes made to the database after the point at which you executed mysqldump.
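A restart with binary logging enabled might look like this (using the mysqld_safe startup script; in versions before 4.0 it is called safe_mysqld):

     shell> mysqladmin -u root -p shutdown
     shell> mysqld_safe --log-bin=hostname-bin &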

Whatever backup method you choose, if your MySQL server is a slave, you should also back up the master.info and relay-log.info files when you back up the slave's data; they are needed to resume replication after you restore that data. If your slave replicates LOAD DATA INFILE commands, you should also back up any SQL_LOAD-* files that exist in the directory specified by the --slave-load-tmpdir option. (This location defaults to the value of the tmpdir variable if not specified.) The slave needs these files to resume replication of any interrupted LOAD DATA INFILE operations.
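Assuming a default data directory layout (all paths here are placeholders), the additional slave files can be copied like this:

     shell> cp /path/to/datadir/master.info /path/to/datadir/relay-log.info \
              /path/to/backup/dir/
     shell> cp /path/to/slave-load-tmpdir/SQL_LOAD-* /path/to/backup/dir/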

If you have to restore something, first try to recover your tables with REPAIR TABLE or myisamchk -r. That should work in 99.9% of all cases. If myisamchk fails, try the following procedure. (It works only if you have started MySQL with --log-bin; see Binary log.)

  1. Restore the original mysqldump backup or binary backup.

  2. Execute the following command to re-run the updates in the binary log:

     shell> mysqlbinlog hostname-bin.[0-9]* | mysql
     

    In your case, you may want to re-run only certain binary logs from certain positions. (Usually you want to re-run all binary logs from the date of the restored backup, possibly excluding some incorrect statements.) See mysqlbinlog for more information on the mysqlbinlog utility and how to use it.

    If you are using the update log (which is removed as of MySQL 5.0.0), you can execute its contents like this:

     shell> ls -1 -t -r hostname.[0-9]* | xargs cat | mysql
     

ls is used to list all the update log files in the correct order.
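The selective re-run described in step 2 above might look like this for a single binary log, started from a given position (the file name and position are hypothetical values; in versions before 4.1.1 the option is spelled --position rather than --start-position):

     shell> mysqlbinlog --start-position=4 hostname-bin.003 | mysql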

You can also make selective backups with SELECT * INTO OUTFILE 'file_name' FROM tbl_name and restore with LOAD DATA INFILE 'file_name' REPLACE ... To avoid duplicate records, the table must have a PRIMARY KEY or a UNIQUE index. The REPLACE keyword causes old records to be replaced with new ones when a new record duplicates an old record on a unique key value.
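A selective backup and restore of one table might look like this (the file path is a placeholder, and the output file must not already exist):

     mysql> SELECT * INTO OUTFILE '/tmp/tbl_name.txt' FROM tbl_name;
     mysql> LOAD DATA INFILE '/tmp/tbl_name.txt' REPLACE INTO TABLE tbl_name;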

If you have performance problems with your server while making backups, you can solve this by setting up replication and performing the backups on the slave instead of on the master. See Replication Intro.

If you are using a Veritas filesystem, you can do:

  1. From a client (or Perl), execute: FLUSH TABLES WITH READ LOCK.

  2. From another shell, execute: mount vxfs snapshot.

  3. From the first client, execute: UNLOCK TABLES.

  4. Copy files from snapshot.

  5. Unmount snapshot.
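The steps above can be sketched as follows (device, mount point, and path names are placeholders; the exact vxfs snapshot mount syntax depends on your Veritas version, and the client holding the read lock must keep its connection open until UNLOCK TABLES):

     mysql> FLUSH TABLES WITH READ LOCK;

     shell> mount -F vxfs -o snapof=/path/to/datadir /dev/vx/dsk/snapvol /mnt/snapshot

     mysql> UNLOCK TABLES;

     shell> cp -R /mnt/snapshot/db_name /path/to/backup/dir/
     shell> umount /mnt/snapshot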