I just noticed that after the mysqldump finishes and I try to access the website, it is very slow. Looking at the output of "free -mh", I see that the server is now swapping, which it wasn't before the mysqldump.

What am I to do in this case? Just restart the server every time I back up? That doesn't seem very practical.

Are you including the "--quick" option? It stops mysqldump from buffering the entire result set for a large table in memory before writing it out; instead, it retrieves and writes rows one at a time. Buffering whole tables could be what is forcing your server to swap.
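A minimal sketch of such an invocation (the user, database name, and output path are placeholders, and "--single-transaction" assumes InnoDB tables):

```shell
# --quick streams rows one at a time instead of buffering each whole
# table in memory; --single-transaction gives a consistent snapshot
# for InnoDB without locking the tables during the dump.
mysqldump --quick --single-transaction \
    -u backup_user -p example_db > /var/backups/example_db.sql
```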

Could the "max_allowed_packet" setting also be a factor? Is there a way to force it not to swap?
–
Daniel Fischer Feb 4 '10 at 2:03

You can eliminate max_allowed_packet, or lower its value. We use a large value because of the '-e' option (extended inserts): in combination with some large data rows, we don't want the dump to crash because of an oversized row. If your row size is small and fixed, you can lower that value to close to the maximum row size. You can also eliminate the '-e' option; that may lower memory usage, but will result in larger dump files. Swapping is a function of the OS and the memory needed by running processes. There is nothing you can set w.r.t. mysqldump to eliminate swapping from ever happening.
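The tuning described above might look like the following sketch (the 16M cap and the user/database names are hypothetical; pick a value just above your largest row):

```shell
# Cap max_allowed_packet near the largest expected row size and skip
# extended inserts; this trades lower memory usage for a larger dump
# file with one INSERT statement per row.
mysqldump --quick --max_allowed_packet=16M --skip-extended-insert \
    -u backup_user -p example_db > /var/backups/example_db.sql
```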
–
Craig Feb 4 '10 at 22:33

How large is "large"? Is a 100MB table (as an SQL dump) large, or are we talking about GB scale?
–
cherouvim May 12 '10 at 9:59