6 Answers

In general, having more of anything reduces performance, but 200 seems like a fairly small number. 2,000 might be a real performance hit, and 20,000 certainly would be. Even so, you should keep the number of tables small, since MySQL can handle a very large number of rows per table.

A large number of tables can make a huge difference if your application makes use of 'information_schema'. Nothing in 'information_schema' is cached. Several different ORM systems make heavy use of this.
– Zoredache Nov 10 '09 at 23:41
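For illustration, this is the kind of metadata lookup an ORM might issue against 'information_schema' (the schema name 'mydb' is hypothetical); each such query walks uncached table metadata, so its cost grows with the table count:

```sql
-- Illustrative only: a metadata query of the sort ORMs issue.
-- information_schema results are not cached, so this gets slower
-- as the number of tables grows.
SELECT table_name, engine, table_rows
FROM information_schema.tables
WHERE table_schema = 'mydb';
```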

It depends on several factors:

- What queries you're running: you aren't likely to be querying all 200 tables within a single query
- The overall query load on the system at any given point in time

Having excess tables does tie up memory and hard drive space that could be reclaimed and used for other things. Keep in mind that denormalizing your tables increases the risk of bad data, because you are giving up referential integrity.

Can it? Sure. But how much depends a great deal on your app and its access patterns, whether you're using MyISAM or InnoDB, whether InnoDB is in file-per-table mode or not, and the size of the InnoDB logs. You'll need to give us more detail than that.

No - I don't think the number of tables will be the performance bottleneck in your system. After all, they are just files on your filesystem. There's nothing unusual about having hundreds of tables in a database.

Far more likely is that your queries aren't properly optimised. I would advise turning on log-slow-queries and log-queries-not-using-indexes. When you identify slow queries, use EXPLAIN to view the query plan for those queries and spot where indexes are missing.
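A minimal my.cnf sketch for turning those on (the file path and one-second threshold are assumptions; older MySQL versions spell the first option log-slow-queries, newer versions use slow_query_log):

```ini
[mysqld]
# Log queries slower than 1 second (threshold is an assumption)
slow_query_log                = 1
slow_query_log_file           = /var/log/mysql/slow.log
long_query_time               = 1
# Also log queries that run without using any index
log_queries_not_using_indexes = 1
```

Then run EXPLAIN on anything the log flags to see the query plan.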

Each table's indexes take memory to hold and use, and there is a global limit on the memory set aside for them. When there are many tables, the indexes can't all sit in RAM, so they go to disk, which affects performance directly. Try raising this in my.cnf: key_buffer_size=256M. This is the amount of RAM set aside for holding index information (note that the key buffer applies to MyISAM indexes).
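As a sketch, you can gauge whether the key buffer is large enough from the key-read counters; a ratio of Key_reads to Key_read_requests well under 1% is commonly cited as healthy:

```sql
-- How often index reads had to hit disk (Key_reads) versus
-- being served from the key buffer (Key_read_requests)
SHOW GLOBAL STATUS LIKE 'Key_read%';

-- Current buffer size, in bytes
SHOW VARIABLES LIKE 'key_buffer_size';
```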

MySQL has many settings and multiple engines, each affecting table performance in different ways.

- MyISAM normally has three files per table: .frm, .MYD, .MYI
- InnoDB in normal mode has a .frm per table, with all the data stored in the central tablespace file
- InnoDB in file-per-table mode has .frm and .ibd

table_open_cache is the number of tables that can be open at once (default 64). This may need to be larger than the number of tables in your schema, as it is related to how many connections are querying the DB: 100 connections each joining 3 tables could mean 300 tables cached, plus any temporary tables. Generally, the more complex the schema or the more connections, the bigger the number.

open-files-limit: I tend to set this to 4x table_open_cache, which should be generous, rather than bothering to work out an exact value.
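Following the rule of thumb above, a my.cnf sketch (the values are assumptions; scale them to your schema and connection count):

```ini
[mysqld]
# Roughly: tables joined per query x concurrent connections, plus headroom
table_open_cache = 512
# 4x table_open_cache, per the rule of thumb above
open_files_limit = 2048
```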

The operating system's file-handle limit for the mysql user may also be an issue (on Linux the default is often 1024; ulimit -n shows the current limit). This can cause problems with large numbers of tables when it is less than the number MySQL requires. It should be at least as large as open-files-limit.
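A quick sketch for checking the limit; run it as the mysql user to see what the server process will actually get:

```shell
# Show the current per-process open-file limit for this user
ulimit -n
```

To raise it persistently on Linux, add lines such as `mysql soft nofile 8192` and `mysql hard nofile 8192` to /etc/security/limits.conf (the 8192 value is an assumption); on systemd-based installs, set LimitNOFILE in the mysql service unit instead.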

As with any database, there are hundreds of tuning parameters that you can adjust to optimise the database for your particular schema. MySQL is worse for this than most, as you can also plug in extra engines if you want, e.g. the NDB cluster engine.