We have tables with 22 million rows, and there is no bottleneck in sight; at least none that enough RAM can't fix. In general there is no hard "good" or "bad" size. It depends on the nature of the data, the table engine, and so on.

If you shared more information about what kind of data you are storing, the answer could be more detailed.

My only general advice for big databases is that I would exhaust the hardware options before going into replication and/or sharding for performance reasons (keeping a slave for backup is a different story). You should also know your index-fu and the obvious server switches/options for tuning the database server.
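To illustrate what "index-fu" buys you before reaching for sharding: the sketch below uses Python's built-in sqlite3 module (not MySQL, purely for a self-contained example) with a made-up `events` table, and shows how the query planner switches from a full table scan to an index lookup once a suitable index exists. The same check on MySQL would be `EXPLAIN SELECT ...`.

```python
import sqlite3

# Illustrative sketch only: SQLite stands in for MySQL, and the "events"
# table and column names are invented for the example.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, payload TEXT)"
)
conn.executemany(
    "INSERT INTO events (user_id, payload) VALUES (?, ?)",
    [(i % 1000, "x") for i in range(100_000)],
)

# Without an index on user_id, this filter requires a full table scan.
plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 42"
).fetchall()
print(plan_before)

# After adding the index, the planner uses an index search instead.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE user_id = 42"
).fetchall()
print(plan_after)
```

On tables in the tens of millions of rows, the difference between these two plans is often the entire "performance problem", which is why checking the plan comes before replication or sharding.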

Again: I can say more if you tell me what kind of data you are dealing with.