We have databases ranging from 25 million to 1.2 billion records, and they work pretty well.

A few things to remember:
- always select on indexed columns (keys) — see the first sketch after this list
- keep the keys small
- try not to update the keys...
- never delete records...
- don't paginate with LIMIT offset, count when the offset is large: MySQL still reads and discards all the skipped rows (also covered in the sketch below)
- write to a master DB without indexes, replicate the data to slaves, and read from there (see the replication sketch below)
- run an EXPLAIN on every query you add to your script, to make sure it uses the right keys and doesn't scan the entire table
- get a server with a lot of RAM (RAM is usually the choke point, not the CPU, if you wrote your queries well)
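
To make the key and pagination points concrete, here is a minimal sketch. The table, column, and index names (events, user_id, idx_user, and so on) are made up for illustration, not taken from the posts above:

```sql
-- Hypothetical table; names are assumptions for illustration only.
CREATE TABLE events (
    id         INT UNSIGNED NOT NULL AUTO_INCREMENT,
    user_id    INT UNSIGNED NOT NULL,
    created_at DATETIME     NOT NULL,
    payload    VARCHAR(255),
    PRIMARY KEY (id),
    KEY idx_user (user_id)
) ENGINE=MyISAM;

-- Select on a key: the WHERE column is indexed, so MySQL does not scan the whole table.
SELECT id, payload FROM events WHERE user_id = 42;

-- LIMIT with a big offset is slow: MySQL still reads and throws away the first million rows.
SELECT id, payload FROM events ORDER BY id LIMIT 1000000, 20;

-- Keyset ("seek") pagination instead: remember the last id served and continue from there.
SELECT id, payload
FROM events
WHERE id > 1000020    -- last id from the previous page
ORDER BY id
LIMIT 20;
```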
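And a rough sketch of the master/slave split from the same list. This assumes standard MySQL statement-based replication; the host name, user, password, and log coordinates are placeholders you would take from your own SHOW MASTER STATUS output:

```sql
-- my.cnf on the master (placeholder values):
--   [mysqld]
--   server-id = 1
--   log-bin   = mysql-bin
--
-- my.cnf on each read slave:
--   [mysqld]
--   server-id = 2
--   read_only = 1

-- Run on each slave: point it at the master and start replicating.
CHANGE MASTER TO
    MASTER_HOST     = 'master.example.com',
    MASTER_USER     = 'repl',
    MASTER_PASSWORD = 'placeholder',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS  = 4;
START SLAVE;

-- To keep the write master lean, as the post suggests, add the secondary
-- indexes directly on each slave only; reads go to the slaves, writes to the master.
```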

The product my company provides adds about 1.5 million records per day to its main table. Today, the table contains about 1 billion records. A long-running transaction for us is 0.5 seconds. We have a dedicated database server running MySQL with 4 GB of RAM. We use the MyISAM engine.

Vali gave very good advice. Use EXPLAIN to see how efficient your database requests will be, and always select records based on an appropriate index, as in the example below.
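
For example, a quick check against the hypothetical events table from the sketch earlier in the thread:

```sql
-- Ask MySQL how it plans to execute the query before shipping it.
EXPLAIN SELECT id, payload FROM events WHERE user_id = 42;

-- In the output, watch the "type", "key", and "rows" columns:
--   type = ref, key = idx_user  -> the query is using the index
--   type = ALL, key = NULL      -> full table scan; add or fix an index
```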