I'm trying to log the price of each item every day. With the first design I'd have to add a new column every day, so I've moved that into a separate table. Is there anywhere to go from there, or is that as far as I can (or need to) go for such a small database?

I ask because my second table will end up storing a new row for each item every day, which could get quite large.
e.g.

Replies To: Simple Database design

Re: Simple Database design

Posted 30 August 2012 - 06:05 AM

I would make the prices table

prices
------
id (pk)
item_id (fk to item.id)
price
date

What would change in the items table?

As far as the size of the table goes, it shouldn't be that big a deal. You could always create a cron job/scheduled task that purges rows from the [il]prices[/il] table past some date cutoff, running weekly, monthly, etc.
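A purge along those lines might look like the following (a sketch in MySQL; it assumes the date column is named `date` as in the schema above, and that you want to keep one month of history):

```sql
-- Delete price rows older than one month.
-- Run this from a cron job / scheduled task.
-- Backticks around `date` avoid any clash with the keyword.
DELETE FROM prices
WHERE `date` < DATE_SUB(CURDATE(), INTERVAL 1 MONTH);
```

Adjust the INTERVAL to whatever retention period you actually need.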

Re: Simple Database design

It will help you to visualise the database properly without having to draw diagrams, and hopefully it will speed up your design process.

More on topic: exactly like JOAT said, I think each table should have a primary key. If you ever need to link anything to anything, you can then follow the convention of having [table]_id as your foreign key, which points to the id field of any other table in the database.

As your database grows, and you're creating more and more database tables to cope with the amount of data you now need to store, you'll see just how much this will help.

Note I used price_date as the date field, so there's no conflict with the date keyword. You could use something else if you want; I just try to avoid clashing with reserved words. To actually enforce foreign keys you should use the InnoDB engine as well.
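Putting those points together, the prices table might be created like this (a sketch; the column types and the items table name `item` are assumptions based on the schema posted earlier):

```sql
-- Sketch of the prices table.
-- ENGINE=InnoDB so the foreign key is actually enforced;
-- price_date avoids clashing with the DATE keyword.
CREATE TABLE prices (
    id INT UNSIGNED NOT NULL AUTO_INCREMENT,
    item_id INT UNSIGNED NOT NULL,
    price DECIMAL(10, 2) NOT NULL,
    price_date DATE NOT NULL,
    PRIMARY KEY (id),
    FOREIGN KEY (item_id) REFERENCES item (id)
) ENGINE=InnoDB;
```

The DECIMAL(10, 2) type for price is just an example; size it to your data.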

Re: Simple Database design

Posted 30 August 2012 - 04:29 PM

As for your concerns about speed, don't worry about it. SELECT queries are extremely quick, and with a few well-placed indexes you should be fine.

At my work, we have a table for a client that has 650000 rows of data. Selecting the entire set takes 23500ms, or 0.036ms per record. Selecting on a non-primary non-unique index that returns 27567 rows takes 1001ms, or 0.036ms per record. It's all about how you SELECT (i.e. which filtering clauses you use and which JOIN types you use) as well as appropriate and well-placed indexes.
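On the prices table discussed above, a "well-placed" index would cover the columns you filter on. For example (a sketch; the index name and the example values are made up, and the column names follow the first schema in this thread):

```sql
-- Composite index so lookups by item and date range don't scan the table.
CREATE INDEX idx_prices_item_date ON prices (item_id, `date`);

-- A SELECT that can use that index:
SELECT price, `date`
FROM prices
WHERE item_id = 42
  AND `date` >= '2012-08-01';
```

You can confirm the index is actually used by prefixing the SELECT with EXPLAIN.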

As another example, I have an IP->Country table in my website's database with just shy of 100000 rows. When a user logs in, it compares their IP address against the table; each row has the countryID, the beginningIP, and the endingIP. I placed a BTREE index on beginningIP and endingIP, and now a lookup for a user's IP address takes about 3ms.
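That lookup might look like the following (a sketch; the table name `ip_country` is an assumption, and it assumes the IPs are stored as unsigned integers so they can be compared as numbers):

```sql
-- Find the country for a user's IP.
-- INET_ATON converts dotted-quad notation to an integer,
-- so it can be compared against the stored integer ranges.
SELECT countryID
FROM ip_country
WHERE INET_ATON('203.0.113.10') BETWEEN beginningIP AND endingIP
LIMIT 1;
```

Storing the boundaries as integers rather than strings is what lets the BTREE index answer the range comparison quickly.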