On Mon, 2003-01-27 at 13:33, Josh Berkus wrote:
> Medve,
>
> > Have you got any data (i.e., a percentage) on roughly how much more CPU
> > work is needed with the bigserial type in the queries?
> >
> > I have a log database with 100 million records (the biggest table
> > contains 65 million records), and I use the bigserial data type as the
> > primary key now. The primary key looks like this: YYYYMMDD1xxxxxxx,
> > where the first 8 digits are the date and the x's are the record's
> > sequence number on that day. This way the records are in ascending
> > order. Almost all of the queries contain date constraints (PK LIKE
> > 'YYYYMMDD%'). I'd like to know whether I'm doing it in a stupid way or
> > not. I'm not a DBA expert, so every idea is welcome. If you need more
> > information about the hardware/software environment or the DB
> > structure, I'll post it.
>
> Given that structure, I'd personally create a table with a 2-column primary
> key: one column of type DATE and one SERIAL column. Alternatively, if you
> find that converting DATE to char for output really slows things down, one
> column of INT and one SERIAL column. Either way, the two columns together
> make up the primary key.
>
> I would definitely suggest avoiding the temptation to do this as a single
> column of type CHAR(). That would be vastly more costly than either
> strategy mentioned above:
>
> DATE + SERIAL (INT) = 8 bytes
Ah, cool. I thought DATE was 8 bytes. Should have RTFM, of course.
> INT + SERIAL (INT) = 8 bytes
>
> CHAR(16) = 18 bytes
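
A minimal sketch of the two-column approach Josh describes (table and
column names here are illustrative, not from the original schema):

```sql
-- Hypothetical log table: DATE + SERIAL together form the primary key,
-- replacing the single YYYYMMDD1xxxxxxx bigserial key.
CREATE TABLE log_entries (
    log_date  DATE NOT NULL,
    seq       SERIAL,
    message   TEXT,
    PRIMARY KEY (log_date, seq)
);

-- A date constraint then becomes a plain equality (or range) test on the
-- leading index column, instead of a LIKE 'YYYYMMDD%' match on the key:
SELECT *
FROM   log_entries
WHERE  log_date = '2003-01-27';
```

Because log_date is the leading column of the composite primary key, the
date-constrained queries can use that index directly.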
--
+---------------------------------------------------------------+
| Ron Johnson, Jr. mailto:ron(dot)l(dot)johnson(at)cox(dot)net |
| Jefferson, LA USA http://members.cox.net/ron.l.johnson |
| |
| "Fear the Penguin!!" |
+---------------------------------------------------------------+