Column Length declaration

Dear Experts, as I start designing for a new project, I would like to know whether it is better to allocate double the anticipated maximum data length, or to declare a length just a little above the anticipated maximum. Or is it simply better to declare a much longer length without worrying about it at all? Are there any implications of declaring overly long varchar columns? Int and char are understandable, though. Thank you in advance.

Err slightly on the short side. It is safer to loosen a constraint later than to tighten it, and length is a constraint. Screen and report layout widths are based on declared column lengths, so overly large column widths leave a developer bewildered about screen and report designs. Some internal limits are based on actual sizes and others on declared sizes, like little land mines that may eventually blow up.

Like John said, size it to what it should be. Databases grow, tables grow, but columns should not. You should know how large a field is going to be when you create your database/application. For example, a reference might be between 8 and 12 characters, so make it a varchar(12); don't make it a varchar(24) in case, one day, just maybe, you *might* have a 13-character reference.
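As a sketch of that advice (the table and column names here are made up for illustration):

```sql
-- Hypothetical table: the business rule says a reference is 8-12 characters,
-- so declare exactly the known maximum rather than doubling it "just in case".
CREATE TABLE dbo.Orders
(
    OrderID   int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Reference varchar(12)       NOT NULL   -- known business maximum of 12
);
```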

If you're using an IDENTITY column, still choose an appropriate size. If you know a table isn't going to have more than 10,000 records inserted into it, then you could use a smallint. If you want to really cover yourself, create the column as a smallint IDENTITY(-32768,1); you'll have over 64,000 values to play with then.
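That seeding trick might look like this (hypothetical table name for illustration): starting the IDENTITY at the bottom of the smallint range (-32,768) instead of 1 roughly doubles the usable values before the type overflows.

```sql
-- Hypothetical lookup table: seeding at the low end of the smallint range
-- gives the full -32,768 to 32,767 span (over 64,000 values) instead of
-- only the positive half.
CREATE TABLE dbo.StatusCode
(
    StatusID   smallint IDENTITY(-32768, 1) NOT NULL PRIMARY KEY,
    StatusName varchar(50)                  NOT NULL
);
```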

Cue Joe telling me off for defining a numeric column that I have no plan to do maths on.


Thanks John, Thom A and Bill. Thom, your approach seems especially suitable for systems without much memory, looking at it from the memory-grant perspective. Since SQL Server estimates that a variable-length column is half full when sizing the memory grant, what is the magnitude of the impact when the actual requirement turns out to be a little more, because in your example a varchar(12) actually holds 8 to 12 characters, above the estimated 6? I mean, is it a costly operation that may impact performance?
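One way to see the effect being asked about (a sketch, assuming the hypothetical dbo.Orders table above exists and reusing its Reference column): memory-consuming operators such as sorts size their grant from the declared length, not the data actually stored, so the same rows sorted under a wider declaration ask for a larger grant. The grant details can be inspected in the actual execution plan.

```sql
-- The sort below gets a memory grant estimated from the DECLARED length of
-- Reference (roughly half the declared size per row for a varchar), not from
-- the bytes actually stored. Compare the grant in the actual execution plan
-- for varchar(12) versus the same data redeclared as, say, varchar(4000):
-- the wider declaration requests a much larger grant for identical data.
SELECT Reference
FROM dbo.Orders
ORDER BY Reference;
```

So a varchar(12) holding 8 to 12 characters only slightly exceeds its estimate; the costly case is the opposite one, where a hugely over-declared column inflates the grant and wastes workspace memory other queries could have used.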