How long do you plan on your web servers lasting? Let's say you want to offer dedicated servers and the servers you want to offer are about $2000. Obviously you would want to have the server paid off before it becomes so obsolete that nobody wants it. So what do you think that time period is? 3 months, 6 months, 1 year?

I think you can probably make them last about 2 or 3 years - if it's a top spec machine to start with - because when the current user wants a better one, you can offer it out as a cheap low-to-mid spec server. There are always people wanting cheap dedicated servers because they have outgrown shared hosting but don't need a top spec box.

It's interesting you mention "obsolete" - it's the software which drives the hardware, and I just don't see anything coming around the corner which will demand servers way above what's available now. Sure - if you want a single server to run SQL, DNS, mail services plus web, as some small one-man bands do, then fair enough - but most companies have a dedicated server for each "service" and the customers' web servers don't do much.. :-)

Provided the servers are in an environmentally controlled room, the bits that will break are the moving parts - disks, fans etc. I've never had a CPU or mobo go. So bearing in mind IBM do a 5 year warranty on the drives we use, that puts a server's lifespan at what? 5 years? Certainly it's not 6 months.

I'll agree with most of that, but there comes a point where a server is so slow compared to newer servers that it becomes a waste of rackspace. We run our services in clusters, with separate services on each server/cluster - mail, DNS, web, MySQL - and eventually we could end up with 10 lower spec mail servers in 2U cases that could be replaced with 3 or 4, reducing the wasted rack space by quite a lot. Just another angle to consider.

Oh and incidentally I wouldn't call Jumpline a one-man band - and they run MySQL, mail and web services from the same server for their customers.

I am genuinely a little surprised - I've never found CPU to be an issue; in everything I've seen it's always disk capacity that prompts an additional server. Then again, because we only use SCSI, the capacity of our boxes is far lower than, say, an IDE based machine. Makes me laugh - £500 for 75Gb SCSI or £200 for 160Gb IDE :-) and since our boxes are RAID1 that's a big "ouch".
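Just to put a number on that price gap, a quick back-of-envelope sketch using the ballpark prices above (RAID1 halves usable capacity, since every byte is mirrored across two drives):

```python
def cost_per_gb(price, capacity_gb, mirrored=False):
    # mirrored=True models RAID1: two drives bought, one drive's capacity usable
    factor = 2 if mirrored else 1
    return price * factor / capacity_gb

# Ballpark prices from above: £500 for 75Gb SCSI, £200 for 160Gb IDE
scsi = cost_per_gb(500, 75, mirrored=True)
ide = cost_per_gb(200, 160, mirrored=True)
print(f"SCSI RAID1: £{scsi:.2f}/Gb   IDE RAID1: £{ide:.2f}/Gb")
```

So in RAID1 the SCSI option works out at over five times the cost per gigabyte.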

That prompts a different question I guess - how many customers do you have per server? Do you keep adding customers to a box until the load reaches your cutoff point, or do you have a policy of, say, no more than 200 per box?

I think perhaps there are different markets. I can understand why the cheaper end of hosting would keep adding users to a box until it was "full", either in terms of disk or CPU/RAM usage - these guys need to maximise the little profit on each deal through large quantities, so for them the spec of a box would be an issue. The companies who offer performance hosting will only allow a set number of users on a box and so would not hit the CPU/RAM limit, thus providing a safety margin which will future-proof a machine a little.

I've never heard of Jumpline - I apologise in advance if my comment seemed rude. I was just making the comparison that people starting out (aka one-man bands) tend to only have a single server, but as they grow they split services over many servers to spread load, increase resilience, and get out of the "eggs in one basket" syndrome.

With regards to your statement about users/load per box, we generally have around 50 accounts per machine - bear in mind, some of these are reseller accounts, and the number I stated doesn't include reseller-created accounts. We try to ensure the server load remains around the 0.50 mark.

Fair dues, I can understand your points. We split our services over multiple servers rather than running them on a single server, so I agree with you. For us, once a server averages around 1.0-1.5 on the loads all day (we're talking dual CPU servers here) then it's time for a new server in the cluster.

Personally I can't see us running out of disk space on our web servers - with a 240Gb RAID5 array there is plenty of space to go around.

Jumpline are a large US web host; they host over 20,000 accounts (most likely a lot more now, that's an old figure from some time back). Also, as far as I know there are a great many large-ish web hosts out there that still use the single server model, where customers on a particular server use MySQL, mail and web services from that server - the reason being that they use CPanel and it pretty much forces them to do so.

[edit]
One thing I have to add about server loads: a common misconception is that a server load of 1.0 on a single CPU machine means 100% utilisation of the CPU - it doesn't, it's more like 50% CPU. All things being predictable and equal, a load of 2.0 on a server would be roughly 67% usage, worked from (server load / (server load + 1)) * 100
[/edit]
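The rule of thumb in the edit above can be sketched in a few lines. This is just the approximation stated there, not an exact measure of anything - and note `os.getloadavg()` only works on Unix-like systems:

```python
import os

def est_cpu_pct(load, ncpus=1):
    # Rule of thumb from above: utilisation ~ load / (load + ncpus),
    # giving 50% for a load of 1.0 on a single-CPU box and ~67% for 2.0.
    return 100.0 * load / (load + ncpus)

# 1-, 5- and 15-minute load averages straight from the kernel (Unix only)
one, five, fifteen = os.getloadavg()
print(f"1-min load {one:.2f} -> ~{est_cpu_pct(one):.0f}% CPU (single CPU assumed)")
```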

We hit our "too many eggs in one basket" quantity well before server load becomes an issue. On average, by the time we move onto a new server the load is only about 0.2-0.5!

Then again, we run LiveStats, which is continually looking at logfiles - a constant trickle of almost-nothing CPU load is far better than the stats packages which have to grind through your logfiles every time you want a refresh.

240Gb RAID5 eh? I assume you're using IDE disks for that. I've just built a dedicated SQL Server machine - 2 x 18Gb in RAID1 and 3 x 36Gb in RAID5. Not even 100Gb, and that's in a 2U chassis that will take 6 disks max. Our web servers are all dual 36Gb in RAID1, 1U chassis. Generally we'll have about 20Gb free on a server by the time we move onto the next. Either you must be giving away shed loads of space (which people *are* using) or you have a lot more sites per box, because 240Gb for a web server is *huge*.

I'd be interested to hear how you've found IDE RAID (assuming you are using IDE) and what the comparative costs are on a like-for-like basis, e.g. IDE hot swap enclosures and a caching RAID card. Can't really compare an onboard Promise chipset.

Matt, a load of 1.0 is not 100% CPU utilisation - just watch the CPU % compared to the load and you'll see that.

Yes, for web servers we use IDE RAID - the Adaptec 2400A card (we also have a 3Ware 6x00, can't remember the exact model) - and it's very good, perfect for web servers. We use 4 x 60Gb IBM drives, the reason being that they were only marginally more expensive than 40Gb drives and I'm all for getting value for money.

We're just starting to spec a new DB server, and that will have SCSI RAID simply because DBs do a lot more writes than you generally get on a web server, so SCSI RAID is the way to go (it has faster write speeds). We'll be using an Adaptec card (not sure which to go for yet) with 4 x 36Gb Cheetah drives in a RAID5 configuration, plus an 18Gb IBM drive for backups, local configs, and running the swap space from. As for the Promise cards - don't get me started on them.

If Adaptec is your thing, the 2100S or 2110S is a suitable card for SCSI. This is single channel; most SCSI 1/2U servers only have a single channel backplane for the drives, so there is no point going up to a 3xxxS series card.

If write speed is important to you, remember RAID5 is (in comparative terms) rubbish at writes. RAID0+1 is the fastest, but you need a lot of drives to make it worthwhile.
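A rough sketch of why: for a small random write, RAID5 has to read the old data and old parity, then write the new data and new parity - four disk IOs - where a mirror only does two writes. The penalty counts below are the textbook numbers, not measurements from any particular card:

```python
# Textbook disk IOs per small random write (full-stripe writes avoid
# the RAID5 penalty, but web/DB workloads are mostly small writes).
WRITE_PENALTY = {
    "RAID0": 1,    # one write, no redundancy
    "RAID1": 2,    # write both mirror halves
    "RAID0+1": 2,  # striped mirrors: still two physical writes
    "RAID5": 4,    # read data + read parity + write data + write parity
}

def max_write_iops(iops_per_disk, ndisks, level):
    # Crude ceiling on random-write throughput for the whole array
    return iops_per_disk * ndisks / WRITE_PENALTY[level]

# e.g. four drives at a ballpark 100 IOPS each
for level in ("RAID1", "RAID0+1", "RAID5"):
    print(level, max_write_iops(100, 4, level))
```

So with the same four spindles, the RAID5 array tops out at roughly half the random-write rate of the mirrored layouts.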

A big improvement in bulk speed can be made by placing the non-random IO on separate disks to the database - for SQL Server this means the transaction logs, for Oracle the redo logs. Hence we have RAID1 and RAID5 in the SQL Server machine for optimised throughput and resilience.

Avoid the 15k rpm drives, I've been told - there are heat issues in 1/2U chassis environments. We only use the IBM 10k range. Same goes for Athlon vs Intel chips.

For backup we have tonnes of NAS, which is RAID5... all the machines (Win2k/Unix) can connect to this, which is quite neat. It supports quotas and authentication, so dedicated server customers have their own area and quota.

Promise... it took me a day of faffing trying to get the RedHat 7.2 SMP kernel running with a Promise IDE RAID. Gave up, but luckily Promise emailed me the SMP drivers, which were not available anywhere! All good fun...

Matt - these server loads you are talking about - do you mean the load averages on a machine displayed by something like the top utility?

The load average tells you on average how many processes are waiting for CPU attention - now, this doesn't mean the CPU is 100% utilised; a process could be blocked waiting for a semaphore to be released etc. I've seen a server running loads of 200 and still serving web pages in fractions of a second, so the load average doesn't always relate to CPU utilisation and can be affected by memory more than anything.
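You can watch that divergence yourself by sampling the load average alongside actual CPU time. A Linux-specific sketch - the /proc files used here don't exist on other systems:

```python
import time

def cpu_busy_pct(interval=1.0):
    # Sample the aggregate "cpu" line in /proc/stat twice; the counters
    # are cumulative jiffies, so the delta gives utilisation over the interval.
    def snapshot():
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        idle = fields[3] + fields[4]  # idle + iowait both count as "not busy"
        return idle, sum(fields)
    idle1, total1 = snapshot()
    time.sleep(interval)
    idle2, total2 = snapshot()
    dt = total2 - total1
    return 100.0 * (dt - (idle2 - idle1)) / dt if dt else 0.0

# First field of /proc/loadavg is the 1-minute load average
with open("/proc/loadavg") as f:
    load_1min = float(f.read().split()[0])

# High load plus low CPU% means processes are blocked on disk or locks,
# which is exactly the point about load not equalling CPU use.
print(f"1-min load: {load_1min:.2f}  CPU busy: {cpu_busy_pct():.0f}%")
```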