Fifteen years ago, you weren't a participant in the digital age unless you had your own homepage. Even in the late 1990s, services abounded to make personal pages easy to build and deploy—the most famous is the now-defunct GeoCities, but there were many others (remember Angelfire and Tripod?). These were the days before the "social" Web, before MySpace and Facebook. Instant messaging was in its infancy and creating an online presence required no small familiarity with HTML (though automated Web design programs did exist).

Things are certainly different now, but there's still a tremendous amount of value in controlling an actual honest-to-God website rather than relying solely on the social Web to provide your online presence. The flexibility of being able to set up and run anything at all, be it a wiki or a blog with a tipjar or a photo hosting site, is awesome. Further, the freedom to tinker with both the operating system and the Web server side of the system is an excellent learning opportunity.

The author's closet. Servers tend to multiply, like rabbits.

Lee Hutchinson

It's super-easy to open an account at a Web hosting company and start fiddling around there—two excellent Ars reader-recommended Web hosts are A Small Orange and Lithium Hosting—but where's the fun in that? If you want to set up something to learn how it works, the journey is just as important as the destination. Having a ready-made Web or application server cuts out half of the work and thus half of the journey. In this guide, we're going to walk you through everything you need to set up your own Web server, from operating system choice to specific configuration options.

A snippet from LithiumHosting.com, showing its current shared hosting prices. You could do this, but wouldn't it be more fun to host your own stuff? Of course it would!

Lithium Hosting

The hardware

You'll need some hardware, and fortunately, a personal Web server doesn't require a lot of juice. You can cobble together a server out of spare parts and it will almost certainly be enough to do the job. If you're starting from scratch, consider something like an E-350-powered Foxconn NTA350. Coupled with 4GB of RAM and a 64GB SSD, you can get rolling for about $270. There are cheaper options, too, but I used just such a setup for more than a year and I can attest to its suitability.

If you're cannibalizing or cobbling, you really don't need much. We're going to be using a Linux server distro as our server operating system, so the hardware can be minimal. An old Core 2 Duo or Pentium box gathering dust in the corner should work fine. You don't need more than 1GB of RAM, and in fact 512MB would work without issue. Ten gigabytes of storage is more than you'll ever fill unless you're going to use the server for lots of other stuff as well, so a creaky old hard drive is fine. As long as you can install your Linux distro of choice on it, it will work without issue.

Faking it with a virtual machine

If you don't have hardware available or you don't want yet another computer clogging up your closet, fear not. For home use, a virtual machine works perfectly well. In fact, a VM is exactly what you'd be issued if you go with just about any hosting provider on the planet, unless you pony up some serious dollars to have your own dedicated server. Having your own physical machine is nice, but it's not always practical. Feel free to follow along at home inside a VM.

If you don't already own a desktop virtualization product of some sort (VMware Workstation for Windows, or VMware Fusion or Parallels for OS X), there are free alternatives: VMware vSphere is full-featured and rich, but it requires you to dedicate an entire computer as a virtualization host. The company's older standalone product, VMware Server, is still available but rapidly approaching its end-of-life for support. Windows 8 and Windows Server 2012 come with a built-in hypervisor, but you need to purchase the operating systems. There's also a standalone product, Hyper-V Server, but like vSphere it requires you to dedicate a whole computer to virtualization.

The least-complex, free solution is to download and install VirtualBox. That will run on an existing Windows or OS X or Linux host and will let you run a virtualized Linux server with a minimum of fuss. I won't go through the steps of downloading and installing a virtualization solution, but it's not terribly hard.
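If you go the VirtualBox route, the whole VM can even be scripted with the bundled VBoxManage tool rather than clicked together in the GUI. Here's a rough sketch; the VM name, disk size, ISO filename, and bridged adapter name are all placeholders to adjust for your own system:

```shell
# Create and register a 64-bit Ubuntu VM (the name "webserver" is arbitrary)
VBoxManage createvm --name webserver --ostype Ubuntu_64 --register

# Modest resources are fine for a headless server; "eth0" is an assumed
# host adapter name -- yours may differ
VBoxManage modifyvm webserver --memory 512 --nic1 bridged --bridgeadapter1 eth0

# Create a 10GB virtual disk and attach it along with the installer ISO
VBoxManage createhd --filename webserver.vdi --size 10240
VBoxManage storagectl webserver --name SATA --add sata
VBoxManage storageattach webserver --storagectl SATA --port 0 --device 0 \
    --type hdd --medium webserver.vdi
VBoxManage storageattach webserver --storagectl SATA --port 1 --device 0 \
    --type dvddrive --medium ubuntu-12.04-server-amd64.iso

# Boot the VM and run the installer
VBoxManage startvm webserver
```

Bridged networking makes the VM appear as its own machine on your LAN, which is the closest analog to a real closet server.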

The operating system

I've already given away the operating system choice a couple of times: the correct operating system for building a Web server is Linux or BSD. It's as simple as that. Windows Server is the correct tool for many things (particularly with Active Directory, which frankly is peerless for managing accounts, objects, and policies—Open Directory and other competitors are just laughably bad at scale) but building a Windows-based Web server is like bringing a blunt butter knife to a gunfight. The Internet and the services that make it run are fundamentally Unix-grown and Unix-oriented. Playing in this playground means you need a Linux or a BSD server, full stop.

Technical issues aside, it's also a practical choice. You can acquire a Linux or BSD server installation ISO for free, whereas you need to spend some amount of money to (legally) get a hold of Windows Server, either through TechNet or through buying it outright. You can grab a time-limited trial of Windows Server 2012, but even so, being locked to IIS as a Web server (or dependent on crippled Windows ports of better Web servers) means you'll be playing in the bush leagues. IIS is found running many huge and powerful websites in the world, but it's rarely selected in a vacuum; most big installations of IIS are what they are because of external dependencies or other overriding reasons. We have none of those, and so there is no reason to use IIS.

So, Linux or BSD? That choice is probably an entire article in and of itself, but I'll keep it short: I'll be talking about using a Linux distro (that is, a Unix-style operating system composed of the Linux kernel and a curated collection of tools and packages) instead of a BSD variant (that is, a Unix-style operating system composed of a unified base system and tools and packages). There are a number of reasons for choosing to go with a Linux distro over a BSD variant but the most relevant factor is that Linux distros will be easier to install because of broader, better hardware support.

Ubuntu Server it is!

I'm a fan of grabbing the current long-term release of Ubuntu Server, which as of this writing is version 12.04. A Debian-based distro like Ubuntu has an excellent and well-maintained package management system (which is the primary way you'll be installing software) and the LTS releases will continue to receive regular security patches and kernel updates for many years.
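As a taste of that package management system, the basic apt workflow on Ubuntu looks like the following (nginx is just an example package here; we'll get to actual software choices later):

```shell
# Refresh the package index, then bring the installed system up to date
sudo apt-get update
sudo apt-get upgrade

# Installing server software is a single command
sudo apt-get install nginx

# Search the repositories when you don't know a package's exact name
apt-cache search "web server"
```

Security patches for LTS releases arrive through this same mechanism, so a periodic update/upgrade pass is all the maintenance a hobby box typically needs.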

Going with the server flavor of the distro instead of a desktop flavor means that you'll end up without a GUI on the system when you're done installing it. This is a good thing. Don't fear the command line! It's faster and more efficient to edit a few configuration files to get things up and running than it is to wade through screens and screens of preference panes, clicking on options that you have to visually identify, crippled by the lack of a quick way to search for what you want. GUIs are available for the server distros if you need that crutch, but we're not going to get into them—the command line is the best way to interact with your Web server and that's what we're going to use here.

You don't have to use Ubuntu, though—some folks have philosophical differences with Canonical and their operating system packaging choices, and there are alternatives. Linux Mint is another Debian-based Linux distro which is easy to use, though it doesn't have a separate server variant so its default installation will include a bunch of stuff you won't need for your Web server. You could also go straight to the horse's mouth and grab Debian, though distros like Ubuntu and Mint exist at least in part because of Debian's extremely slow update cadence. The rest of this guide is going to assume you're using Ubuntu Server 12.04 LTS.

338 Reader Comments

Desktop Windows has come with its own webserver built in since Win98 for sure, and probably Win95, but I never used it back then so I'm not sure.

Please correct me if I am wrong, but don't the non-server versions of Windows deliberately limit their TCP/IP performance to make them ineffective as server machines (completely independent of IIS or whatever type of server we're talking about)? This was true a decade ago. Has this changed?

The article presents a way, not the only way, to set up a web server. The way presented is very inexpensive, high performance, easy to replicate, virtualized so it can be moved to other physical hardware, and is what is very standard practice on the internet. IIS web servers, even on powerful hardware, are a minority in the real world. If it has not changed that Windows desktop OS limits TCP socket performance, then IIS on a desktop OS seems more like a toy web server. Nothing wrong with that as a way to learn and experiment. But what the article presents is typical real world practice.

You may allow up to 20 other devices to access software installed on the licensed computer to use only File Services, Print Services, Internet Information Services and Internet Connection Sharing and Telephony Services.

You may want to google epistemic closure. I've used both Linux (Apache) and IIS (occasionally) since the mid-90s, so I'd like to think I have *some* idea. To just host mostly static stuff I would certainly go with a *nix. For a major software development effort I would not. It's true that IIS used to be a disaster, as was Windows, but they've come a really long way while not all that much has happened on the other side of the fence. IMO.

Bullshit. Amazon.com is running a minor software development project on a unix variant.

Definitely. There's no way that I would run a major project on anything but a Unix derivative, and that's because there's usually more to a major project than a dynamic webserver plus database. Clusters of web servers (front-end, back-end), replicated databases (often different datastores for different usage needs, e.g., MariaDB + Cassandra + Redis), reporting, monitoring, scalability, high availability, collaboration service (ZooKeeper), automated build and deploy (maven, custom scripts). All these things, they run best, and for free, on a *nix.

I like nginx, but I think most people push it over Apache out of ignorance. Yes, it is more efficient for static files and very useful in other configurations, but for PHP-based sites, much of the memory impact comes from having multiple copies of the PHP interpreter and libraries loaded, and you'll pay for that whether you are using mod_php with Apache or php-fpm or similar with nginx.

If I were doing a how-to for beginners, I'd suggest Apache, since that's what's most widely documented and understood.
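For what it's worth, the per-worker memory cost described above is easy to eyeball on a running box. A quick sketch, assuming Ubuntu's php5-fpm package (the process name differs on other setups):

```shell
# Resident memory (RSS, in KB) of each PHP-FPM worker process
ps -C php5-fpm -o pid,rss,cmd

# Rough total across all workers, converted to MB
ps -C php5-fpm -o rss= | awk '{sum += $1} END {print sum/1024 " MB"}'
```

Run the same check against Apache's mod_php children (`ps -C apache2`) and you can compare the two deployments directly on your own hardware.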

This is not a problem with Apache per se, but a single-tier web server (which is the default Apache configuration) should never, ever face the public Internet unless it's serving just static content. It will always be too vulnerable to intentional or accidental client starvation or an excess of dynamic requests.

A web server serving dynamic content on the public Internet really needs two tiers: a lightweight front end capable of handling large numbers of connections and a heavyweight back end which processes the requests.

There are a gazillion possible configurations. A stripped-down event MPM Apache with mod_proxy acting as a reverse proxy for a normal worker MPM Apache running mod_php will do just fine; nginx and FastCGI PHP will do fine; etc.
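As a concrete sketch of the two-tier idea, here's a minimal nginx front-end configuration that serves static assets itself and proxies everything else to a back end bound to localhost. The domain, document root, and back-end port 8080 are all illustrative:

```nginx
# Lightweight front end: nginx answers on port 80, handles static files
# directly, and hands dynamic requests to a heavyweight back end that
# listens only on 127.0.0.1:8080
server {
    listen 80;
    server_name example.com;
    root /var/www/example;

    # Static assets never touch the back end
    location ~* \.(css|js|png|jpg|gif)$ {
        expires 7d;
    }

    # Everything else goes to the back-end tier
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

Because the back end only listens on the loopback interface, a flood of slow clients piles up against nginx's cheap event-driven connections instead of tying up expensive PHP workers.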

You can just use Varnish as a cache instead of this complication. When you need performance, cache and shard. That said, I agree with you and the OP about Apache over nginx, for its good documentation and for being good enough for almost everyone. Setting up your own personal server is a great way to learn nginx if you've already used Apache professionally. So, either/or.

As for future entries in the series, I'd like to see a recommendation on keeping the database and application servers on different boxes or VMs. There is additional overhead in terms of setup and resources, but it often pays off when troubleshooting issues in the long run. Keeping these in separate VMs also allows easier expansion later on if the sites do become popular.

I'd planned on describing the database and application setups as all going on the same server (and using unix sockets to communicate where applicable instead of TCP), but I'll absolutely mention spinning them off onto separate physical boxes or VMs as an option, with the pros and cons of each.
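For the curious, "unix sockets instead of TCP" for an nginx + php-fpm pairing looks something like the fragment below. The socket path is a conventional Debian/Ubuntu location, but treat the specifics as an assumption for your own setup:

```nginx
# In the php-fpm pool config (e.g. /etc/php5/fpm/pool.d/www.conf), set:
#   listen = /var/run/php5-fpm.sock
# Then point nginx's FastCGI handler at that socket instead of a TCP port:
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php5-fpm.sock;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```

On a single box this skips the TCP stack entirely, which shaves a bit of latency and removes one listening network port; the tradeoff is that a socket can't be reached from a second machine, which is why splitting tiers onto separate hosts means switching back to TCP.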

Quote:

You can just use Varnish as a cache instead of this complication. When you need performance cache and shard.

I've got Varnish running (and have blogged a lot about it), but I'm debating whether or not to conclude this series with a "How to set up Varnish Cache" article. It might be a little too esoteric for most. I mean, if there's interest, I'm happy to get into it, but it's a little fringe.

How do you set up a secure web server? Don't use the dev branch of the web server product you are using. Do use a virtual hosting provider which specializes in preconfigured, secure by default, server images.

Want to run an insecure hobby website in your closet and don't care if that box is hacked? Follow Lee's advice, but put it on the other side of a hardware firewall, enable the firewall on all the other machines in your house, and enjoy paying $20 extra a month in electricity bills.

If you go *nix only without saying IIS is pretty robust/secure out of the box, then it's really not a good article imo.

Security and robustness can't be discussed in a vacuum, though. The Internet is fundamentally a Unix-y construct, and if you want to learn about how it works, doing it with *nix technologies is the way to go. A *nix operating system is absolutely the right tool for the job here. (Though there's a holy war to be fought over whether running a web server based on a Linux distro is better than a web server based on a BSD, and Nginx is really a BSD-minded application from BSD-minded folks.)

Besides, I can't think of a way an average person can get up and running with a Windows box + IIS without some outlay of money (disregarding mooching a copy of Windows from someone else). Doing it with Linux in a VM costs zero dollars.

Agree with your point about the value of doing this via something Unix-y... but if one is simply "experimenting" with a low-volume web server and wanted to use Windows + IIS, one could do it for free (and legally) with the Amazon Web Services (AWS) free tier. Not the most powerful box in the world, but I use it when teaching classes to give students access to their own Windows Server 2008 (and now 2012) box.

Want to run an insecure hobby website in your closet and don't care if that box is hacked? Follow Lee's advice, but put it on the other side of a hardware firewall, enable the firewall on all the other machines in your house, and enjoy paying $20 extra a month in electricity bills.

Wrong on Every. Single. Count. BZZZZZZ.

Taking basic precautions means that you're as safe from malicious folks as you are at any hosting provider. How exactly do you think a "hack" happens? Modern web servers like Nginx and Apache have very, very few undiscovered vulnerabilities, and whether you're at a hosting facility or in your closet, you're equally vulnerable to those (and someone with an undisclosed Apache or Nginx vulnerability is likely to be a state-sponsored hacker anyway, and they have more important things to do than poke at hobbyist sites). The operating system itself may or may not have holes, but limiting exposure to the internet to ports 80, 443, and maybe 22 takes care of all of them. sshd is mature and has no known vulnerabilities; the only way your box can be compromised through that route is through a stolen password & key, or through brute force, which we've done everything we can do to guard against (and, again, whether you're at home or hosted somewhere, you're equally vulnerable).
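For reference, the port lockdown described above takes a handful of commands with ufw, Ubuntu's front end to iptables. A sketch using the ports from the comment (adjust to taste):

```shell
# Default-deny inbound traffic, allow everything outbound
sudo ufw default deny incoming
sudo ufw default allow outgoing

# Open only the three ports the server actually needs
sudo ufw allow 80/tcp    # HTTP
sudo ufw allow 443/tcp   # HTTPS
sudo ufw limit 22/tcp    # SSH, with built-in brute-force rate limiting

# Turn the firewall on and verify the rules
sudo ufw enable
sudo ufw status verbose
```

The `limit` rule temporarily blocks an address that attempts six or more connections within thirty seconds, which blunts exactly the brute-force scenario discussed here.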

Introducing PHP and the ability to execute scripts and generate dynamic content brings additional vulnerabilities, but they can all be easily mitigated. You don't have to be some kind of genius sysadmin God to keep out hackers--you just have to not be an idiot and not run four-year-old WordPress or MediaWiki.

The little Foxconn box I recommended draws less than 20 watts under the kind of low load outlined here. That's not even noise, but if it's too high for your taste, you can do better.

You know what I get the most break-in attempts from? IP addresses that resolve back to offshore hosting companies and university computers. The myth that any home server you set up will be h4x0r3d is exactly that--a myth. It's given legs by lazy or ignorant folks who set something up and leave their pants down.

Again, by all means, if you want to do something serious, do it hosted. You'll be better off that way, if for no other reason than you won't have to worry about backups and power outages and operating system updates. But saying "IF U HOST IT AT HOME U WILL GET HACKED LOLZ" is just fucking stupid.

FWIW, my closet server runs five sites (including the chronicles of george) and has weathered a couple of Ars front page links.

OMG Lee I'm dying here at work trying not to laugh out loud. This is hilarious!

Great article, thanks! I agree with the approach you took, but I also think a bit should be mentioned about IIS, for multiple reasons. One is the earlier comment that said something like "Thanks, but this is way over my head," in which the commenter expressed interest but was probably turned off by how foreign this process is to people who only use Windows.

Another reason is that some people may be interested in .NET already, and may already have their own Windows host, but want to explore building a test server at home or self-hosting what they already have.

With one automatic web install on my Windows 7 machine, I installed IIS 7.5 with a firewall set up and SSL enabled, FastCGI PHP set up, and ASP.NET plus SQL Server 2012 set up and ready for use. One click of a button. I run Linux servers from home as well, but setting up nginx, PHP, MySQL, and Mono took me about two days to get the same functionality that took two hours of downloading and installing from the Microsoft site.

Windows Server being insecure is one of those internet myths that no one has been able to prove. Just because Windows has more malware than Linux does not mean IIS is less secure than Apache.

Anyway, the Fortune 500 is pretty IIS-heavy, and the author didn't explain why you can't use Windows Server. Which service exactly is missing? Name the Unix service that Windows Server is unable to offer, or change the article.

I went on a mission to reduce electricity use in my home setup. I found a cheap Atom 525 rackmount at Newegg with multiple Gigabit LAN ports and hooked it up to a VLAN-capable switch. I loaded VMware on the machine (free version) and consolidated my Astaro firewall (free home version) and my Unix box for database, Java CI, and Web/JVM server duties. For large storage, I have a Synology NAS, which ended up providing storage far cheaper than any home-built low-power/small-form-factor solution.

I feel very comfortable putting things on the public internet using that setup, though if it's something other people will hit in any meaningful way, I'll pop it up on a Rackspace Cloud VM for less than $15/mo.

I understand that this was focused on installing Linux, which is a fine choice, and better in some cases, but I don't see how you can say that it's superior or act like it's the only "real" option for hosting your own website. It's certainly not always easier, especially if you've already got a Windows machine up and running, or you're a novice who doesn't feel comfortable without a GUI.

Plus, it depends on what you want to run. I code almost exclusively in ASP.NET. Sure, there's mono, but that's like using Apache on Windows; if you need ASP.NET, it's just a better idea all around to use Windows/IIS. Plus, with the Web Platform Installer, it's stupidly easy to add support for PHP, Python, MySQL, or MS SQL Express.

Also, for whoever said that XP is the most common Windows OS out there, the facts don't seem to agree with you. Every report I can find shows that Windows 7 has pulled ahead, and Windows 7 ships with IIS 7, which is dirt simple to install.

For the tech n00b, or just the guy who doesn't want to mess around in the shell and in config files all the time, the ease of setup and configuration of Windows/IIS is just miles ahead of setting up a Linux web server. The fact that there's a GUI for just about everything, and that it's right in Windows, an OS more people are familiar/comfortable with, gives it a distinct advantage over Linux.

Don't get me wrong, there are a great many cases when using a Linux server for web hosting is the best choice, but it's not always the best choice, nor is it the only, or easiest, choice, as this article seems to imply.

I understand that you can't talk about all options available in one article (at least, not in any reasonable amount of detail), but I think you're doing readers a disservice by steering them away from Windows/IIS instead of simply saying "Windows can also be used, especially in cases x, y, or z, but for this article, I'm going to show you how to set up a Linux/nginx box for web hosting."

I think you've ruled out IIS too quickly. For the typical home user wanting to tinker with a website, if they have Windows, they just need to activate IIS. It comes in most flavors of Windows, not just Server. A novice user could have a website up and running in minutes. It's pretty easy to operate, and there is a lot of support. I especially like the platform installer, which lets you easily install things like WordPress, PHP, and MySQL. All free. You can still build your own secure website without putting too much effort into the underlying mechanicals.

I use Windows, .NET, IIS, SQL, and VB at work and at home when I'm tinkering. I do most of this on Visual Studio, which makes development and deployment ridiculously easy. Is there a good equivalent of this kind of setup for *nix-based servers? Could I still use Visual Studio on my development machine and push to a linux server?


Sure--I hadn't planned on getting into that, but WebDAV is one option, if you want to set it up. Nothing beats the convenience of having everything on a single box, though, and if IIS works for you in that role, then you should keep using it!

Want to run an insecure hobby website in your closet and don't care if that box is hacked? Follow Lee's advice, but put it on the other side of a hardware firewall, enable the firewall on all the other machines in your house, and enjoy paying $20 extra a month in electricity bills.

Wrong on Every. Single. Count. BZZZZZZ.

Taking basic precautions means that you're as safe from malicious folks as you are at any hosting provider. How exactly do you think a "hack" happens? Modern web servers like Nginx and Apache have very, very few undiscovered vulnerabilities, and whether you're at a hosting facility or in your closet, you're equally vulnerable to those (and someone with an undisclosed Apache or Nginx vulnerability is likely to be a state-sponsored hacker anyway, and they have more important things to do than poke at hobbyist sites). The operating system itself may or may not have holes, but limiting exposure to the internet to ports 80, 443, and maybe 22 takes care of all of them. sshd is mature and has no known vulnerabilities; the only way your box can be compromised through that route is through a stolen password & key, or through brute force, which we've done everything we can do to guard against (and, again, whether you're at home or hosted somewhere, you're equally vulnerable).

He's not wrong on every count at all. Firewalling off your web server from the rest of your LAN is a sensible and prudent approach: 80 and 443 in from the web, 22 in from the LAN, nothing else needed for a basic setup. While you're right that very few compromises come from the OS or web server, if you're developing your code on this box, then sooner or later you'll get complacent and leave something vulnerable, or install a module with some known vulnerability. As any Internet-facing web log will show you, there are endless scripts out there probing for vulnerabilities. Usually the worst that'll happen is a defaced site, but if your web server is on your LAN, then there's always the possibility of worse. Setting up a simple DMZ is easy these days. Why take the risk?

Also, starting your post with: "Wrong on Every. Single. Count. BZZZZZZ" is rather childish and damages your credibility, even if the rest was sensible. Rather like the anti-IIS stuff in the original article.

Plus, it depends on what you want to run. I code almost exclusively in ASP.NET. Sure, there's mono, but that's like using Apache on Windows; if you need ASP.NET, it's just a better idea all around to use Windows/IIS. Plus, with the Web Platform Installer, it's stupidly easy to add support for PHP, Python, MySQL, or MS SQL Express.

You absolutely can install those on Windows and I've spent some time with PHP+MySQL on Windows (or even PHP+MSSQL) but there are additional barriers. It's actually more complicated these days than it used to be, IMO, if you add Apache to the mix. You need to match VC++ compilers (and have the proper 32/64 bit runtime) on apache and PHP binaries and that involves using apachelounge.com builds instead of the mainstream ones. If you have to add SSL aware PHP modules, then you have to get binaries built with the same openssl version too. It's not all point and click installers and in some configurations you're manually moving DLLs around so stuff's in the right path. If anything is wrong, it'll crash hard. Sometimes you'll get log entries but other times you need to run the server from cmd.exe to even see what's happening.

If you run .NET based sites, then Windows is the right choice. You can probably shoehorn it into mono, but it's more trouble and most tutorials or documentation you come across will assume a Windows environment. If you're running PHP or Rails or any of the Java or Python frameworks, a unix-y environment is easier for pretty much the same reasons.

I host my own domain from my apartment (Apple Web Server), but for me the real limit is my residential upload speed with Time Warner. I can't serve streaming video, but it's OK for info pages with light graphics.

There is a satisfaction of building and hosting it yourself, I'll admit.

You absolutely can install those on Windows and I've spent some time with PHP+MySQL on Windows (or even PHP+MSSQL) but there are additional barriers. It's actually more complicated these days than it used to be, IMO, if you add Apache to the mix. You need to match VC++ compilers (and have the proper 32/64 bit runtime) on apache and PHP binaries and that involves using apachelounge.com builds instead of the mainstream ones. If you have to add SSL aware PHP modules, then you have to get binaries built with the same openssl version too. It's not all point and click installers and in some configurations you're manually moving DLLs around so stuff's in the right path. If anything is wrong, it'll crash hard. Sometimes you'll get log entries but other times you need to run the server from cmd.exe to even see what's happening.

I've set up dozens of WIMP servers without running into these issues. Especially with the Web Platform Installer, which IME actually makes it quite difficult to screw up setting up a basic WIMP server, even with SSL support and extra modules. If you want to play with PHP on Windows, it's dead simple, unless you want to use Apache.

If you're talking about a WAMP setup, then I agree with you. If you want to use Apache, just set up a Linux box; there are fewer problems than with Apache on Windows, again IME.

Win arguments much that way? Childish. I expect more from a story author.

Pokrface wrote:

But saying "IF U HOST IT AT HOME U WILL GET HACKED LOLZ" is just fucking stupid.

Good, because I didn't say that. I mean seriously, are you 12?

I made two points:

1) Don't use a dev branch; it's more likely to have bugs and/or exploits than the most up-to-date stable branch. This should not be a controversial idea.

2) You are better off going with a provider that specializes in pre-hardened server images than you are installing and configuring your own distribution. You can also buy packages where the latest fixes and patches are applied automatically, so that as exploits are discovered and fixed, you are protected.

Look, you like doing your own thing in the closet. That's great and it works for you, but it's not a safe model for the masses. It's not the most secure way of doing things for the average user, and it's probably not even the cheapest.

Regarding SSH access, you can change the listening port for SSH on your web server to further reduce the number of automated login attempts. Most bots are configured to poke only at port 22, so this simple change will save you a lot of trouble. It can be done by changing the Port option in /etc/ssh/sshd_config.
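Concretely, on Ubuntu that change looks something like the following; port 2222 is just an example, so pick any unused high port:

```shell
# Edit /etc/ssh/sshd_config and change the Port line; sed shown here
# for brevity (2222 is an arbitrary example port)
sudo sed -i 's/^Port 22$/Port 2222/' /etc/ssh/sshd_config

# Restart the daemon to pick up the change (Ubuntu 12.04 uses Upstart)
sudo service ssh restart

# Connect using the new port from now on
ssh -p 2222 user@yourserver
```

Keep your existing session open while you test the new port from a second terminal; if the config has a typo, you still have a working connection to fix it.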

This x1000.

I have a webserver running in my basement, and prior to changing the SSH port I had DenyHosts running; it would add several entries daily. Since I moved the SSH port to something else, I haven't had a DenyHosts entry in longer than I can remember.
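If you want to see just how noisy port 22 really is, failed attempts are recorded in the auth log. A quick sketch, assuming the Debian/Ubuntu log location:

```shell
# Tally failed SSH logins and rank the offending source addresses;
# scans each line for the token after "from" rather than assuming a
# fixed field position
grep "Failed password" /var/log/auth.log \
    | awk '{for (i = 1; i <= NF; i++) if ($i == "from") print $(i+1)}' \
    | sort | uniq -c | sort -rn | head
```

Running this before and after moving the SSH port makes the drop-off in automated probing easy to quantify.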

FYI, the nginx "devel" vs. "stable" difference is mainly about API and behavior stability. Both branches are reliable enough to use in production.

For casual use, there'll be no problem. I've been a casual user of Nginx dev for almost a full year now; I initially switched to get TLS 1.1 and 1.2 support, which stable didn't add until pretty recently.

It's not like running nightly builds, and it's perfectly fine for a tinkerer. If this were an e-commerce site, I'd be a lot more worried. But Nginx dev builds are not unstable wrecks, and they tend not to have significant bugs. There's always risk, though--you're right in that if something is going to go wrong, it'll likely go wrong in dev first, because dev has more features. Still, my anecdotal experience (which is worth exactly what you're paying for it!) has been that dev works perfectly fine.

Well, I've been using Debian unstable as my primary desktop for over 10 years, and it's not an unstable wreck either--despite the name, Debian package maintainers do not package nightly builds or experimental versions into unstable. But now and then, stuff breaks. Sometimes it's a bug. Sometimes it's a behavior change (i.e., a different default), or an API change and a module that wasn't upgraded to the new API yet, or... Now and then, I'll have to fix something by hand after an upgrade, or hold off on an upgrade until something gets fixed. But that's how it rolls, and it works for me.

My hobby web server runs Debian stable though. It's ancient, it's not bug free (there are actually some known unfixed bugs in it). But I know I can just apt-get upgrade the system, with 99.9% blind confidence it won't break anything. And so I do, regularly.

You may have not experienced that with your system. Or maybe you did, but you just fixed it and don't even remember. Maybe you're one of those tinkerers that does not mind dealing with those occasional issues and that is just fine for a hobby server.

But other users may end up being frustrated, leave their systems without updates, and then eventually get hacked. And that isn't fine, even for a hobby server. I've seen plenty of that among fellow Linux enthusiasts.
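One way to keep a neglected hobby box from rotting is to turn on automatic security updates. On Debian-family systems the unattended-upgrades package handles this; the fragment below is a sketch assuming Debian/Ubuntu, and assumes you've installed the package first (apt-get install unattended-upgrades):

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";   # refresh package lists daily
APT::Periodic::Unattended-Upgrade "1";     # apply pending upgrades daily
```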

You absolutely can install those on Windows, and I've spent some time with PHP+MySQL on Windows (or even PHP+MSSQL), but there are additional barriers. It's actually more complicated these days than it used to be, IMO, if you add Apache to the mix. You need to match VC++ compiler versions (and have the proper 32/64-bit runtime) for the Apache and PHP binaries, which means using apachelounge.com builds instead of the mainstream ones. If you have to add SSL-aware PHP modules, then you have to get binaries built against the same OpenSSL version too. It's not all point-and-click installers, and in some configurations you're manually moving DLLs around so everything's in the right path. If anything is wrong, it'll crash hard. Sometimes you'll get log entries, but other times you need to run the server from cmd.exe to even see what's happening.

I've set up dozens of WIMP servers without running into these issues. Especially with the Web Platform Installer, which IME actually makes it quite difficult to screw up setting up a basic WIMP server, even with SSL support and extra modules. If you want to play with PHP on Windows, it's dead simple, unless you want to use Apache.

If you're talking about a WAMP setup, then I agree with you. If you want to use Apache, just set up a Linux box; there are fewer problems than with Apache on Windows, again IME.

Yeah, Apache is where most of the issues come in. The official builds at apache.org still use VC6, while the newer builds of PHP use VC9. So you either use an ancient-ish PHP 5.2 build (which doesn't mesh well with the "secure" concept) or muck around with the less user-friendly apachelounge.com builds.

IMO you still run into issues eventually. Major packages like WordPress or Drupal or MediaWiki tend to assume a *nix environment in their instructions (frankly, even Nginx can be a problem sometimes, since default releases often include .htaccess files). If you need a PECL extension that isn't shipped with the default binaries, getting your environment set up to compile it on Windows is a lot more involved. If you know what you're doing, you can make it all work, but *nix is probably the path of least resistance.

Win arguments much that way? Childish. I expect more from a story author.

Pokrface wrote:

But saying "IF U HOST IT AT HOME U WILL GET HACKED LOLZ" is just fucking stupid.

Good, because I didn't say that. I mean seriously, are you 12?

I made two points:

1) Don't use a dev branch; it's more likely to have bugs and/or exploits than the most up-to-date stable branch. This should not be a controversial idea.

Page 2 of the article, toward the end of the "Installing Nginx" section:

Quote:

We're opting to install the development version of Nginx. Normally, installing development versions of software means installing something unstable and unsupported, but with Nginx the dev versions are fine for personal use. Valentin Bartenev, one of the project maintainers, made this comment earlier in the year on the Nginx mailing list:

Quote:

FYI, nginx "devel" vs. "stable" difference mainly is about API and behavior stability. Both branches are reliable enough to use in production.

joshv wrote:

2) You are better off going with a provider that specializes in pre-hardened server images than you are installing and configuring your own distribution. You can also buy packages where the latest fixes and patches are applied automatically, so that as exploits are discovered and fixed, you are protected.

Look, you like doing your own thing in the closet. That's great and it works for you, but it's not a safe model for the masses. It's not the most secure way of doing things for the average user, and it's probably not even the cheapest.

Page 1, paragraph 3 of the article:

Quote:

It's super-easy to open an account at a Web hosting company and start fiddling around there—two excellent Ars reader-recommended Web hosts are A Small Orange and Lithium Hosting—but where's the fun in that? If you want to set up something to learn how it works, the journey is just as important as the destination. Having a ready-made Web or application server cuts out half of the work and thus half of the journey. In this guide, we're going to walk you through everything you need to set up your own Web server, from operating system choice to specific configuration options.

Rereading the article to see if your points were answered would probably have changed your arguments, as each of your points was anticipated and answered.

This is NOT a system that is safe for high-security use. That is clearly stated in this, the first installment of a multi-part tutorial. Several other items that are important for a really useful home server were glossed over, or simply mentioned with a promise of coverage in a later installment of this series.

Thank you for a good article. However, I believe the days of setting up dedicated servers in this fashion are gone. This is good stuff for sysadmins, but if your goal is to get a website up and running, then you want to avoid having to deal with config files and whatnot. These days it's possible to get shrink-wrapped solutions that take care of all this setup for you while remaining just as flexible, if not more so. It's an oft-repeated task and therefore best left to automation.

A few years ago I went through the whole process of setting up a LAMP stack (installing and configuring Apache, PHP, and MySQL) and in the end realized that using XAMPP would have been a lot simpler. These days the solutions are even better. I believe no server setup should be done on bare metal anymore; it should all be VM-based, either as a guest OS (mostly for experimentation) or as a VM on a hypervisor if you have a dedicated machine.

Some of the solutions I found and experimented with are:

Dedicated-machine hypervisors: Proxmox VE, Citrix XenServer, ESXi. (The free version of MS Hyper-V is command-line only; the server version, if you have it, is good though.)

Server OSes: ClearOS, SmartOS, TurnKey Linux.

And you definitely need a GUI for configuring raw servers. napp-it is one (I don't know if it's available for Linux; or just use ClearOS).

I've got an old Athlon X2 set up with Proxmox VE, and it lets me experiment with all sorts of servers: some for experimenting, some for serving specific websites. You can bypass the whole configuration scenario altogether, for the most part, or just do it once and store that image.

Something low-powered like the Raspberry Pi would be more than powerful enough for a 'personal' website. If you're hosting at home, possibly using a DynDNS service (if your home IP is dynamic), you're not going to be pushing the hardware limits for a long, long time.

A full-blown PC server, which eats electricity for lunch, is completely unnecessary.
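If your home IP is dynamic, keeping your hostname pointed at it usually amounts to a periodic update call to your DNS provider. The cron line below is only a sketch: the update URL, hostname, and token are placeholders, not a real provider's API, so substitute your own provider's actual update endpoint and credentials.

```
# crontab entry: push the current public IP to a dynamic DNS provider
# every 15 minutes. URL and token below are placeholders.
*/15 * * * * curl -s "https://dyndns.example.com/update?hostname=home.example.com&token=YOUR_TOKEN" >/dev/null
```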

I can concur with this. Ever since my experiments with a home server setup, I've become increasingly convinced that I don't want more big machines than absolutely necessary. They consume power, make noise, and occupy valuable space. I am increasingly looking for smaller and leaner solutions. I built a NAS box with dual NICs in a team and WD RE drives, and ultimately ditched it in favor of a simple external USB drive attached directly to the router. The money spent on the router upgrade was well worth it. For more server tasks, a simple low-power machine with a lean hypervisor (e.g., Proxmox VE) would be all that's needed. That gives enough flexibility while being 'practical' for a home setup.

Lee Hutchinson / Lee is the Senior Reviews Editor at Ars and is responsible for the product news and reviews section. He also knows stuff about enterprise storage, security, and manned space flight. Lee is based in Houston, TX.