Posted
by
samzenpus
on Wednesday January 04, 2012 @10:05PM
from the up-and-coming dept.

tsamsoniw writes "With financial backing from the likes of Michael Dell and other venture capitalists, open source upstart Nginx has edged out Microsoft IIS (Internet Information Server) to hold the title of second-most widely used Web server among all active websites. What's more, according to Netcraft's January 2012 Web Server Survey, Nginx over the past month has gained market share among all websites, whereas competitors Apache, Microsoft, and Google each lost share."

The Server in userland may as well be BeOS, ReactOS, Haiku, or, for that matter, even WinNT.

So no, BSD is BSD, MACH is MACH, and MacOS X is Mach running a BSD Server in userland.

** A similar arrangement, and probably inspired by Mach, is the "personalities" of WinNT (for those who remember, there were the DOS personality, OS/2 personality, POSIX personality and Win32 personality).

It gives me such a warm feeling to see that 'SHOP DIRECT' has finally saved enough to upgrade 350 of their servers. Give them another decade on their remaining Windows 2003 boxes and they might be able to save enough to finish the project...

And 'bravo' to the two other companies that could finally afford Windows 2008 by firing a bunch of IT staff to pay for the licenses (for an OS that is already 4 years out of date).

I was wondering, how does Microsoft track your posts so that you can get paid when you post anonymously like that? Can't any of your fellow shills claim that post as their own and take your money? Or do they give you some kind of monitoring software so they can track who posts what or something like that?

The metric "Apple aficionados" use is the one where the iPhone is the top-selling handset. For some reason, you're comparing a phone to an operating system. If you actually compare mobile operating systems, iOS has more share due to iPads and iPods.

The share in Japan, Germany, Russia and many other countries has already been below 4% for years. Even traditionally Microsoft-friendly countries can turn away from IIS: over the last 10 years, the share in France fell from 35% to 5%, in Brazil and Taiwan from over 45% to 15%, and in India even from 65% to 18%.

IIS will probably be able to hold out another 10 years, but in the long term its future is far from rosy.

That article is looking at a type of web serving where the web server uses a substantial percentage of the hardware. There are a huge number of applications in which the number of clients is vanishingly small but there is a strong desire to offer integration with other services, primarily intranets. In other words, IIS is likely to thrive where it always has: in the intranet, where you want a non-standard, value-add web server solution.

Unless they fuck it up pretty horribly, IIS has an assured future as the embedded webserver for default installs of the various Microsoft products that have a web-facing component (i.e. Foocorp installs Exchange, exchange.foocorp.com/owa is going to be providing 'Outlook Web Access' via IIS... ditto with Sharepoint and similar).

If Microsoft has plans for people to actively shell out a nontrivial amount of money just to run a commodity HTTP server, though, they'd better have something good in mind, especial

IIS is less focussed on high traffic web sites now. The main uses are for things linked to other MS services like Exchange and for serving vertical apps developed rapidly in ASP.NET. If you look at those markets it still does well because it is the only option.

You meant that sarcastically, but it isn't entirely unreasonable to drop free web servers and look at profits as a different way of accounting. It does tell us something that, for a long time and with so many good free ones available, IIS, Oracle WebLogic (BEA), IBM's, LiteSpeed, Riverbed... are still sold. And the reason, interestingly enough, is generally the same: integration.

In both the case of iPhone over Android and commercial over free webservers it appears that customers consider integration / ease of inte

Time for some legitimate competition for Apache; it's been a long time. A bit of competition, which they haven't had much of, could help them too. I am curious whether it is easier to configure than Apache, and how well it integrates with a JEE container for serving the static content.

But what a fucking name though. :) I found out it's supposed to be pronounced engine-x. Until I found out I called it enjinx. Reminds me of that movie 'That Thing You Do' where the band called themselves the Oneders at the beginning and everyone called them the 'oh-nigh-ders' or 'oh-need-ders'. Then they got a manager and he forced them to change it to the Wonders because the original spelling looked garbled. So I say, why not just call it EngineX. It still sounds cool and doesn't have that annoying 'I'm trying to look cool' thing going at the same time. Regardless, sounds like a good product.

The only thing that makes me dubious is that they're based in Russia, I hope Putin and his boys don't have a back door into it. But America is starting to look no better than Russia these days in terms of a government that actually cares about the people. Have you checked out the NDAA that Obama signed this week? It lets the American military arrest civilians inside America (heck Fox news AND democrat supporters are all screaming bloody murder about this one). So on second thought I think I'll give this jinx thing a try.

Having never heard of the server in question until this /. article, when I read the summary my mind parsed the name as alternately 'neh-GIN-ex' or 'en-GIN-ex' with 'GIN' pronounced like in 'begin', not like the drink.

Using a solitary 'N' as a syllable at the beginning of a name is ambiguous as to whether the implied vowel should be at the beginning or the end. For example, when I first saw its name printed, I thought the graphics card manufacturer was pronounced 'neh-vid-ee-ah'.

I actually did the test and ported all my local config from Apache to nginx. The config approach is quite different; it took me about three days, in 30-60 minute sessions, to configure in nginx all the things that Apache had working, but the final config was about 20 lines long, as opposed to a much bigger Apache config that has taken years to "master".

Needless to say, I've been happily using nginx for a couple of months now on a local testing env.; still need to port the production sites :)

nginx uses a different strategy for how it utilizes hardware. It is not so much a configuration problem as a density problem. For things like webhosting companies, which are a huge percentage of websites, nginx cuts costs.

I'm glad such a program, well designed and programmed in good old C, is rewarded with trust and confidence from more and more engineers.
I have been using it for two years, serving several professional sites, and the transition from the initial Apache setup was surprisingly smooth.

What I like in particular, compared to Apache:
- fantastic performance gain, in terms of cpu and memory
- maintenance gain: the configuration appears (at least to me) to be more "developer like", and easier to configure/extend with many options
- load balancing is... really a piece of cake

The only drawback I (initially) found was the lack of an embedded PHP module. But using php-fpm happened to be a good alternative, via a local port.
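For anyone curious what that php-fpm-over-a-local-port setup looks like, here is a minimal sketch; the document root, port, and paths are illustrative examples, not taken from the post:

```nginx
# Hypothetical example: nginx handing .php requests to a php-fpm
# process listening on its default local TCP port.
server {
    listen 80;
    root /var/www/example;          # example docroot
    index index.php index.html;

    location ~ \.php$ {
        fastcgi_pass 127.0.0.1:9000;   # php-fpm's usual TCP port
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```

php-fpm can also listen on a unix socket instead of a TCP port; which performs better tends to depend on the setup.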

Most of the time you see a server error, it means that the web server is working but something else (often the database, more often the fastcgi or whatever) has failed. It's when you don't see any error that the web server itself is broken.

Since Apache 1.x is now officially unsupported, OpenBSD has imported nginx into the base system and it will be the included web server in a near future release. I've been using Lighttpd for a few years because it was lighter-weight and easier to configure. Development seems to have stalled in the past couple of years though, and nginx looks like a promising alternative.

You could equally say you see a lot of Apache errors because of broken PHP code, or people sending you mistyped links (hint: that 404 error may not be the web server's fault).

nginx is very often used as a front-end to code written in other systems like node.js, Rails, and so forth: nginx serves static files directly and the other systems serve dynamic content. If the back-end is too busy (or actually broken in some way) nginx won't be able to pull dynamic content from it and will have to report an error instead.

Most sites are configured to hand out a generic 500 "server error" instead of anything more specific like "some fool missed a semi-colon near line 328", since giving out meaningful internal error messages on public-facing interfaces is considered a potential security problem (it can make injection flaws easier to find). But many sites don't have custom error HTML, so those generic messages have "nginx" plainly visible on them. So I can understand some confusion on this point (though the same thing affects other web servers too, obviously).
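Incidentally, replacing that branded default page is a couple of directives; a sketch, with example paths:

```nginx
# Hypothetical example: serve a custom static page for server errors
# instead of the default nginx-branded one.
server {
    listen 80;
    root /var/www/example;

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;   # example location of the page
        internal;                     # not directly requestable
    }
}
```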

I am quite surprised. nginx may be a good product, but it's also lacking a lot of functionality that a web server used as a load balancer or cache should support. For example, it doesn't support HTTP/1.1 to the backend, and thus it can't do name-based virtual hosts on the servers it caches.

I *WANTED* to use nginx for a large multi-tenant website we were building, but it didn't support it.
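For what it's worth, a commonly suggested approach for name-based backends is to forward the client's Host header explicitly; whether that covers the use case above, I can't say. A sketch (the upstream address is an example):

```nginx
# Hypothetical example: pass the original Host header through so the
# backend can do its own name-based virtual hosting.
location / {
    proxy_pass http://127.0.0.1:8080;       # example backend
    proxy_set_header Host $host;            # preserve the requested name
    proxy_set_header X-Real-IP $remote_addr;
    # On nginx >= 1.1.4 you can also ask for HTTP/1.1 to the backend:
    # proxy_http_version 1.1;
}
```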

I'm firmly convinced the main reason IIS is even in the top 10 is because so many large corporations sign secret agreements with Microsoft to get discounted software in exchange for not using "free" or "open source" software. No joke -- I am working at a company right now where it is banned, and the only reason given is either that "info security" said so, or "legal" did. But when pressed, nobody can quite identify why. It's just policy, and nobody questions it. IIS' market share is vastly inflated; if it weren't for these clandestine agreements, I sincerely doubt it would be deployed very often. Even WITH all the MS tech tie-ins, there are too many compelling reasons not to use it. Even Microsoft doesn't use it on its major websites, because it doesn't scale and it is prone to failure.

Bullshit. Microsoft uses it on every site they have. The only reason that "web server identification" surveys like Netcraft say they run Linux is because, like all large websites, they utilise the services of a CDN such as Akamai.

And there are no "secret agreements". Most of the time the company forbids such things is because there is no support, or because there is no ability of the in-house technical support to provide assistance with it. We're a very large IT company here and we have maybe 3 RHEL servers (because Linux was the best option for the task) and a couple of thousand (including virtual) Windows servers. (There's also about 2 Solaris servers, 4 or 5 Oracle Linux servers, a SCO Unix server and 2 or 3 HP-UX servers). None of this is due to any "secret agreements". It's all because there's one person trained to work with Unix based systems, and about 8 to deal with Windows. We utilise quite a large number of open-source packages across our infrastructure if it's the best tool for the job.

And there are no "secret agreements". Most of the time the company forbids such things is because there is no support, or because there is no ability of the in-house technical support to provide assistance with it.

There is a difference between white-listing certain applications because that's what a company decides to support, and adding to this list whenever there is a need, and a blanket ban on free or open source software where it's banned for being free and/or open source.

The first I can totally understand. You can't support just everything.

The second, totally not. There is no reason why a software package becomes easier for an IT department to support just because the source code is not available.

In my last job, we had a client with an all Windows environment. We're talking 2 DCs, a file server, an exchange server and a dedicated IIS server on the other side of the firewall and off the domain.

One day, they decided to revamp their static HTML website (this was a government department trying to justify their existence, IT wasn't exactly at the top of their list). We talked to the outfit doing it, who told us they were using PHP. Great, I thought. We can get rid of an old and outdated Windows server and replace it with a nice, lean little Linux box. Nope, I was told to install the PHP ISAPI module on IIS, because "we were a Microsoft shop", even though this server was quite literally doing nothing but serving up HTML and chewing up an unnecessary Server 2k3 license. So after much fighting, and arguing to explain that we may as well NOT go through the trouble of trying to set up and debug PHP as FastCGI, another guy went behind my back and stuffed up the install, leading to me wasting 3 or 4 hours rolling it back and installing it properly. Anyway, it's all smoothed over, until I get the zip file I've been promised by the "website makers". It was indeed a website: 10 or so DreamWeaver files with the extension renamed to PHP. No Drupal theme, no Joomla install, nothing. -.- God I hate the people in this industry that like to sell themselves as professionals.

If they already had the Server 2k3 license paid for, what reason would they realistically really have to drop it for Linux when they can do what they want with what they have? The deed is already done shelling out for the license, putting Linux on it is just like pouring salt in the wound. Anyway, as you said, the site is just from a government department trying to justify their existence, I doubt it really matters what OS is serving up the content. It's not like they're Google or something.

...go through the trouble trying to set up and debug PHP as FastCGI...

I think it's funny how common it is for anyone who mentions working with Windows professionally on Slashdot to be called out for being inexperienced or some kind of unauthentic system administrator with no real skills, but no doubt there are just as many who consider themselves experienced *nix system administrators who I could make fun of for being inept at basic Windows administration tasks.

Anyway, there are plenty of good reasons that web server should have been a Windows box. Even if it wasn't joined to the domain, by switching that box to Linux they would lose the ability to leverage their existing update (SUS) and backup infrastructure. Also, the cost of a Windows license for a small shop like that would pay for itself probably 3 times over if they had to even try to get some kind of professional support for the Linux box even once.

Also, the cost of a Windows license for a small shop like that would pay for itself probably 3 times over if they had to even try to get some kind of professional support for the Linux box even once.

A Windows license doesn't magically come with professional support. And honestly, if you need professional support for a server, *NIX is going to cost you the same as an equally competent Windows admin.

If you can't handle management of a web server in-house with qualified staff, you should move to a hosted solution. It will cost less regardless of OS choice.

It sounds like it probably should've been hosted by a 3rd party based on the GP's post, but the point I was making about support is this: if I was a small shop without in-house IT and I needed support for a specific issue, then when I open up the yellow pages there are going to be way more options, and less expensive ones, if I need to hire someone to take a look at a Windows server.

I need support for a specific issue and I open up the yellow pages and look at my options there are going to be way more options for me that will be less expensive if I need to hire someone to take a look at a Windows server.

And most of those will be sixteen year old kids who think that the fact that they know how to find the control panel qualifies them to administer a windows server.

If you're looking for qualified, certified, experienced administrators, there are plenty in both worlds. The BSDs and Linux have dominated the server market for a long time; there's a very large pool of talent to draw from.

It's that exact agreement. They don't say "don't use free or open source software" they just say "don't use any of our competition". It throws in the whole microsoft suite (office, sharepoint usually in the face of wikis or better solutions, live365, etc), always with the argument of "we have a MS specialist to help you migrate" (even if that won't fix problems).

But it's all MS, so it all integrates and works together seamlessly, right? And it's all managed centrally using AD, right? I mean, that's got to result in significant savings, right? Oh, and the agreement covers all our computers and all this extra software that would cost us so much separately, so we're really getting a great deal on it. Please, MS Salesman, tell us something that's easy to believe and we'll sign.

Several years ago, GoDaddy switched all of their domain parking to IIS, explicitly to get Microsoft's numbers up. Throw 10,000 CNAMEs pointed at a single machine serving up parking pages, and boom: 10,000 websites running IIS.

Please stay firmly convinced...it will make it easier for me to recognize you as a complete idiot if I ever have to interview you.

More and more I'm finding the best way to recognize talent is to find people who understand how to apply the right technologies to a solution rather than the mindless ".NET is da best!" or "Windoze sucks!" I keep hearing from the typical zealots.

This has probably always been the case. However, using .NET means buying the entire Microsoft stack.

At my last job I wrote an entire back office in Java. When my company merged, the decision was made, over my vehement protests, that we would recode in C# just to support a thick client that was the bread and butter of the traders at the other company. Literally everything had to be moved just because it had been marginally easier to code a desktop app in C# initially.

Microsoft makes some good stuff, they really do. But since MS stuff only works well, or at all, with MS stuff, you may end up taking a heavy dose of shit along with the good.

It's not about faith in the media; it's about faith in the inability of any company that is screwing people over to permanently silence millions of companies from saying anything anywhere. Hell, they couldn't even manage to silence everyone about their patent agreements, and we KNOW they were trying to do that. Yet somehow we are supposed to believe they can successfully silence an exponentially larger group of people (many of whom dislike them) while at the same time screwing them over?

If Microsoft were making such clandestine deals it would be all over the press...

Microsoft was found in court to have made many clandestine deals and none of them hit the press at the time, that I can recall. I find it entirely believable that Microsoft is still today using the same tactics, as the only punishment they ever got for their wrongdoing was a pat on the wrist. Plus paying out $billions in private suits, but Microsoft just regards that as a business license.

The deals were public knowledge. The OEM deals of the time that resulted in the court cases were actually legal until MS was declared a monopoly. Companies have always made such deals and continue to make them, they were not anything special and only become a problem under monopoly rulings.

I've used a load of web servers in the last few years - an early version of IIS when I had only Windows many years back, Apache, lighttpd, thttpd, Netscape web server (showing my age) and various others... but I didn't even know this was out there.

Suppose it just shows how out of the loop I am these days. Computer stuff covers a vast field these days.

I was going to say the same. So I was pretty surprised. From what I am reading, it is more of a "front-end" system for web servers, that does things like caching and load-balancing. So I guess it sort of depends on one's definition of "web server".

I was also going to speculate/wonder if it was one of those "rigged" deals, like a few years back when IIS was declared to be "overtaking Apache" and becoming #1 because "most web sites on the web used it". The actual reason was that GoDaddy (which hosts a vast majority of "parked" domains) was paid off (or "otherwise incented") by Microsoft to switch to IIS. So once you stopped counting every "www." as a "unique site" (99% of those "unique sites" being garbage parked domains), IIS was not the leader.

So, I wonder if some other bizarre statistical work is at-play. For example, does someone like Akamai, who hosts a lot of other people's sites, use Nginx to skew these numbers??

no, this is genuine. it has been steadily gaining popularity over the past several years. nginx is being developed by a russian guy who up until recently was working (as a sysadmin, apparently) for one of the major russian web portals, where nginx originated as an in-house project first but was open-sourced. the guy has now left the company (which has been slowly dying anyway) and incorporated an llc or something, focused on nginx. it was already quite popular in russia 5-6 years ago (when i was still living there).

nginx is an efficient event-driven front-end server, quite often used for loadbalancing in front of traditional apache or tomcat or whatever other backends, but in a simple case of a LAMP server it can be hooked up directly to PHP via FPM or FCGI. config syntax is quite expressive, with quite advanced uri/header-based rewriting capabilities. there is even a built-in Perl interpreter for more advanced use (which tends to be abused by people who forget what being an event-driven server means by sticking logic in there... oh well, people use things like node.js too *shudder*).

The article and summary are misleading, typical slashdot. Typically nginx is used as a forward cache engine, often on the same box as apache. People typically put apache on port 81, and nginx on 80, and configure nginx to cache from port 81...

Doesn't make it the number 2 web server. Yes, perhaps the number 1 cache engine, but it's generally not used as a web server.
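The arrangement described above (nginx on port 80 caching an Apache on port 81) would look roughly like this sketch; the cache path, zone name, and timings are made-up examples, and note that `proxy_cache_path` belongs in the `http` block:

```nginx
# Hypothetical example: nginx as a caching front end on port 80,
# with Apache listening behind it on port 81.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=site:10m;

server {
    listen 80;
    location / {
        proxy_pass http://127.0.0.1:81;   # Apache on the same box
        proxy_cache site;                 # use the zone defined above
        proxy_cache_valid 200 10m;        # cache successful responses
        proxy_set_header Host $host;
    }
}
```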

Not always... for our Rails (and Sinatra) projects, we use nginx as the frontend/static asset server to a (pool of) Ruby-based application servers (mostly Unicorn). There's no Apache anywhere in the mix, and that has greatly reduced my migraines. Perhaps in some situation it makes sense to have nginx as a cache engine or load balancer for Apache, but in my world, nginx usually replaces Apache, rather than supplementing it.

You can easily find appropriate nginx rewrite rules for the major PHP apps like WordPress and Menalto's Gallery2. And for performance's sake it's all in the server config, so there isn't a disk access to read the .htaccess file to figure out if there are rewrite rules that need to be considered.
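For reference, the WordPress-style front-controller rule usually boils down to a single `try_files` line; a sketch:

```nginx
# Hypothetical example: try the file, then the directory, then fall
# back to the front controller with the original query string.
location / {
    try_files $uri $uri/ /index.php?$args;
}
```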

It's even better for apps, like Django, which are designed to keep static file delivery out of the hands of the app server. And configuring nginx to serve static files instead of hitting app servers is a piece of cake.
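A minimal sketch of that split for a Django-style app (the paths and backend port are hypothetical):

```nginx
# Hypothetical example: static assets served straight from disk,
# everything else proxied to the application server.
location /static/ {
    alias /srv/myapp/static/;   # e.g. Django's collected static files
    expires 30d;                # let clients cache assets
}
location / {
    proxy_pass http://127.0.0.1:8000;   # e.g. a gunicorn/uwsgi backend
    proxy_set_header Host $host;
}
```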

And for those really wanting max performance, there are plugins for direct access to memcache/postgres or even writing your web app directly in the config file with something like the lua plugin.

The article and summary are misleading, typical slashdot. Typically nginx is used as a forward cache engine, often on the same box as apache. People typically put apache on port 81, and nginx on 80, and configure nginx to cache from port 81...

You do know there is something called mod_proxy for apache?
Apache can be configured as a proxy or a web server.
Nginx can be configured as a proxy or a web server.
Your point is... what, exactly?
I use nginx and I use it as a pure web server. I do not know what everyone else uses it for, but you can't just go about assuming whatever.

Nginx is a great load balancer for HTTP, which makes it quite suited as a frontend and thus gets it counted by Netcraft. There could be hundreds of Apache servers behind it. E.g. on my boxes Nginx runs as a reverse proxy in front of about 20 different Apache, Tomcat, more Nginx, and other servers that generate some kind of HTML. But these 20 will all be counted as Nginx while they actually run something different. So I believe it is quite hard to say which server actually is the most popular.

Good info, but not entirely complete. What I was told while researching webservers was that nginx also excels at serving static files. Personally I have a PogoPlug v2 serving three static HTML websites (static HTML is generated when I change something to make it look dynamic) plus a few binary files. I've never run Apache or any others, but the resource usage is extremely low, even under some load.

Yep, it's mostly used for front-end duties like connection pooling, load balancing, SSL offloading, gzip, that type of thing. If you're running PHP stuff, it's still debatable whether you want to go FCGI or FPM instead of Apache's built-in module. There are ups and downs in both cases and you'll have to see what works best for your site. At my company we use Nginx up front (with server type obfuscated) for SSL offloading, gzip and connection pooling. From there it goes into a varnishd cache on the same server (stored 100% in RAM) which handles the static stuff. Varnishd then forwards remaining requests to an L7 load balancer appliance type thing which then drops requests to each of 10 web "application" servers, which are a combination of Apache with mod_php, Tomcat and Jetty Java servers. We've also used Nginx as an IMAP proxy and cache and it works quite well for that.
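A stripped-down sketch of that kind of front end, SSL termination plus gzip in front of a cache/backend (the cert paths, backend port, and gzip_types list are illustrative, not the poster's actual setup):

```nginx
# Hypothetical example: SSL offloading and gzip at the edge, proxying
# to a cache or backend on a local port.
server {
    listen 443 ssl;
    ssl_certificate     /etc/nginx/ssl/example.crt;   # example paths
    ssl_certificate_key /etc/nginx/ssl/example.key;

    gzip on;
    gzip_types text/css application/javascript application/json;

    server_tokens off;   # hides the version string (name still shows)

    location / {
        proxy_pass http://127.0.0.1:6081;        # e.g. a varnish cache
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https; # tell backend it was SSL
    }
}
```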

Apache has a good architecture but it's horrible at handling a lot of simultaneous connections and recycling them (that will change in 2.4 but it's not out yet). Also, if you're using mod_php, over time each Apache process will take the total maximum amount of RAM your php process uses, and many of our PHP applications use 128-256M of RAM or more (data management type stuff). So you can run a server out of RAM if you're trying to maximize connections.

Nginx can handle 10K connections on a little box with very little RAM due to the way it threads stuff. It's basically a copy engine and it's very fast. Varnishd can also handle a lot of connections and can serve up content straight from RAM in less time than Apache takes to build a connection. That being said, Apache is reliable, and has, I feel, better logging at the moment and just more of everything. It's a reference implementation. It's actually fine for most purposes, but if you're handling 1000 users simultaneously and they are making 10-20 connections each with various service calls and static downloads, you gotta have something that can pool the connections on the front end and handle static content, or you're going to spend a lot of money on RAM. And if you're serving up static content with Tomcat, Tomcat is absolutely garbage. I think it has to boot the whole JVM to serve up your one file. If not that bad, it's still awfully slow, and it REALLY benefits from caching up front. BTW, Nginx does caching as well but varnishd seemed more mature and elegant.

Now lastly, you can just go out and buy an F5 BigIP and it does all this stuff on specialized hardware (OK, special board, Intel chip) and it's out of the box. But even the little ones are $20K, which is a lot of software dev hours and/or web server/database/storage hardware. Would be nice and fun to have, but if you can't spend the money on hardware (and training!) the nginx/varnishd frontend is pretty much the best setup in my book at the moment. A little complex, but once it's set up you just let it run. I made an internal nginx cache for all our internal sites, including some Java apps (e.g. Jira), and with requests going through the cache everything just flies. If you use Sharepoint on IIS, you would be pretty stupid not to try a cache server up front, it's amazing. If nginx fixed mod_rewrite stuff to be the same as Apache, it would probably be possible to make it into an application server, and we're going to get a test environment set up with php-fpm [php-fpm.org] and see how it fares. We'll see how manageable it is though.

Tomcat is absolutely garbage. I think it has to boot the whole JVM to serve up your one file.

No. When Tomcat starts up, that's your JVM boot right there. It stays running until you stop Tomcat. All the apps and files served by that Tomcat instance are served using the same JVM process. It will spawn extra threads for extra connections when required though.

In fact most other web stuff uses a long running process like that. It's mostly only CGI or 'CGI-like' configurations and PHP that work by starting up fro

The methodology [netcraft.com] for determining "Active Sites" only takes into account the structure of the HTML elements of the page. If the structure of the page stops changing, it's considered not active. JavaScript-heavy sites don't require any HTML structure change to continue to provide changing content.

Good thing I've run both Apache and lighttpd for personal experience. And taught myself C, C++, PHP, Lisp, Perl, Python, and a little bit of Assembly. And MySQL. And how to run Linux from the command line. And... what the fuck am I paying this college for, again?

Yep, this. A CS degree these days is nothing more than a piece of paper that you often need to be considered for a job. It does not (necessarily) teach you the tools for the job. I am in the inverse situation to GP, my university taught only non-MS stuff (Java, Perl, LAMP) and my job now requires me to use Windows Server 2008 and IIS7. I had never even seen IIS before I came here..

I am the main server admin for a very large website that has been running Apache for 10 years. Then, last year, after a period of tremendous growth, we began to encounter serious memory/CPU issues with Apache. I had been researching alternative, light webservers for a while, so after thorough research and testing, we made the transition to nginx overnight with resounding success. We've never looked back! It is very easy to configure, ridiculously scalable and highly extensible. There are plenty of how-to guides and recipes for those moving from Apache. Nginx seemed like a no-brainer. Apache is a great reference server; it has every bell and whistle imaginable, but at a cost. Our site uses PHP, so for those wondering about PHP integration, we use PHP-FPM. I'm generally pretty conservative and slow to change our architecture, but looking back, we made the right choice.

Can you elaborate, please? I used to work at a webhost that used Apache for around 500,000 websites, and memory/CPU was never a problem. (Not for Apache, only for PHP and MySQL.) I often see people claim that Apache is bloated, but I don't understand in what way (except possibly for the config files, which might be extensive but not really bloated, and they don't affect performance).

A lot of people here are talking about how nginx is "only" useful as some sort of reverse proxy or cache engine or something. We haven't used it for that, although it's on our list of things to try at some point as a lot of people seem to have success with it.

We do use it for serving files over HTTP - primarily video gaming-related files, so they range in size from a few meg up to several gig. It generally performs flawlessly, although sometimes struggles under significant load.
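For big downloads like that, the directives people usually reach for are sendfile and the per-connection rate-limiting pair; a sketch with illustrative paths and values:

```nginx
# Hypothetical example: serving large files efficiently while keeping
# any single download from saturating the pipe.
location /files/ {
    root /srv/downloads;      # example location of the files
    sendfile on;              # kernel-level file-to-socket copy
    tcp_nopush on;            # send headers and data in full packets
    limit_rate_after 10m;     # full speed for the first 10 MB
    limit_rate 2m;            # then throttle each connection to 2 MB/s
}
```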

Perhaps they mean websites, because it's impossible to tell how many different web servers host a specific website. If I have 1,000,000 IIS servers behind a load balancer hosting a single website, they would be counted as 1, not 1,000,000.