In general, all releases (development or otherwise) are quite stable.
This site runs the latest development version at all times.
NGINX users tend to be an “early adopter” crowd, so a large segment runs the bleeding-edge version at any given time.
The most important caveat about the development version is that it can occasionally break APIs for third-party modules, since APIs are sometimes changed in the development branch.
Other than that, it gets all non-emergency bugfixes first.

That said, if stability is crucial, it is best to hold off briefly on deploying a new development release; critical bugs tend to surface within the first couple of days (which often results in another release immediately afterwards).
If no new release shows up in two or three days, then it’s likely no one has found any critical bugs.
In the event that you discover a bug, capture a debug log and submit a descriptive bug report!

How do I generate an .htpasswd file without having Apache tools installed?¶

On Linux (and other POSIX systems): given users John, Mary, Jane, and Jim with passwords V3Ry, SEcRe7, V3RySEcRe7, and SEcRe7PwD, you would issue the following to generate a password file named .htpasswd:
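One common approach is a sketch like the following, using openssl's `passwd` subcommand (widely available without any Apache tooling); `-apr1` produces an Apache-compatible MD5 hash that NGINX's `auth_basic` understands:

```shell
# Append one "user:hash" line per user to .htpasswd.
# openssl passwd -apr1 emits an Apache-style $apr1$ MD5 hash.
printf "John:%s\n" "$(openssl passwd -apr1 V3Ry)"       >> .htpasswd
printf "Mary:%s\n" "$(openssl passwd -apr1 SEcRe7)"     >> .htpasswd
printf "Jane:%s\n" "$(openssl passwd -apr1 V3RySEcRe7)" >> .htpasswd
printf "Jim:%s\n"  "$(openssl passwd -apr1 SEcRe7PwD)"  >> .htpasswd
```

The resulting file can then be referenced with `auth_basic_user_file .htpasswd;` in the relevant location block.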

Start by investigating possible causes of the problem. Review Debugging and go through the error log carefully, line by line.

If you can’t determine the cause of the problem through testing, experimentation, searches on the ‘net, etc., then gather all relevant details and clearly explain your problem on IRC or in a note to the mailing list.
(If you are new to interacting with FOSS support communities, please read: How To Ask Questions The Smart Way.)

What most people mean by “similar” in this context is: “lightweight” or “not Apache”.
You can find many comparisons using Google, but most web servers fall into two categories: process-based (forking or threaded) and asynchronous.
NGINX and Lighttpd are probably the two best-known asynchronous servers and Apache is undoubtedly the best-known process-based server.
Cherokee is a lesser-known process-based server (but with very high performance).

The main advantage of the asynchronous approach is scalability.
In a process-based server, each simultaneous connection requires its own thread or process, which incurs significant overhead.
An asynchronous server, on the other hand, is event-driven and handles requests in a single (or at least, very few) threads.

While process-based servers can often perform on par with asynchronous servers under light loads, under heavier loads they usually consume far too much RAM, which significantly degrades performance.
Also, their performance degrades much faster on less powerful hardware or in a resource-restricted environment such as a VPS.

Pulling numbers from thin air for illustrative purposes: serving 10,000 simultaneous connections would probably only cause NGINX to use a few megabytes of RAM, while Apache would likely consume hundreds of megabytes (if it could do it at all).

mod_suexec is a solution to a problem that NGINX does not have.
When running servers such as Apache, each instance consumes a significant amount of RAM, so it becomes important to have only a single, monolithic instance that handles all of one's needs.
With NGINX, the memory and CPU utilization is so low that running dozens of instances of it is not an issue.

The NGINX setup comparable to Apache + mod_suexec is to run a separate NGINX instance as the CGI script user (that is, the user that would have been specified as the suexec user under Apache), and then proxy to it from the main NGINX instance.

Alternatively, PHP could simply be executed through FastCGI, which itself would be running under a CGI script user account.

Note

mod_php (the module suexec is normally used to defend against) does not exist for NGINX.
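A sketch of the proxying arrangement described above (the port, path, and user name are illustrative, not prescribed); the per-user instance would be started separately with its own configuration, running as that user:

```nginx
# Main instance: forwards requests for one user's area to that
# user's own NGINX instance, which runs as the CGI script user.
server {
    listen 80;
    server_name example.com;

    location /~john/ {
        # A second NGINX instance, started as user "john",
        # is assumed to be listening on this port.
        proxy_pass http://127.0.0.1:8081;
    }
}
```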

@location is a named location. Named locations preserve $uri as it was before entering that location.
They were introduced in 0.6.6 and can be reached only via error_page, post_action (since 0.6.26) and try_files (since 0.7.27, backported to 0.6.36).
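A minimal sketch of a named location reached via try_files (the backend address is a placeholder):

```nginx
location / {
    # Serve the request as a file or directory if it exists;
    # otherwise fall back to the named location. $uri is
    # preserved on entry to @backend.
    try_files $uri $uri/ @backend;
}

location @backend {
    # Reachable only internally, via error_page, post_action,
    # or try_files -- never matched directly against a request.
    proxy_pass http://127.0.0.1:8080;
}
```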

For which general use cases is NGINX more appropriate than Squid? (And vice versa...)¶

NGINX is generally deployed as a reverse proxy, not as a caching proxy (like Squid).
The key advantage of NGINX is its minimal RAM and CPU usage under heavy load.
Squid is best applied to cache dynamic content for applications that cannot do it themselves.

As you can see, the example handles unauthenticated e-mail; if you need authentication, check the ngx_mail_core_module documentation on how to achieve it.
Postfix does not support XCLIENT by default, so it is turned off in the example as well.

Next, you need to configure the authentication back end. If you just need to have some sort of pass-through mode towards a single address, you can do so with the following code:
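One way to provide such a pass-through back end is a small HTTP server block that always answers the auth_http request with a fixed destination. This is a sketch: the listen port and the Auth-Server address are placeholders you would replace with your own values.

```nginx
http {
    server {
        listen 127.0.0.1:9000;

        # The auth_http protocol expects Auth-Status, Auth-Server,
        # and Auth-Port response headers. Here every user is
        # "authenticated" and directed to a single backend.
        location = /auth {
            add_header Auth-Status OK;
            add_header Auth-Server 192.0.2.10;  # placeholder backend
            add_header Auth-Port   25;
            return 204;
        }
    }
}
```

The mail block would then point at it with `auth_http 127.0.0.1:9000/auth;`.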

What algorithm does NGINX use to load balance? Can it balance based on connection load?¶

Many users have requested that NGINX's load balancer be able to limit the number of requests per back end (usually to one). While support for this is planned, it's worth mentioning that demand for this feature is rooted in misbehavior on the part of the application being proxied *to* (Ruby on Rails seems to be one example). This is not an NGINX issue. Ideally, this fix request would be directed toward the back-end application and its (in)ability to handle simultaneous requests.
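As for the algorithm itself: by default, NGINX distributes requests among the servers of an upstream block using weighted round-robin. A minimal sketch (the addresses are placeholders):

```nginx
upstream backend {
    # Default algorithm: weighted round-robin.
    server 10.0.0.1 weight=3;  # receives roughly 3x the requests
    server 10.0.0.2;
    server 10.0.0.3 backup;    # used only when the others are unavailable
}

server {
    location / {
        proxy_pass http://backend;
    }
}
```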