
You see, nginx has no trouble saturating a 1 Gbps link on common desktop hardware, and would do 10 Gbps on decent server hardware, leaving plenty of resources for other tasks, provided that other I/O like HDDs can keep up with such speeds. So in fact, if you don't do things horribly wrong, in the real world you end up being I/O-limited anyway, so there's no way to gain more than that without further hardware upgrades, etc.

Not to mention nginx features a very cool cache system, which can save the day if you're slashdotted. You see, serving a static copy is almost instant; running a PHP (or whatever) script is not. This way, average cheap hardware can easily withstand the slashdot effect.
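The idea behind that kind of cache can be sketched in a few lines of Python (my own illustration, not nginx's actual implementation; `render_page` stands in for an expensive PHP/CGI-style script, and the cache directory name is made up): render once, save the static copy, then serve the file on every later hit.

```python
import os
import tempfile
import time

# Hypothetical cache location for this sketch.
CACHE_DIR = os.path.join(tempfile.gettempdir(), "page_cache")

def render_page(name):
    """Stand-in for an expensive dynamic render (PHP, CGI, ...)."""
    time.sleep(0.01)  # simulate script-execution cost
    return f"<html><body>Hello from {name}</body></html>"

def serve(name):
    """Serve a cached static copy if present, else render and cache it."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    path = os.path.join(CACHE_DIR, name + ".html")
    if os.path.exists(path):          # cache hit: just read the file
        with open(path) as f:
            return f.read()
    body = render_page(name)          # cache miss: run the "script" once
    with open(path, "w") as f:
        f.write(body)
    return body
```

Only the first request pays the render cost; every subsequent one is a plain file read, which is why cheap hardware can ride out a traffic spike.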

And yes, it comes with source, so I have it everywhere: up to my ARM-based NAS and MIPS-based router, where low resource consumption counts ten times as much as on x86.

p.s. Lack of source implies vendor lock-in and the inability to choose an OS or CPU architecture beyond those "approved" by the people who build the blob, plus a ton of other artificial restrictions. That's just stupid. FAIL.

Almost any web server can saturate its network link if its role is limited to serving static files.
Apache is rarely the first thing to fall down when a site exceeds its capacity. When was the last time you went to a slashdotted site and actually hit the HTTP timeout? The vast majority of sites return a 500 (when the PHP/.NET/Java middleware tips over) or a "MySQL: Too many connections" error when the DB tips over.

I'm not saying Apache is perfect and nginx is horrible, just that you present them in a manner neither deserves.

So, everything that does CGI is supported, just not as fast as it could be. Read: PHP runs.

How does it compare against Apache, nginx, lighttpd?

I hope we'll find out when Michael does a benchmark. The official PR says it uses far less RAM and is faster at serving static content. It also does C plugins, like G-WAN, if speed is needed. Its script support is lacking, though, since there's no FastCGI right now.

Yes, benchmark it

Would love to see another third-party benchmark. I've been curious about G-WAN for a few months...even thinking about trying it out in production.

If it truly benefits smaller servers, it will be a great win for small businesses (open-source or not). I'm not worried about the future of the platform so much: if it works better now, great! It's so simple to configure that my two-year-old could do it! And if it fails down the road... well, I tried. And I'll spend the five minutes copying my non-server-specific HTML, CSS, and JS back over to a battle-tested web server.

I currently use nginx. Some people are calling G-WAN "marketing brainwash", among other things (not trying to single out any particular comment... just choosing an example). Unless a trusted benchmark can actually show that it's nothing but brainwash, that's just people wishing they were better marketers.

No, nothing was tweaked; there's nothing to tweak in G-WAN.
Tested with: ab -c 100 -n 50000 -k
On: Ubuntu 10.10 x64
-k: keep-alive makes a big difference.
Without keep-alive, ab will open and close a new connection for every request, so you'll actually be testing the kernel's speed at opening and closing connections.
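You can see the same effect outside of ab. Here's a small Python sketch (my own illustration, using only the standard library) that hits a throwaway local HTTP server over one persistent keep-alive connection versus a fresh TCP connection per request:

```python
import http.client
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"   # HTTP/1.1 enables keep-alive by default

    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # silence per-request logging
        pass

# Throwaway server on an ephemeral port, running in a background thread.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

N = 200

# Keep-alive: one TCP connection reused for every request (like ab -k).
t0 = time.time()
conn = http.client.HTTPConnection("127.0.0.1", port)
for _ in range(N):
    conn.request("GET", "/")
    assert conn.getresponse().read() == b"ok"
conn.close()
keepalive = time.time() - t0

# No keep-alive: open and close a new connection for every request.
t0 = time.time()
for _ in range(N):
    c = http.client.HTTPConnection("127.0.0.1", port)
    c.request("GET", "/")
    assert c.getresponse().read() == b"ok"
    c.close()
per_request = time.time() - t0

print(f"keep-alive: {keepalive:.3f}s, new connection per request: {per_request:.3f}s")
server.shutdown()
```

The per-request variant spends most of its time in TCP connection setup and teardown, which is exactly the kernel overhead the post describes, not the web server's serving speed.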