To build uWSGI you need a C compiler (gcc and clang are supported) and the Python binary (to run the uwsgiconfig.py script that will execute the various compilation steps).

As we are building an uWSGI binary with Ruby support we need the Ruby development headers too (the ruby-dev package on Debian-based distributions).

You can build uWSGI manually – all of these are equivalent:

make rack
UWSGI_PROFILE=rack make
make PROFILE=rack
python uwsgiconfig.py --build rack

But if you are lazy, you can download, build and install an uWSGI + Ruby binary in a single shot:

curl http://uwsgi.it/install | bash -s rack /tmp/uwsgi

Or in a more “Ruby-friendly” way:

gem install uwsgi

All of these methods build a “monolithic” uWSGI binary.
The uWSGI project is composed of dozens of plugins. You can choose to build the server core and have a plugin for every feature (that you will load when needed),
or you can build a single binary with all the features you need. The latter kind of build is called ‘monolithic’.

This quickstart assumes a monolithic binary (so you do not need to load plugins).
If you prefer to use your distribution’s packages (instead of building uWSGI from official sources), see below.

Your distribution very probably contains an uWSGI package set. Those uWSGI packages tend to be highly modular (and occasionally highly outdated),
so in addition to the core you need to install the required plugins. Plugins must be loaded in your uWSGI configuration.
In the learning phase we strongly suggest not using distribution packages, so you can easily follow the documentation and tutorials.

Once you feel comfortable with the “uWSGI way” you can choose the best approach for your deployments.

As an example, the tutorial makes use of the “http” and “rack” plugins. If you are using a modular build be sure to load them with the --plugins http,rack option.

The supplied HTTP router is (yes, astoundingly enough) only a router.
You can use it as a load balancer or a proxy, but if you need a full web server (for efficiently serving static files or all of those tasks a web server is good at),
you can get rid of the uWSGI HTTP router (remember to change --plugins http,rack to --plugins rack if you are using a modular build) and put your app behind Nginx.

To communicate with Nginx, uWSGI can use various protocols: HTTP, uwsgi, FastCGI, SCGI, etc.

The most efficient one is the uwsgi one. Nginx includes uwsgi protocol support out of the box.
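For example, a minimal Nginx location that forwards requests to uWSGI over the uwsgi protocol might look like this (the address is an assumption and must match the --socket option you give uWSGI):

```nginx
location / {
    include uwsgi_params;      # standard uwsgi protocol variables shipped with Nginx
    uwsgi_pass 127.0.0.1:3031; # the address uWSGI is bound to with --socket
}
```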

With the previous example you deployed a stack able to serve a single request at a time.

To increase concurrency you need to add more processes.
If you are hoping there is a magic math formula to find the right number of processes to spawn, well... we’re sorry.
You need to experiment and monitor your app to find the right value.
Take into account that every single process is a complete copy of your app, so memory usage matters.

To add more processes just use the --processes <n> option:

uwsgi --socket 127.0.0.1:3031 --rack app.ru --processes 8

will spawn 8 processes.
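The app.ru used in these commands can be any Rack application. A minimal sketch of the Rack contract (the response text is illustrative; a real config.ru would hand the object to Rack’s run):

```ruby
# A Rack application is just an object responding to call(env) and
# returning a [status, headers, body] triple; here, a lambda.
app = lambda do |env|
  [200, { 'Content-Type' => 'text/plain' }, ['Hello from Rack']]
end

status, headers, body = app.call({})
# status is 200 and the body yields "Hello from Rack"
```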

Ruby 1.9/2.0 introduced improved thread support, and uWSGI supports it via the ‘rbthreads’ plugin. This plugin is automatically
built when you compile a uWSGI + Ruby (>=1.9) monolithic binary.

To add more threads:

uwsgi --socket 127.0.0.1:3031 --rack app.ru --rbthreads 4

or threads + processes

uwsgi --socket 127.0.0.1:3031 --rack app.ru --processes 4 --rbthreads 2
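Keep in mind that when multiple rbthreads serve requests inside the same process, any state they share needs synchronization. A hedged Ruby sketch (the constant names are purely illustrative):

```ruby
# Shared state touched by concurrent requests; names are illustrative.
COUNTER = { value: 0 }
LOCK = Mutex.new

# A Rack-style app: without the mutex, concurrent threads could
# interleave the read-modify-write and lose increments.
app = lambda do |env|
  n = LOCK.synchronize { COUNTER[:value] += 1 }
  [200, { 'Content-Type' => 'text/plain' }, [n.to_s]]
end
```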

There are other (generally more advanced/complex) ways to increase concurrency (for example ‘fibers’), but most of the time
you will end up with a plain old multi-process or multi-threaded model. If you are interested, check the complete Rack plugin documentation.

uWSGI has literally hundreds of options (but generally you will not use more than a dozen of them). Dealing with them via command line is basically silly, so try to always use config files.

uWSGI supports various configuration standards (XML, INI, JSON, YAML, etc). Moving from one to another is pretty simple.
The same options you can use on the command line can be used in config files by simply removing the -- prefix.
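As a sketch, the eight-process command line above could become an INI file (the file name is just an example):

```ini
[uwsgi]
socket = 127.0.0.1:3031
rack = app.ru
processes = 8
```

Then start uWSGI with uwsgi --ini config.ini.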

uWSGI is “Perlish” in a way, there is nothing we can do to hide that. Most of its choices (starting from “There’s more than one way to do it”) came from the Perl world (and more generally from classical UNIX sysadmin approaches).

Sometimes this approach could lead to unexpected behaviors when applied to other languages/platforms.

One of the “problems” you can face when starting to learn uWSGI is its fork() usage.

By default uWSGI loads your application in the first spawned process and then forks itself multiple times.

It means your app is loaded a single time and then copied.

While this approach speeds up server startup, some applications could have problems with this technique (especially those initializing DB connections
on startup, as the file descriptor of the connection will be inherited by the subprocesses).
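The problem can be demonstrated with plain Ruby: descriptors opened before fork() are shared with every child. A pipe stands in for a DB connection in this sketch:

```ruby
# Descriptors opened in the "master" (like a DB socket opened at boot)
# are inherited by every forked child.
r, w = IO.pipe
pid = fork do
  r.close
  w.write('from child')   # the child talks through the inherited descriptor
  w.close
  exit!(0)
end
w.close
Process.wait(pid)
message = r.read          # the parent sees the child's writes: the fd is shared
```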

If you are unsure about uWSGI’s brutal preforking, just disable it with the --lazy-apps option. It will force uWSGI to completely load
your app once per worker.

Note that basically every modern Rack app exposes itself as a .ru file (generally called config.ru), so there is no need
for multiple options for loading applications (as there are, for example, in the Python/WSGI world).

Low memory usage is one of the selling points of the whole uWSGI project.

Unfortunately, being aggressive with memory by default could (and we stress could) lead to some performance problems.

By default the uWSGI Rack plugin calls the Ruby GC (garbage collector) after every request. If you want to reduce this rate just add the --rb-gc-freq <n> option, where n is the number of requests after which the GC is called.

If you plan to benchmark uWSGI (or compare it with other solutions) take its use of the GC into account.

Ruby can be a real memory devourer, so we prefer to be aggressive with memory by default instead of making hello-world benchmarkers happy.

The uWSGI offloading subsystem allows you to free your workers as soon as possible when certain patterns match, delegating the work
to a pure C thread. Examples are sending static files from the filesystem, transferring data from the network to the client, and so on.

Offloading is very complex, but its use is transparent to the end user. If you want to try it just add --offload-threads <n>, where <n> is the number of threads to spawn (1 per CPU is a good starting value).

When offload threads are enabled, all of the parts that can be optimized will be automatically detected.
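As a sketch, combining offload threads with static file serving in an INI config (the paths are assumptions):

```ini
[uwsgi]
socket = 127.0.0.1:3031
rack = app.ru
offload-threads = 2
; requests under /static are mapped to the filesystem and, with offload
; threads enabled, served without blocking the workers
static-map = /static=/var/www/public
```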

You should already be able to go into production with just these few concepts, but uWSGI is an enormous project with hundreds of features
and configurations. If you want to be a better sysadmin, continue reading the full docs.