Responses below...
On 4/4/06 5:01 PM, "Stephen Waits" <steve at waits.net> wrote:
> What I'm building is essentially a filter. On certain URIs, Mongrel
> will handle the request completely. On other URIs, I'd like to pass
> that responsibility off to an external CGI process (be it Rails's
> dispatch.cgi, or otherwise). How do I go about that?
>
Pretty simple. I went over the config/mongrel.rb file on IRC but I'll
explain it again real quick for people reading the list.
Mongrel uses the Configurator to simplify wiring up Mongrel and handlers:
http://mongrel.rubyforge.org/rdoc/classes/Mongrel/Configurator.html
There's also the Mongrel::Rails::RailsConfigurator which is a subclass and
adds a few little extras.
All you gotta do is peek into the mongrel_rails script and see how we use
the configurator to hook up Rails. It's very readable and works great.
In your case, you'd basically have your own script load the HttpHandlers you
need and then use a configurator to create listeners with handlers attached
to them.
The other option is to do it all by hand rather than use the Configurator.
If you're looking at doing it this way then at least glance at the
Configurator's code so you can see the standard way all the pieces are
wired together.
Finally there's a way to make handlers act as GemPlugins so they're easier
to load and distribute, but I think you aren't interested in that. If you
are then take a look at lib/mongrel/debug.rb to see all the filtering
handlers there.
For your particular needs you'd just write an HttpHandler that takes a look
at the request.params["PATH_INFO"] and if it matches the CGI's path then
you'll start the CGI. The nice thing is that I've matched the CGI 1.2
specification as closely as possible so the contents of request.params
should be directly applicable to your CGI input.
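A rough sketch of that routing decision, pulled out of Mongrel so it stands alone — the "/legacy" prefix and the helper names are made up for illustration, not Mongrel API. Because request.params already follows the CGI layout, the same hash can be handed to the CGI process as its environment:

```ruby
# Illustrative only: the prefix and helpers are assumptions.
# request.params already looks like a CGI environment hash.
CGI_PREFIX = "/legacy"   # requests under this path go to the external CGI

def cgi_request?(params)
  params["PATH_INFO"].to_s.start_with?(CGI_PREFIX)
end

def cgi_env(params)
  # Keep only String => String pairs so the hash can be passed straight
  # to the CGI process's environment (e.g. via IO.popen([env, script])).
  params.select { |k, v| k.is_a?(String) && v.is_a?(String) }
end
```

Inside a real handler's process method you'd call cgi_request?(request.params) and either handle the request yourself or spawn the CGI with cgi_env(request.params).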
> Second, I'm finding that in several of my handlers I'd like to reuse a
> response. However, I couldn't instance a response because it requires a
> socket. Therefore, I assume it's not meant to work this way. Possible?
>
The handlers can be registered to the same URI, and they'll be run in
registration order. Take a look at the options to Configurator.uri or
HttpServer.register in 0.3.12.2 to see the option that puts a handler at
the front rather than at the end.
Mongrel basically runs each handler in order until either all the handlers
have run or one of the handlers "finalizes" the response object. This lets
handlers short-circuit the request processing (good for authentication).
Each handler gets the same request/response objects in turn, so it can
modify them as needed. Handlers can even reset the HttpResponse object and
access or change its headers, status, and body.
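Here's a toy pure-Ruby model of that chain behavior — this is not Mongrel's actual code, and the class and field names are invented, but it shows the ordering and the short-circuit:

```ruby
# Toy model of the handler chain: handlers run in order until one
# "finalizes" the response, which stops the rest of the chain.
MockResponse = Struct.new(:status, :body, :finalized)

class AuthHandler
  def process(request, response)
    return if request[:user]        # authenticated: pass through
    response.status = 401
    response.body = "denied"
    response.finalized = true       # short-circuit here
  end
end

class AppHandler
  def process(request, response)
    response.status = 200
    response.body = "hello #{request[:user]}"
  end
end

def run_chain(handlers, request)
  response = MockResponse.new(nil, nil, false)
  handlers.each do |h|
    h.process(request, response)
    break if response.finalized
  end
  response
end
```

With no :user in the request the AuthHandler finalizes a 401 and the AppHandler never runs; with a user it passes through and the AppHandler fills in the 200.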
Now, when you say "reuse" the response, if you mean keep it around for a
future request then you'll have problems. The handler chain is particularly
tuned so that when the request is processed the response object is basically
dead. You could keep it around, but the socket is closed and writing
anything to it would not work. It'd basically be read-only.
But since you can access all the contents of the response object you could
make a special dup method to do what you want.
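A dup along those lines would just copy the readable fields into a fresh object. Sketched here with a stand-in struct, since the real HttpResponse wraps a socket you can't fabricate:

```ruby
# Stand-in for the readable parts of a finished response; the real
# HttpResponse needs a socket, so we copy only the plain data out of it.
SavedResponse = Struct.new(:status, :header, :body)

def snapshot(response)
  # dup the mutable pieces so later edits don't touch the original
  SavedResponse.new(response.status, response.header.dup, response.body.dup)
end
```

The point is only that the headers, status, and body are all readable after the request, so a deep-enough copy preserves everything except the dead socket.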
> Finally, am I right in thinking that you can run a multi-threaded
> server, albeit with Ruby's cruddy threads, by doing something like this:
>> h = Mongrel::HttpServer.new('0.0.0.0', '3000')
> [h.run, h.run, h.run].each { |t| t.join }
>
Nah, that's not necessary. Mongrel is already heavily threaded. You just
do one run and it'll spawn off as many threads as it can to handle any
requests it gets. As a matter of fact, I'm curious why you thought it
*wasn't* running with threads since that's a very important thing to know.
Did you read something? In the Rails stuff Mongrel locks Rails prior to
running the Rails Dispatcher, but otherwise the rest of Mongrel is all
threads all the time. Was that it?
Now, if you want to live on the very edge and you're on Unix you can try
the fork trick to get real multi-processing. The fork trick is basically:
you create the HttpServer, get its socket, and then right after you've
established this listening socket you use fork to create multiple listening
processes off that one socket.
How it works is that when a client connects, the OS has to pick one of the
processes waiting on that socket. There's usually some locking and
juggling, but the OS will typically pick the first process that can accept
the client. Since many processes listen on this one socket you get the
best of both worlds: fast in-process threads through Ruby and
multi-processing from the OS.
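A minimal sketch of the trick with a plain TCPServer standing in for Mongrel's socket (Unix only; each worker here handles a single connection and exits, just to keep the example short — a real one would loop on accept):

```ruby
require 'socket'

# One listening socket, shared by forked workers; the OS hands each
# incoming connection to exactly one process blocked in accept.
server = TCPServer.new('127.0.0.1', 0)   # port 0 picks a free port
port = server.addr[1]

pids = 2.times.map do
  fork do
    client = server.accept               # all workers wait on one socket
    client.write("handled by #{Process.pid}\n")
    client.close
    exit!
  end
end

# Parent: make two connections; each is served by some worker.
replies = 2.times.map do
  sock = TCPSocket.new('127.0.0.1', port)
  line = sock.gets
  sock.close
  line
end

pids.each { |pid| Process.wait(pid) }
server.close
```

The listen backlog means connections queue up even before a worker calls accept, which is why the parent can connect immediately after forking.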
Now, before you go running into utopia shooting your leg off, there are a
few warnings:
1) Don't start 100 processes thinking this will make things faster. You're
using an N:M thread model so you only need a few processes per CPU to get
the best throughput. Usually I do about 4-8 per CPU.
2) It's not that reliable with Ruby. Forking seems to mess with a lot of
Ruby's internals, so the locking you'd normally get from a nice C program
doesn't work the same.
3) You can't restart these processes easily without also writing a bunch of
monitoring and health check code. As you go down this rabbit hole you'll
start to realize you're just reinventing the machinery that a good proxy
server, mongrels on each port, and your OS's normal process management
already give you.
Otherwise, give it a shot and see what you get. If you can iron out the
stability problems then I'll consider adding it as an option to HttpServer
to simplify things.
Zed