On 12/21/05, Joel Reymont <joelr1 at gmail.com> wrote:
> I don't want any kind of locking, true. I need all bots to respond in
> time otherwise the poker server will sit them out. Eliminating the
> timeout on pickling does not eliminate the timeout overall, it just
> passes it to a different place.
>
> One thread will go through serialization quickly but it will be too
> late by the time it sends a response to the server since it waited a
> few seconds for the chance to have a go at serialization. I'm trying
> to build 6.5 right now (having trouble, you might have guessed ;))
> and will positively study the scheduler. I do not believe thread
> priorities are supported in GHC, though.
>
> I had a thought about using continuations but then I would also have
> to do selects on a few thousand file descriptors myself. Then I would
> have to decide which continuation to run based on priorities. I might
> as well patch the GHC scheduler to do what I need. Alternatively, I
> can just throw in the towel and rewrite the app in Erlang.
>
> I haven't made a firm decision yet, I think I will make my records
> storable first as you can't get "any closer to the metal". If that
> does not work then I will just give up. I do have other apps to write
> in Haskell but it will be a pity if this one does not work out.
>
I've only skimmed through this, so I may miss the point, but it sounds
like a latency vs bandwidth discussion.
Let's say you push through 5000 requests in one second (i.e. you start
5000 processes and exactly one second later all 5000 complete
simultaneously). Now, 50000 requests in ten seconds is exactly the same
throughput, but if your timeout is three seconds, you'll have problems.
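To make the arithmetic concrete, here is a small sketch (the 5000-per-second figure is the hypothetical number from the paragraph above, not a measurement): under fair round-robin scheduling every job finishes only at the very end of the batch, so each job's latency equals the whole batch time even though throughput never changes.

```haskell
-- Assumed per-request CPU cost, derived from the example above:
-- 5000 requests completing in one second means 1/5000 s of CPU each.
perJob :: Double
perJob = 1 / 5000

-- Under fair round-robin, no job finishes until they all do, so the
-- latency of every job is the total batch time.
latency :: Int -> Double
latency n = fromIntegral n * perJob

main :: IO ()
main = do
  print (latency 5000)   -- 1.0  second: everyone beats a 3 s timeout
  print (latency 50000)  -- 10.0 seconds: everyone misses it
```

Throughput is 5000 requests/second in both cases; only the latency differs, which is exactly why raising concurrency past a point makes everything time out at once.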
So your problem is that you do only a tiny bit of work for each
process over and over: think of the scheduler looping through the
processes, giving each one a tiny slice of time, again and again.
It may take ten seconds for any individual process to complete, but
the overall throughput is still the same.
So when you increase the number of processes, you won't see just the
additional processes time out; ALL of them (or at least many) will.
My suggestion is this: find out how many processes can be serviced at
one time without being timed out (i.e. find a good compromise
between latency and bandwidth), then wrap the computations in a
semaphore holding exactly that many resources.
I think you probably want this number to be somewhere around the
number of actual CPU cores you have. Having a process' computation
wait for 99% of the timeout to start and then complete in the
final 1% is no worse than having it slowly compute its result for the
whole duration of the timeout.
The difference is that if you run out of CPU juice, only some of the
processes get hurt (they time out before they start), instead of all of
them (the time it takes to compute each one exceeds the timeout
because the CPU is spread too thin).
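The semaphore idea above can be sketched with GHC's `Control.Concurrent.QSem`; everything here is illustrative (the slot count, the bot count, and `withSlot` are my names, and `threadDelay` stands in for the real serialization work):

```haskell
import Control.Concurrent
    (forkIO, threadDelay, newEmptyMVar, putMVar, takeMVar)
import Control.Concurrent.QSem (QSem, newQSem, waitQSem, signalQSem)
import Control.Exception (bracket_)
import Control.Monad (forM_, replicateM_)

-- Run an action while holding one slot of the semaphore; bracket_
-- guarantees the slot is released even if the action throws.
withSlot :: QSem -> IO a -> IO a
withSlot sem = bracket_ (waitQSem sem) (signalQSem sem)

main :: IO ()
main = do
  let nSlots = 4    -- assumption: tune this near your CPU core count
      nBots  = 20   -- hypothetical number of bots
  sem  <- newQSem nSlots
  done <- newEmptyMVar
  forM_ [1 .. nBots :: Int] $ \i ->
    forkIO $ withSlot sem $ do
      threadDelay 10000   -- placeholder for the real serialization work
      putMVar done i
  -- wait until every bot has reported in
  replicateM_ nBots (takeMVar done)
  putStrLn "all bots served"
```

With this shape, at most `nSlots` serializations run concurrently, so each one finishes quickly once started; the remaining bots queue on `waitQSem` rather than all crawling along together.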
/S
--
Sebastian Sylvan
+46(0)736-818655
UIN: 44640862