Simply put, I derive the health of the upstreams from
ngx_http_upstream_rr_peer_t::fails and
ngx_http_upstream_rr_peer_t::max_fails: if fails >= max_fails, I consider the
server dead; otherwise I consider it back up.
2009/6/11 Michael Shadle <mike503 at gmail.com>
> On Wed, Jun 10, 2009 at 2:45 PM, merlin corey <merlincorey at dc949.org>
> wrote:
> > How often do you really expect servers to go up and down? I think you
> > are correct, though, HUP can take a bit of time/resources. My point
> > is, are you really having upstreams die constantly? Seems like you
> > would have much worse problems than what it takes to HUP at that
> > point...
> In an infrastructure with 10s or 100s of servers, in theory you
> could have one going up or down at any time.
> Look at Amazon's whitepaper about Dynamo, or at how Google addresses the
> whole "commodity" issue. Things will go up and down at any time, and
> you should handle that gracefully. nginx is almost capable of doing so
> (mid-transfer I don't think it would, unless the client re-issued the
> request with a range offset), but with the try-next-upstream approach
> it already handles that gracefully...
> I'm looking to have a solution in place which can scale and is "set it
> and forget it" - a HUP may be a lot of work, especially if nginx is
> the frontend for so many connections/servers. I don't know. I guess
> Igor/Maxim would be the most knowledgeable about exactly what a HUP
> will do to all of that...
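For context, the try-next-upstream behaviour mentioned above is typically configured with max_fails/fail_timeout on the upstream servers plus the proxy_next_upstream directive. A minimal sketch (the upstream name and hostnames are placeholders, not from the original thread):

```nginx
# Hypothetical upstream pool; hostnames are placeholders.
upstream backend {
    # Mark a peer down after 3 failures within a 30s window;
    # nginx will try it again after the window expires.
    server app1.example.com max_fails=3 fail_timeout=30s;
    server app2.example.com max_fails=3 fail_timeout=30s;
}

server {
    location / {
        proxy_pass http://backend;
        # On error or timeout, retry the request on the next
        # peer instead of returning a failure to the client.
        proxy_next_upstream error timeout;
    }
}
```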