I have a few use cases that I'm wondering if ZooKeeper would be suitable for and would appreciate some feedback.

First use case: Distributing work to a cluster of nodes using consistent hashing to ensure that messages of some type are consistently handled by the same node. I haven't been able to find any info about ZooKeeper + consistent hashing. Is anyone using it for this? A concern here would be how to redistribute work as nodes come and go from the cluster.
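As a rough, ZooKeeper-agnostic sketch of the idea (the node names, key names, and virtual-node count here are invented for illustration), a consistent hash ring with virtual nodes might look like the following. The property that matters for this use case: when a node disappears, only the keys that node owned are reassigned.

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent hash ring with virtual nodes."""

    def __init__(self, nodes, vnodes=100):
        self._ring = []  # sorted list of (point, node)
        for node in nodes:
            for i in range(vnodes):
                self._ring.append((self._hash(f"{node}#{i}"), node))
        self._ring.sort()

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        # Walk clockwise to the first ring point at or after the key's hash.
        points = [p for p, _ in self._ring]
        idx = bisect.bisect(points, self._hash(key)) % len(self._ring)
        return self._ring[idx][1]

ring = HashRing(["nodeA", "nodeB", "nodeC"])
before = {k: ring.node_for(k) for k in ("msg1", "msg2", "msg3", "msg4")}

# Drop nodeB and rebuild: only keys that hashed to nodeB should move.
ring2 = HashRing(["nodeA", "nodeC"])
moved = [k for k, n in before.items() if n != ring2.node_for(k)]
assert all(before[k] == "nodeB" for k in moved)
```

In the ZooKeeper setting, the live node set would come from ephemeral znodes under some membership path, and each client would rebuild the ring on a membership watch firing.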

Second use case: Distributed locking. I noticed that there's a recipe for this on the ZooKeeper wiki. Is anyone doing this? Any issues? One concern would be how to handle orphaned locks if a node that obtained a lock goes down.

Third use case: Fault tolerance. If we utilized ZooKeeper to distribute messages to workers, can it be made to handle a node going down by re-distributing the work to another node (perhaps messages that are not ack'ed within a timeout are resent)?

I am trying to use the lock recipe for leader election and can share my findings/sample code in some time. Regarding your query about orphaned locks - these are sequential ephemeral znodes, which I believe are automatically removed from ZK once the session breaks.
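For reference, the wiki lock recipe boils down to: create an ephemeral sequential znode under the lock path; you hold the lock iff your sequence number is the lowest; otherwise you watch the znode just below yours. A toy in-memory simulation of just that ordering rule (no real ZooKeeper involved; names are invented) might look like:

```python
import itertools

class ToyLockDir:
    """In-memory stand-in for a ZooKeeper lock parent znode."""

    def __init__(self):
        self._seq = itertools.count()
        self.children = {}  # znode name -> owning session

    def create_ephemeral_sequential(self, session):
        # Zero-padded sequence numbers sort lexicographically, as in ZK.
        name = f"lock-{next(self._seq):010d}"
        self.children[name] = session
        return name

    def holds_lock(self, name):
        # The lowest sequence number holds the lock.
        return name == min(self.children)

    def watch_target(self, name):
        # Each waiter watches only its predecessor, avoiding a herd effect.
        lower = sorted(n for n in self.children if n < name)
        return lower[-1] if lower else None

    def session_expired(self, session):
        # Ephemerals vanish with their session -- no orphaned locks.
        self.children = {n: s for n, s in self.children.items() if s != session}

d = ToyLockDir()
a = d.create_ephemeral_sequential("sessionA")
b = d.create_ephemeral_sequential("sessionB")
assert d.holds_lock(a) and not d.holds_lock(b)
assert d.watch_target(b) == a
d.session_expired("sessionA")   # lock holder dies
assert d.holds_lock(b)          # lock passes to the next waiter
```

The `session_expired` step is the answer to the orphaned-lock concern: because the lock znode is ephemeral, ZooKeeper deletes it when the owner's session ends, and the next waiter's watch fires.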


Hi Josh,

>Second use case: Distributed locking
This is one of the most common uses of ZooKeeper. There are many implementations - one included with the ZK distro. Also, there is Curator: https://github.com/Netflix/curator

>First use case: Distributing work to a cluster of nodes
This sounds feasible. If you give more details, I and others on this list can help more.



Sure. I basically want to handle race conditions where two commands that operate on the same data are received by my cluster of nodes concurrently. One approach is to lock on the data that is affected by the command (distributed lock). Another approach is to make sure that all of the commands that operate on any set of data are routed to the same node, where they can be processed serially using local synchronization. Consistent hashing is an algorithm that can be used to select a node to handle a message (where the inputs are the key to hash and the number of nodes in the cluster).

There are various implementations for this floating around. I'm just interested to know how this is working for anyone else.
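The routing approach described above can be sketched in a few lines (key names and the node count are invented; a stable hash is used so routing is reproducible). Note that this naive modulo scheme reshuffles almost every key when the node count changes, which is exactly the problem consistent hashing or micro-sharding is meant to soften:

```python
from collections import defaultdict
from zlib import crc32

NUM_NODES = 4

def route(key: str, num_nodes: int = NUM_NODES) -> int:
    # Same key -> same node, so that node can serialize its
    # commands locally instead of taking a distributed lock.
    return crc32(key.encode()) % num_nodes

# Commands touching the same entity are guaranteed to share a node:
queues = defaultdict(list)
for cmd in [("acct-1", "debit"), ("acct-2", "credit"), ("acct-1", "credit")]:
    queues[route(cmd[0])].append(cmd)

assert route("acct-1") == route("acct-1")
```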

Josh

OK - so this is two options for doing the same thing. You use a Leader Election algorithm to make sure that only one node in the cluster is operating on a work unit. Curator has an implementation (it's really just a distributed lock with a slightly different API).

-JZ


Jordan, I don't think that leader election does what Josh wants.

I don't think that consistent hashing is particularly good for that either, because the loss of one node causes the sequential state for lots of entities to move, even among nodes that did not fail.

What I would recommend is a variant of micro-sharding. The key space is divided into many micro-shards. Then nodes that are alive claim the micro-shards using ephemerals and proceed as Josh described. On loss of a node, the shards that node was handling should be claimed by the remaining nodes. When a new node appears or new work appears, it is helpful to direct nodes to effect a hand-off of traffic.
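A toy sketch of the reassignment step (node names and the shard count are invented; in the real pattern each claim would be an ephemeral znode and the reassignment would be driven by watches or by the external master described below). The point being demonstrated is that survivors' assignments are untouched:

```python
def assign(shards, nodes):
    """Spread shard ids across nodes evenly (initial claim)."""
    return {s: nodes[i % len(nodes)] for i, s in enumerate(shards)}

def handle_node_loss(assignment, dead, survivors):
    # Only the dead node's shards move; survivors keep theirs untouched.
    new = dict(assignment)
    orphans = [s for s, n in assignment.items() if n == dead]
    for i, s in enumerate(orphans):
        new[s] = survivors[i % len(survivors)]
    return new

shards = list(range(12))   # 12 micro-shards, e.g. znodes /shards/0 .. /shards/11
first = assign(shards, ["n1", "n2", "n3"])
after = handle_node_loss(first, "n2", ["n1", "n3"])

unchanged = [s for s in shards if first[s] != "n2"]
assert all(first[s] == after[s] for s in unchanged)  # minimal movement
```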

In my experience, the best way to implement shard balancing is with an external master instance, much in the style of HBase or Katta. This external master can be exceedingly simple and only needs to wake up on various events like loss of a node or a change in the set of live shards. It can also wake up at intervals if desired, to backstop the normal notifications or to allow small changes for certain kinds of balancing. Typically, this only requires a few hundred lines of code.

This external master can, of course, be run on multiple nodes, and which master is in current control can be adjudicated with yet another leader election.

You can view this as a package of many leader elections, or as discretized consistent hashing. The distinctions are a bit subtle but are very important. These include:

- there is a clean division of control between the master, which determines who serves what, and the nodes that do the serving

- there is no herd effect because the master drives the assignments

- node loss causes the minimum amount of change of assignments since no assignments to surviving nodes are disturbed. This is a major win.

- balancing is pretty good because there are many shards compared to the number of nodes.

- the balancing strategy is highly pluggable.

This pattern would make a nice addition to Curator, actually. It comes up repeatedly in different contexts.



The third use case is done by Kafka (ZK-based consumer), wherein new consumers getting added to or removed from the group notifies the existing consumers (they release all their work), and they redistribute the work among themselves.

We're thinking along the same lines. Specifically, I was thinking of using a hash ring to minimize disruptions to the key space when nodes come and go. Either that, or micro-sharding would be nice, and I'm curious how this has gone for anyone else using ZooKeeper? I should mention, this is basically an alternative to distributed locks. Both achieve the same thing - protecting against race conditions.



FYI - Curator has a resilient message queue:
https://github.com/Netflix/curator/wiki/Distributed-Queue

Yes, something like that with lock safety would satisfy my third use case.

Some questions: Is the distributed queue effectively located by a single znode? What happens when that node goes down? Will a node going down still clear any distributed locks?

Josh

Curator's queue handles a node going down (when you use setLockPath()). Curator will hold a lock for each message that is being processed. You can see the implementation in the method processWithLockSafety() here:
https://github.com/Netflix/curator/blob/master/curator-recipes/src/main/java/com/netflix/curator/framework/recipes/queue/DistributedQueue.java

>Will a node going down still clear any distributed locks?
Yes.

-JZ

They're stored in ZooKeeper, so both. ZooKeeper backs everything to disk but keeps the entire DB in memory for performance.

-JZ

On 1/5/12 10:54 AM, "Josh Stone" <[EMAIL PROTECTED]> wrote:

>Are the distributed queue and locks written to disk or can they be held in memory?
>
>Josh

The micro-sharding of traffic approach gives you very high throughput (easily high enough, for instance, to handle all of Twitter's traffic).

The con for micro-sharding is that there isn't any reliable delivery baked in, so the sender basically has to wait for an ACK and try again if no delivery happens. If you depend on a single sink for each key-range shard, then you will not be able to send until a new recipient for that shard is designated. This could take session-expiration-time plus epsilon, so you have to be able to handle that much back pressure in the message queue.
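The wait-for-ACK-and-retry loop described above could be sketched as follows (the `send` callback and its True-on-ACK contract are assumptions for illustration, not part of any real API). Because a message may be delivered and then resent when the ACK is lost, this gives at-least-once semantics, so receivers need to deduplicate:

```python
import time

def send_with_retry(send, msg, ack_timeout=2.0, max_tries=5):
    """At-least-once delivery: resend until the sink ACKs or we give up.

    `send(msg)` is assumed to return True on ACK and False (or raise
    OSError) when the sink is silent or unreachable.
    """
    for attempt in range(max_tries):
        try:
            if send(msg):
                return True
        except OSError:
            pass  # sink unreachable, e.g. waiting out session expiration
        # Back off between attempts; this is where back pressure builds up.
        time.sleep(min(ack_timeout * (2 ** attempt), 30))
    return False

# A flaky sink that drops the first two deliveries:
calls = {"n": 0}
def flaky(msg):
    calls["n"] += 1
    return calls["n"] > 2

assert send_with_retry(flaky, "hello", ack_timeout=0.01) is True
assert calls["n"] == 3   # two silent attempts, then an ACK
```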

An alternative would be to designate multiple sinks for each key-range shard. That loses some of the coherency that I think you were after, but if you strictly prioritize, you can convert the cost of node failure from significant back pressure to slightly degraded coherency. For a clean node failure, you would get fast cutover, but with a flapping node you might get some strange effects. If a node started losing messages, you would also get some bad effects there, where the backups would get random bits of traffic rather than all of the traffic.

Ted - are you interested in writing this on top of Curator? If not, I'll give it a whack.

-JZ



I think I have a bit of it written already.

It doesn't use Curator, and I think you could simplify it substantially if you were to use it. Would that help?

