//  While this example runs in a single process, that is just to make
//  it easier to start and stop the example. Each thread has its own
//  context and conceptually acts as a separate process.

//  This is the worker task, using a REQ socket to do load-balancing.
procedure worker_task( args: Pointer );
var
  context: TZMQContext;
  worker: TZMQSocket;
  identity,
  empty,
  request: Utf8String;
begin
  context := TZMQContext.create;
  worker := context.Socket( stReq );
  s_set_id( worker ); //  Set a printable identity
  {$ifdef unix}
  worker.connect( 'ipc://backend.ipc' );
  {$else}
  worker.connect( 'tcp://127.0.0.1:5556' );
  {$endif}

  //  Tell broker we're ready for work
  worker.send( 'READY' );

  while true do
  begin
    //  Read and save all frames until we get an empty frame
    //  In this example there is only 1, but it could be more
    worker.recv( identity );
    worker.recv( empty );
    Assert( empty = '' );

//  This is the main task. It starts the clients and workers, and then
//  routes requests between the two layers. Workers signal READY when
//  they start; after that we treat them as ready when they reply with
//  a response back to a client. The load-balancing data structure is
//  just a queue of next available workers.
var
  context: TZMQContext;
  frontend,
  backend: TZMQSocket;
  i,
  j,
  client_nbr,
  poll_c: Integer;
  tid: Cardinal;
  poller: TZMQPoller;

//  Here is the main loop for the least-recently-used queue. It has two
//  sockets: a frontend for clients and a backend for workers. It polls
//  the backend in all cases, and polls the frontend only when there are
//  one or more workers ready. This is a neat way to use 0MQ's own queues
//  to hold messages we're not ready to process yet. When we get a client
//  request, we pop the next available worker and send the request to it,
//  including the originating client identity. When a worker replies, we
//  re-queue that worker and forward the reply to the original client,
//  using the reply envelope.
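The request-forwarding step described above can be sketched as the fragment below. This is a hypothetical illustration only, not the example's actual code: it assumes a `worker_queue: TStringList` used as a FIFO of ready worker identities, the `frontend`/`backend` sockets declared earlier, and delphizmq-style `recv`/`send` helpers (the multipart flag name `sfSndMore` is an assumption and may differ in your binding).

```pascal
//  Hypothetical sketch: pop the least-recently-used worker and forward
//  one client request to it, wrapped in the worker's identity envelope.
//  worker_queue is an assumed TStringList acting as a FIFO of ready workers.
frontend.recv( client_id );    //  client identity, added by the ROUTER socket
frontend.recv( empty );        //  empty delimiter frame
frontend.recv( request );      //  the actual request payload

worker_id := worker_queue[0];  //  pop the next available worker
worker_queue.Delete( 0 );

//  Address the chosen worker, then replay the client envelope and request
backend.send( worker_id, [sfSndMore] );
backend.send( '',        [sfSndMore] );
backend.send( client_id, [sfSndMore] );
backend.send( '',        [sfSndMore] );
backend.send( request );
```

The key design point is that the ROUTER sockets do the addressing: prefixing the message with `worker_id` routes it to that worker on the backend, and keeping `client_id` in the envelope lets the worker's eventual reply be routed back to the right client on the frontend.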