I have a very similar system, but I "broadcast" messages sent by one
client to all the other clients in the same group; there can be
several thousand clients in a group.
I time the latency from one client to all the other clients by putting
a timestamp (using now()) in the message; all the receiving clients
then calculate the delta between that timestamp and the time at which
they receive the message. This works because all the clients are
running in the same node.
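The scheme above can be sketched roughly as follows. This is a minimal illustration, not code from the original posts: the module and function names are hypothetical, and it uses erlang:monotonic_time/1 where a modern runtime is assumed (the post itself uses now()). It only makes sense when sender and receivers share one node, so the timestamps come from the same clock.

```erlang
-module(bcast_latency).
-export([broadcast/2, client/0]).

%% Sender stamps the message with a monotonic timestamp and sends it
%% to every client pid in the group.
broadcast(Clients, Payload) ->
    T0 = erlang:monotonic_time(microsecond),
    [C ! {msg, T0, Payload} || C <- Clients],
    ok.

%% Each receiving client computes the delta on arrival; valid only
%% because sender and receivers read the same node's clock.
client() ->
    receive
        {msg, T0, _Payload} ->
            erlang:monotonic_time(microsecond) - T0
    end.
```

Each client would then report its delta to a collector process rather than just returning it, but the measurement itself is nothing more than this subtraction.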
This only times the latency between a client sending a message and a
client receiving it. I wonder whether your timing also includes the
process death and the processing of the 'DOWN' message, which could
take a lot longer than sending/receiving the message. I suspect you
are only interested in the latency between the bot sending the message
and the 20k clients getting that message, not the process exit time,
the associated garbage cleanup, etc.
In your case, if the bots can communicate with the bot manager and all
are running on the same node, maybe each bot could send the bot
manager the timestamp at which it received the message, along with the
timestamp at which the message was sent. Once all the reports are in,
the bot manager would collate the timestamps and calculate the average
and min/max latency. This would time the actual message latency, not
the associated setup/teardown of processes.
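The collation step could look something like the sketch below. Again this is illustrative only: the module name, the {latency, Delta} report format, and the idea that each bot sends its delta (rather than raw timestamps) are assumptions, not details from the thread.

```erlang
-module(bot_collect).
-export([collect/1]).

%% Wait for N latency reports of the form {latency, Delta}, then
%% return {Min, Max, Average} over all reported deltas.
collect(N) ->
    collect(N, []).

collect(0, Deltas) ->
    {lists:min(Deltas),
     lists:max(Deltas),
     lists:sum(Deltas) / length(Deltas)};
collect(N, Deltas) ->
    receive
        {latency, D} -> collect(N - 1, [D | Deltas])
    end.
```

With 20k bots the manager would call bot_collect:collect(20000) after the broadcast and get the summary once the last report arrives; a receive timeout would be a sensible addition in practice so a lost report cannot block it forever.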
On Jul 10, 12:09 pm, Joel Reymont <> wrote:
> Any suggestions on how to measure delivery time from server to client?
>> My "bot manager" runs on one node and monitors the 20k bots running on
> others. It starts the timer once it knows that all recepients are
> ready and waits to receive 20k 'DOWN' messages to report total
> broadcast time.
>
> Thanks, Joel
>
> ---
> Mac hacker with a performance bent
> http://www.linkedin.com/in/joelreymont