
Latency-critical networking library

I am looking for a C++ networking library that emphasizes minimum latency at the expense of all else. I like the design of ZeroMQ, but I'm concerned that it doesn't provide an unreliable transport, just TCP and PGM. I don't need reliability so much as minimum time overhead. Preferably, I would like a library that can do cool tricks like FEC codes or packet duplication to reduce the drop rate without waiting for any retransmissions.
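To make the FEC idea concrete, here is a minimal sketch (not from any particular library) of the simplest such trick: sending one XOR parity packet per group of data packets, which lets the receiver rebuild any single lost packet in the group without waiting for a retransmission. All names here are illustrative.

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>

// Build one XOR parity packet over a group of equal-sized data packets.
// If any single packet in the group is lost, the receiver can rebuild it
// from the survivors plus the parity, with no retransmission round-trip.
std::vector<uint8_t> make_parity(const std::vector<std::vector<uint8_t>>& group) {
    std::vector<uint8_t> parity(group.front().size(), 0);
    for (const auto& pkt : group)
        for (std::size_t i = 0; i < pkt.size(); ++i)
            parity[i] ^= pkt[i];
    return parity;
}

// Recover the single missing packet: XOR the parity with every packet
// that did arrive; what remains is the lost packet's payload.
std::vector<uint8_t> recover(const std::vector<std::vector<uint8_t>>& received,
                             const std::vector<uint8_t>& parity) {
    std::vector<uint8_t> missing = parity;
    for (const auto& pkt : received)
        for (std::size_t i = 0; i < pkt.size(); ++i)
            missing[i] ^= pkt[i];
    return missing;
}
```

The cost is one extra packet per group of bandwidth, in exchange for surviving one drop per group with zero added latency; real FEC schemes (Reed-Solomon, fountain codes) generalize this to multiple losses.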

It's possible the TCP overhead won't be a problem after the initial connection delay. I don't have the ability to test-and-measure just yet to find out.

Re: Latency-critical networking library

There is no crystal ball here.

You can't have both. If low latency is your key requirement (i.e. continuous streaming), then you have to account for packet loss, packets arriving out of order, and packets being split or merged during transmission.

For every reliability issue you want to solve (and depending on how strong a guarantee you want), you will sacrifice latency and throughput.

So it all boils down to knowing how much latency you can afford and how strong the guarantees you need are.

If you're lucky, there's a big overlap between the two and a plethora of methods/protocols to choose from.
If you're unlucky, your requirements don't allow for anything, and you'll have to make a painful decision: either accept higher latency than you wanted, or weaker guarantees than you wanted.

The alternative, of course, is setting requirements on the network hardware, but that will only ever work when you or the customer controls everything between both ends. If the internet is involved, your guarantees are zero.

Re: Latency-critical networking library

If you want to physically reduce packet loss, that's a hardware requirement.

If you want to account for the occurrence of packet loss, then it's all about how your protocol handles it: you'll either need a retransmission mechanism (which lowers throughput), or your code simply needs to cope with "holes" in the data. So it boils down to: "do you want to guarantee complete transmission" or "do you want to guarantee throughput".
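The "cope with holes" option can be sketched as receiver-side bookkeeping: track which sequence numbers arrived and surface the gaps to the application, which then decides to proceed without the missing data rather than stall for a retransmission. This is an illustrative sketch, not the API of any real library.

```cpp
#include <cstdint>
#include <set>
#include <vector>

// Illustrative receiver-side bookkeeping for an unreliable transport:
// record arriving sequence numbers and report which packets in a window
// never showed up, so the application can proceed with "holes" instead
// of waiting for retransmission.
class GapTracker {
public:
    void on_packet(uint32_t seq) { seen_.insert(seq); }

    // Sequence numbers in [first, last] that were never received.
    std::vector<uint32_t> holes(uint32_t first, uint32_t last) const {
        std::vector<uint32_t> missing;
        for (uint32_t s = first; s <= last; ++s)
            if (seen_.count(s) == 0)
                missing.push_back(s);
        return missing;
    }

private:
    std::set<uint32_t> seen_;
};
```

In a latency-critical design, the application would typically conceal or interpolate over the reported holes (as audio/video codecs do) rather than request the data again.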