16. Kleinrock, L. 1979. Power and deterministic rules of thumb for probabilistic problems in computer communications. In Proceedings of the International Conference on Communications, 43.1.1-43.1.10.

Neal Cardwell, Yuchung Cheng, C. Stephen Gunn, Soheil Hassas Yeganeh, and Van Jacobson are members of Google's make-tcp-fast project, whose goal is to evolve Internet transport via fundamental research and open source software. Project contributions include TFO (TCP Fast Open), TLP (Tail Loss Probe), RACK loss recovery, fq/pacing, and a large fraction of the git commits to the Linux kernel TCP code for the past five years. They can be contacted at https://googlegroups.com/d/forum/bbr-dev.

Copyright held by owners/authors.

Mobile cellular adaptive bandwidth. Cellular systems adapt per-subscriber bandwidth based partly on a demand estimate that uses the queue of packets destined for the subscriber. Early versions of BBR were tuned to create very small queues, resulting in connections getting stuck at low rates. Raising the peak ProbeBW pacing_gain to create bigger queues resulted in fewer stuck connections, indicating that it is possible to be too nice to some networks. With the current 1.25 × BtlBw peak gain, no degradation is apparent compared with CUBIC on any network.
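
To make the gain cycle concrete, here is a minimal sketch, assuming the eight-phase ProbeBW pacing_gain cycle with a 1.25 peak and a 0.75 drain phase; the bandwidth value and names are illustrative, not the Linux kernel implementation:

    /* Sketch of a ProbeBW-style pacing-gain cycle. The 1.25 peak phase
     * probes for more bandwidth (and builds the slightly larger queue
     * that keeps cellular demand estimators engaged); the 0.75 phase
     * drains any queue the probe created. Values are illustrative. */
    #include <stdio.h>

    static const double pacing_gain_cycle[] = { 1.25, 0.75, 1, 1, 1, 1, 1, 1 };

    int main(void) {
        double btlbw = 1.25e6;  /* assumed BtlBw estimate: 10 Mbps, in bytes/sec */
        for (int phase = 0; phase < 8; phase++)
            printf("phase %d: pacing_rate = %.2f * BtlBw = %.0f bytes/sec\n",
                   phase, pacing_gain_cycle[phase],
                   pacing_gain_cycle[phase] * btlbw);
        return 0;
    }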

Delayed and stretched acks. Cellular, Wi-Fi, and cable broadband networks often delay and aggregate ACKs.1 When inflight is limited to one BDP, this results in throughput-reducing stalls. Raising ProbeBW's cwnd_gain to two allowed BBR to continue sending smoothly at the estimated delivery rate, even when ACKs are delayed by up to one RTT. This largely avoids stalls.
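
The arithmetic behind this fix can be sketched as follows; the bandwidth and RTT figures are hypothetical, but they show why a cwnd_gain of two provides exactly one extra BDP of headroom, enough to keep pacing through a full RTT of ACK silence:

    /* Why cwnd_gain = 2 tolerates ACKs delayed by up to one RTT: the
     * inflight cap of 2 * BDP leaves one BDP of headroom, which is
     * exactly what pacing at BtlBw sends during one RTT with no ACKs.
     * Names and values are illustrative. */
    #include <stdio.h>

    int main(void) {
        double btlbw        = 12.5e6;  /* assumed BtlBw: 100 Mbps, in bytes/sec */
        double rtprop       = 0.040;   /* assumed RTprop: 40 ms */
        double bdp          = btlbw * rtprop;   /* bandwidth-delay product */
        double cwnd_gain    = 2.0;
        double inflight_cap = cwnd_gain * bdp;

        /* Bytes sent while ACKs are silent for one RTT, pacing at BtlBw: */
        double sent_during_ack_gap = btlbw * rtprop;

        printf("BDP = %.0f bytes, inflight cap = %.0f bytes\n", bdp, inflight_cap);
        printf("headroom = %.0f bytes covers the %.0f bytes sent during a one-RTT ACK gap\n",
               inflight_cap - bdp, sent_during_ack_gap);
        return 0;
    }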

Token-bucket policers. BBR's initial YouTube deployment revealed that most of the world's ISPs mangle traffic with token-bucket policers.7 The bucket is typically full at connection startup, so BBR learns the underlying network's BtlBw, but once the bucket empties, all packets sent faster than the (much lower than BtlBw) bucket fill rate are dropped. BBR eventually learns this new delivery rate, but the ProbeBW gain cycle results in continuous moderate losses. To minimize the upstream bandwidth waste and application latency increase from these losses, we added policer detection and an explicit policer model to BBR. We are also actively researching better ways to mitigate the policer damage.
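
A toy token-bucket model, an illustrative sketch rather than BBR's actual detector, reproduces the behavior described above: the initially full bucket passes traffic at the sender's full rate, and once it drains, everything above the fill rate is dropped:

    /* Toy token-bucket policer. All parameters are assumptions chosen
     * to show the two regimes: a full bucket passes everything, then a
     * drained bucket drops whatever exceeds the fill rate. */
    #include <stdio.h>

    int main(void) {
        double bucket     = 2e6;    /* tokens in bytes; full at connection start */
        double bucket_max = 2e6;
        double fill_rate  = 0.5e6;  /* bytes/sec, far below BtlBw */
        double send_rate  = 5e6;    /* sender pacing near its learned BtlBw */
        double dt         = 0.1;    /* simulation step, seconds */

        for (int i = 0; i < 10; i++) {
            bucket += fill_rate * dt;
            if (bucket > bucket_max)
                bucket = bucket_max;
            double arrived = send_rate * dt;
            double passed  = arrived < bucket ? arrived : bucket;
            bucket -= passed;
            printf("t=%.1fs passed=%6.0f dropped=%6.0f bytes\n",
                   (i + 1) * dt, passed, arrived - passed);
        }
        return 0;
    }

Running the sketch, the first several steps pass all traffic; once the bucket drains, steady-state throughput collapses to the fill rate, which is the drop pattern the policer detection is designed to recognize.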

Competition with loss-based congestion control. BBR converges toward a fair share of the bottleneck bandwidth whether competing with other BBR flows or with loss-based congestion control. Even as loss-based congestion control fills the available buffer, ProbeBW still robustly moves the BtlBw estimate toward the flow's fair share, and ProbeRTT finds an RTProp estimate just high enough for tit-for-tat convergence to a fair share. Unmanaged router buffers exceeding several BDPs, however, cause long-lived loss-based competitors to bloat the queue and grab more than their fair share. Mitigating this is another area of active research.

Conclusion

Rethinking congestion control pays big dividends. Rather than using events such as loss or buffer occupancy, which are only weakly correlated with congestion, BBR starts from Kleinrock's formal model of congestion and its associated optimal operating point. A pesky "impossibility" result, that the crucial parameters of delay and bandwidth cannot be determined simultaneously, is sidestepped by observing that they can be estimated sequentially. Recent advances in control and estimation theory are then used to create a simple distributed control loop that verges on the optimum, fully utilizing the network while maintaining a small queue. Google's BBR implementation is available in the open source Linux kernel TCP.
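
The sequential estimation referred to above can be sketched with two windowed filters: a max filter over delivery-rate samples for BtlBw and a min filter over RTT samples for RTprop. The window lengths and sample values below are illustrative only:

    /* Sketch of sequential estimation: BtlBw as a windowed max of
     * measured delivery rates, RTprop as a windowed min of measured
     * RTTs. Window size, names, and samples are assumptions. */
    #include <stdio.h>

    #define WIN 10  /* illustrative window length, in samples */

    static double windowed_max(const double *v, int n) {
        double m = v[0];
        for (int i = 1; i < n; i++)
            if (v[i] > m) m = v[i];
        return m;
    }

    static double windowed_min(const double *v, int n) {
        double m = v[0];
        for (int i = 1; i < n; i++)
            if (v[i] < m) m = v[i];
        return m;
    }

    int main(void) {
        double delivery_rate[WIN] = { 9.1, 9.8, 10.0, 9.7, 9.9,
                                      10.0, 9.5, 9.8, 10.0, 9.6 };  /* Mbps */
        double rtt[WIN]           = { 43, 41, 40, 42, 40,
                                      44, 40, 41, 45, 40 };         /* ms */
        printf("BtlBw estimate:  %.1f Mbps\n", windowed_max(delivery_rate, WIN));
        printf("RTprop estimate: %.0f ms\n", windowed_min(rtt, WIN));
        return 0;
    }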

BBR is deployed on Google's B4 backbone, improving throughput by orders of magnitude compared with CUBIC. It is also being deployed on Google and YouTube Web servers, substantially reducing latency on all five continents tested to date, most dramatically in developing regions. BBR runs purely on the sender and does not require changes to the protocol, receiver, or network, making it incrementally deployable. It depends only on RTT and packet-delivery acknowledgment, so it can be implemented for most Internet transport protocols.

Acknowledgments

Thanks to Len Kleinrock for pointing out the right way to do congestion control, and to Larry Brakmo for pioneering work on Vegas2 and New Vegas congestion control that presaged many elements of BBR, and for advice and guidance during BBR's early development. We also thank Eric Dumazet, Nandita Dukkipati, Jana Iyengar, Ian Swett, M. Fitz Nowlan, David Wetherall, Leonidas Kontothanassis, Amin Vahdat, and the Google BwE and YouTube infrastructure teams.