Google, as is its wont, is always trying to make the World Wide Web go faster. To that end, Google in 2009 unveiled SPDY, a networking protocol that reduces latency and is now being built into HTTP 2.0. SPDY is now supported by Chrome, Firefox, Opera, and the upcoming Internet Explorer 11.

But SPDY isn't enough. Yesterday, Google released a boatload of information about its next protocol, one that could reshape how the Web routes traffic. QUIC—standing for Quick UDP Internet Connections—was created to reduce the number of round trips data makes as it traverses the Internet in order to load stuff into your browser.

Although it is still in its early stages, Google is going to start testing the protocol on a "small percentage" of Chrome users who use the development or canary versions of the browser—the experimental versions that often contain features not stable enough for everyone. QUIC has been built into these test versions of Chrome and into Google's servers. The client and server implementations are open source, just as Chromium is.

Jim Roskind, the Google software engineer who announced the test, apparently goes by the title of "RTT Reduction Ranger," referring to "round trip time." Roskind wrote that round trip time "is ultimately bounded by the speed of light," is not decreasing, and "will remain high on mobile networks for the foreseeable future." QUIC, he writes, "runs a stream multiplexing protocol over a new flavor of Transport Layer Security (TLS) on top of UDP instead of TCP. QUIC combines a carefully selected collection of techniques to reduce the number of round trips we need as we surf the Internet."

An FAQ and an in-depth design document provide more information than most people would want to know about QUIC. Besides running multiplexed connections over UDP, QUIC was "designed to provide security protection equivalent to TLS/SSL, along with reduced connection and transport latency," the FAQ states.

"QUIC will employ bandwidth estimation in each direction into congestion avoidance, and then pace packet transmissions evenly to reduce packet loss," Google says. "It will also use packet-level error correction codes to reduce the need to retransmit lost packet data. QUIC aligns cryptographic block boundaries with packet boundaries so that packet loss impact is further contained."

Google had to design QUIC carefully to avoid it becoming a nice theoretical system with no applicability to the real world. That's why Google is using UDP instead of building a protocol made entirely of new technologies. "Middle boxes on the Internet today will generally block traffic unless it is TCP or UDP traffic," Google said. "Since we couldn’t significantly modify TCP, we had to use UDP. UDP is used today by many game systems, as well as VoIP and streaming video, so its use seems plausible."

Ultimately, Google's goal is not necessarily to replace the Web's current protocols but to bring improvements to how TCP is used with SPDY. SPDY already provides multiplexed connections over SSL, but it runs across TCP, causing some latency issues.

Whereas TCP uses a three-step handshake to negotiate connections between Web users and servers, UDP is handshake-less. UDP sends packets out the door without acknowledgments or retransmission of lost data, improving speed while reducing reliability. QUIC attempts to provide the speed advantages of UDP while making data delivery more reliable.
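
The handshake difference is visible at the socket level. In this minimal Python sketch (the hostname and the UDP port are placeholders), the TCP connect() call performs the full three-way handshake before any application data moves, while the UDP socket's very first packet already carries data:

```python
import socket

# TCP: connect() performs the SYN / SYN-ACK / ACK handshake (a full round trip)
# before the first byte of application data can be sent.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("example.com", 80))                  # handshake happens here
tcp.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")
tcp.close()

# UDP: no handshake at all -- the first datagram out the door already carries data.
# Nothing guarantees it arrives; that is the reliability QUIC layers back on top.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello, send me the page", ("example.com", 9999))   # placeholder port
udp.close()
```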

From Google's QUIC FAQ:

Why can’t you just evolve and improve TCP under SPDY? That is our goal. TCP support is built into the kernel of operating systems. Considering how slowly users around the world upgrade their OS, it is unlikely to see significant adoption of client-side TCP changes in less than 5-15 years. QUIC allows us to test and experiment with new ideas, and to get results sooner. We are hopeful that QUIC features will migrate into TCP and TLS if they prove effective.

A major problem with SPDY over TCP today is that "[a] single lost packet in an underlying TCP connection stalls all of the multiplexed SPDY streams over that connection," Google said. "With UDP, QUIC can support out-of-order delivery, so that a lost packet will typically impact (stall) at most one stream."
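
A toy demultiplexer shows why per-stream sequencing avoids that stall. This is only an illustration of the idea, not QUIC's actual framing: each packet carries a stream ID and a per-stream sequence number, so a hole in one stream buffers only that stream while the others keep delivering.

```python
from collections import defaultdict

class StreamDemux:
    """Toy reassembler: packets are (stream_id, seq, payload).
    A missing packet only stalls its own stream, not the others."""
    def __init__(self):
        self.next_seq = defaultdict(int)     # next expected sequence number per stream
        self.buffered = defaultdict(dict)    # out-of-order packets held back, per stream

    def receive(self, stream_id, seq, payload):
        delivered = []
        self.buffered[stream_id][seq] = payload
        # Deliver everything that is now contiguous for this stream only.
        while self.next_seq[stream_id] in self.buffered[stream_id]:
            delivered.append(self.buffered[stream_id].pop(self.next_seq[stream_id]))
            self.next_seq[stream_id] += 1
        return delivered

demux = StreamDemux()
print(demux.receive("css", 0, b"body{"))    # [b'body{']   delivered immediately
print(demux.receive("img", 1, b"...jpeg"))  # []           img packet 0 is still missing
print(demux.receive("css", 1, b"}"))        # [b'}']       css keeps flowing regardless
print(demux.receive("img", 0, b"jfif"))     # [b'jfif', b'...jpeg']   img catches up
```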

TCP and TLS/SSL also typically "require one or more round trip times (RTTs) during connection establishment," Google said. "We’re hopeful that QUIC can commonly reduce connection costs toward zero RTTs. (i.e., send hello, and then send data request without waiting)."

Google doesn't know just how much faster QUIC will make Web surfing, because in-house tests often differ significantly from real-world network conditions. That's why testing with actual Web users is crucial. The question of how much QUIC is able to reduce latency in the real World Wide Web is what "we are investigating at the moment, and why we are experimenting with various features and techniques in Chromium," Google said. "It is too early to share any preliminary results—stay tuned."

Promoted Comments

Yes, TCP adds some overhead due to the handshake and the "statefulness". However, it's also what makes it a reliable protocol, where you need to guarantee that a payload is delivered. If you lose 1500 bytes on an image, the quality of the image will be degraded. Or some words or layout in an HTML article could be missing or incomplete. This is why TCP implements retransmission and acknowledgement of received data.

That's why UDP is used only when you can afford to lose data - usually, real-time applications such as Voice over IP or video streaming. If you're on a VoIP call and you can't make out a word, you can ask the peer to repeat, or your brain can fill the void. If you're watching a live event, you can live with a part of the screen being pixelated until the next full refresh of the video (I-frame) is received.

I haven't read their white paper yet, but I assume they built a retransmission feature into QUIC to compensate for this. The Internet is much more reliable now than when TCP was created, with less packet loss, so maybe we don't need layer 4 (the transport layer) to handle reliability of data transport anymore?

You are correct, their white paper does discuss their retransmission feature:

Quote:

RETRANSMISSION RECOVERY FROM PACKET LOSS

In cases where packet losses have exceeded the error-recovery limits of the protocol, a request for retransmission may be implicitly or explicitly produced. The techniques for instigating retransmission will be modeled after the TCP protocol. In keeping with that system, acknowledgements of data that is received will be periodically sent, and timeouts may be used to spontaneously instigate a retransmission when no acknowledgement is received. Acknowledgments will also serve a key role in delivering congestion related bandwidth information, analogously to TCP’s ACK’s impact on its congestion windows.
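
The mechanism described there is essentially TCP-style acknowledge-and-retransmit rebuilt on top of UDP. A bare-bones sketch of that loop, using a made-up 4-byte sequence-number header rather than QUIC's real wire format, might look like this:

```python
import socket

# Toy ACK-and-timeout retransmission loop over UDP -- the general mechanism the
# design document describes, not QUIC's actual framing or timer logic.
def send_reliably(sock, addr, seq, payload, timeout=0.3, max_tries=5):
    packet = seq.to_bytes(4, "big") + payload
    sock.settimeout(timeout)
    for _ in range(max_tries):
        sock.sendto(packet, addr)
        try:
            ack, _ = sock.recvfrom(16)
            if int.from_bytes(ack[:4], "big") == seq:   # peer acknowledged this seq
                return True
        except socket.timeout:
            continue                                    # no ACK in time: retransmit
    return False
```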

Projected performance gains? The improvement via use of UDP for everyday traffic has been on the cards for as long as I can recall: some sort of packet ID to help re-sort packets and identify missing ones. It's a bit like a parent calling kids at a beach. They don't wait until one child responds; they count off kids until all are accounted for, or the missing kids are identified, and then the search (or retransmit) is narrowed considerably.

They implement that in QUIC. Basically the idea is to implement something like SCTP, which allows multiple streams over a single connection that are each in-order and reliable but don't block each other. If you do multiplexing over a TCP connection (like SSH), then a lost packet stalls every stream because the OS won't give you the data out of order. In addition, they are implementing some 'opportunistic' behavior where they don't always wait for replies to send more data. TCP already does this during transmission (that is what the TCP window is), but Google wants to extend that to session startup. Probably they are going to get smarter about handling packet loss as well. TCP always interprets packet loss as congestion and backs off the transmission speed. In wireless networks, packet loss can also be due to poor signal quality. A fixed low level of packet loss will dramatically slow down a TCP session.

People are already working on improving TCP, or adding generic new protocols like SCTP that could be used by all applications. Unfortunately, adoption is slow, and home gateways and firewalls often block everything except TCP and UDP, and will often fail to handle unknown TCP extensions in a standards-compliant fashion. This means that people adopting new protocols often lose connectivity to places with defective firewalls, which really hurts adoption.

By building onto UDP, they can at least ensure that all but the most aggressive existing firewalls will support it, although then they have to deal with the annoying pseudo-stateful connection tracking on NAT gateways...

Sounds like it's essentially a pared-down implementation of a NORM proxy, with some extra stuff thrown in to do TLS, etc. While NORM does have "multicast" in its name, it works just as well for unicast.

My problem with this protocol has to do with encryption. Any energy loss from a system, in this case the interception of packets between sender and receiver, increases the likelihood of key recovery over time. Something to think about, although that isn't an attack you can use retroactively.

Otherwise, nice. I have yet to see packet loss over UDP here, so this should do nicely when it hits mainstream.

The motivation is that the IETF moves slowly. Modifying TCP would take years and years. For example, the mobile world has Mobile IP to deal with weaknesses of TCP/IP.

I'm actually going to read the documents... Looks like they're taking a few pages out of layer 1 implementations, such as error correction codes (forward error correction) or hybrid automatic repeat request (HARQ).

They seem to address this pretty well in their document. QUIC is more or less equivalent to SCTP-over-DTLS, except that they combine the two protocols into one to avoid inefficiencies from the two layers being separate, plus they throw in forward error correction, and they also feel that neither SCTP nor DTLS was designed to minimize the number of round trips to establish a connection, and they say QUIC can do better.

That's why Google is using UDP instead of building a protocol made entirely of new technologies. "Middle boxes on the Internet today will generally block traffic unless it is TCP or UDP traffic," Google said. "Since we couldn’t significantly modify TCP, we had to use UDP. UDP is used today by many game systems, as well as VoIP and streaming video, so its use seems plausible."

The deviation from the original end-to-end principle -- where end nodes carry all packet/session state, parameters, retransmission, error correction; and the routers in the middle mostly just route -- has been impacting our ability to keep innovating the basic building blocks of internets. So many devices molest TCP today that some network users resort to moving big data with UDP.

SCTP has been around for quite a while, and I've yet to see it in the wild, or see any applications specifically support or require it. I doubt most proxies in use support SPDY.

All the applications that can use UDP for data transmissions of course do include internal mechanisms for packet loss and retransmission. Very few of these don't include some mechanism. You may be thinking of items like syslog or SNMP traps, which are only one-way transmissions and do not include any error correcting or retransmission mechanism. Take a look at one vendor's solution. I have used this vendor's solution in the past for some high-speed transfer requirements that just could not be met with standard TCP applications. The industry really needs to come together and get something like this built into the standard OS. - http://asperasoft.com/resources/benchmarks/#vsftp-630

Yes, TCP adds some overhead due to the handshake and the "statefulness". However, it's also what makes it a reliable protocol, where you need to guarantee that a payload is delivered.

It's also what makes UDP a much better DDOS medium. I can't wait to send a spoofed GET /giganticfile request with my buddy's IP address! Even if it only sends 50 packets before timing out, 50:1 ratio ain't bad.

Quote:

RETRANSMISSION RECOVERY FROM PACKET LOSS

In cases where packet losses have exceeded the error-recovery limits of the protocol, a request for retransmission may be implicitly or explicitly produced. The techniques for instigating retransmission will be modeled after the TCP protocol. In keeping with that system, acknowledgements of data that is received will be periodically sent, and timeouts may be used to spontaneously instigate a retransmission when no acknowledgement is received. Acknowledgments will also serve a key role in delivering congestion related bandwidth information, analogously to TCP’s ACK’s impact on its congestion windows.

So if I'm reading this right, it's something like the difference between:

A: Here's a block of data.
A: Did you get that block?
B: Yes, I got that block.
A: Here's another one.
A: Did you get that one?
B: Yes, I got that one.

and so on, versus

...
A: Here's a block. And 99 more, since I know where to send these already.
B: I got this list of 98 blocks, is that everything you sent?
A: Nope, you missed 2, here they are. Also you neglected to tell me you got these 5 from the previous chunk, so I'm sending those again as well.
...

If that vastly oversimplified interpretation is more or less correct, this seems like an entirely sensible compromise.

TCP isn't quite that simple, though. It decides on a window size, which is how many packets ahead the sending side can be before it has to stop and wait for confirmations. That allows a fair bit of flex in the ACKs from the receiver, and works ok for most transfers. It has some extra rules, in that the sender reduces its transmit speed if it hits the end of the window, on the assumption that this means that it's sending faster than the data can make it to the other end. (It tries increasing it again, so it's not a permanent slowdown.) Again, this works out fairly well in many scenarios, and is crucial to keep network throughput at efficient levels.

There are certainly ways it can be inefficient, but it's honestly quite well thought out.

Also, for the case where certain packets are missing, TCP has SACK ("The last packet I got that was contiguous was 156, but I also got 158 and 159"), which helps a bit.
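
For a concrete picture of what a SACK-style report contains, here is a toy version of the receiver's side (illustrative only, not TCP's actual SACK option encoding). It reports the highest contiguous segment plus the blocks held beyond the gap, so the sender only has to refill the gaps:

```python
def sack_report(received_seqs):
    """Toy SACK: given the set of segment numbers received, report the
    cumulative ACK (highest contiguous) plus the ranges held beyond the gap."""
    cumulative = 0
    while cumulative + 1 in received_seqs:
        cumulative += 1
    blocks, start, prev = [], None, None
    for s in sorted(x for x in received_seqs if x > cumulative + 1):
        if start is None:
            start = prev = s
        elif s == prev + 1:
            prev = s
        else:
            blocks.append((start, prev))
            start = prev = s
    if start is not None:
        blocks.append((start, prev))
    return cumulative, blocks

# "The last packet I got that was contiguous was 156, but I also got 158 and 159."
print(sack_report(set(range(1, 157)) | {158, 159}))   # (156, [(158, 159)])
```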

Yes, TCP adds some overhead due to the handshake and the "statefulness". However, it's also what makes it a reliable protocol, where you need to guarantee that a payload is delivered. If you lose 1500 bytes on an image, the quality of the image will be degraded. Or some words or layout in an HTML article could be missing or incomplete. This is why TCP implements retransmission and acknowledgement of received data.

No, the problem is that TCP assumes that all packet loss is due to congestion. When routers receive more packets than they can route (e.g. the downlink is much slower than the uplink), they drop packets randomly. TCP uses this as an indicator of congestion and backs down on transmission rate.

But with mobile networks, packets often get lost due to poor link quality rather than congestion. TCP is notoriously bad at handling that, slowing down unnecessarily instead of continuing to go as fast as it could have been. When SPDY encounters packet loss due to poor link quality, everything slows down, unnecessarily. QUIC works around this.

(another way to work around that is to have multiple parallel TCP streams, which is what conventional HTTP uses... but that has other drawbacks)
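
The cost of treating every loss as congestion can be put in rough numbers with the well-known Mathis et al. approximation for loss-limited TCP throughput, rate ≈ (MSS/RTT) × 1.22/√p. The figures below come from that model, not from measurements:

```python
from math import sqrt

def tcp_throughput_bps(mss_bytes=1460, rtt_s=0.1, loss_rate=0.01):
    """Mathis et al. back-of-the-envelope model for loss-limited TCP throughput:
    rate <= (MSS / RTT) * (1.22 / sqrt(p)). A rough model, not a measurement."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / sqrt(loss_rate))

# A wireless path with 100 ms RTT and a fixed 1% loss rate caps a single TCP
# flow at roughly 1.4 Mbps, no matter how fast the radio link really is.
print(round(tcp_throughput_bps(loss_rate=0.01) / 1e6, 2), "Mbps")
# The same path with 0.01% loss: roughly 14 Mbps.
print(round(tcp_throughput_bps(loss_rate=0.0001) / 1e6, 2), "Mbps")
```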

Shouldn't SACK help a bit with that? If sporadic packets are lost but the overall ACK rate is sufficient, and if I remember correctly and congestion avoidance is triggered by window overruns, it should greatly reduce the impact from losing a few ACKs.

Of course, if I'm wrong (kind of likely) and CA is triggered by packet loss instead, then yeah, it won't help at all.

The Internet is much more reliable now than when TCP was created, with less packet loss, so maybe we don't need layer 4 (the transport layer) to handle reliability of data transport anymore?

More reliable, yes, but less packet loss?!?!

Traffic leaves my computer at gigabit speeds, and then 99% of the packets are dropped by my DSL modem, and TCP detects massive packet loss, then my PC sends the data again at megabit speeds.

The internet is more reliable than it used to be but packet loss is a necessary component of how it works. Without packet loss the internet would collapse, because it's the only way to know how fast each individual link is between two points on the internet.

I'm interested to see how this new protocol handles packet loss, and also how they managed to get security without at least one round trip to check that the data is actually being sent to the server the SSL certificate was issued for. Perhaps they are using asymmetric encryption for the request and then symmetric encryption for the response? Can a smartphone CPU do that without being even slower than a 3G connection? I guess it would be fine for a GET request.

My problem with this protocol has to do with encryption. Any energy loss from a system, in this case the interception of packets between sender and receiver, increases the likelihood of key recovery over time. Something to think about, although that isn't an attack you can use retroactively.

That would only be true if we kept using the same key for years. SSL generates a new key for every connection and only uses its private key to encrypt temporary symmetric keys. It's very hard to do cryptanalysis on strong encryption that is encrypting a strong random key.

Otherwise, nice. I have yet to see packet loss over UDP here, so this should do nicely when it hits mainstream.

UDP has low packet loss if you send almost no data. If you try to send a lot of data (more than a few KB) you will see packet loss. Send enough data too quickly and you'll have almost 100% packet loss. The reason we don't use UDP much is it's really hard to measure out how much data you can send before packet loss starts happening (the max safe speed changes from one millisecond to the next, one moment you can send 1MB/s then you can only send 10KB/s, and then a millisecond later you can send 1MB/s again).
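
The standard way transports chase that moving target is an additive-increase/multiplicative-decrease loop like TCP's congestion control: creep the rate up while packets keep arriving, cut it sharply when loss shows up. A toy version of that controller (illustrative numbers, not any particular protocol's actual algorithm):

```python
def aimd_step(rate_kbps, loss_detected, increase_kbps=10, decrease_factor=0.5, floor_kbps=10):
    """One step of an additive-increase/multiplicative-decrease rate controller.
    The constants are toy values; real transports tune these per RTT and per path."""
    if loss_detected:
        return max(floor_kbps, rate_kbps * decrease_factor)
    return rate_kbps + increase_kbps

rate = 100.0
for loss in [False, False, False, True, False, False]:
    rate = aimd_step(rate, loss)
    print(round(rate))   # 110, 120, 130, 65, 75, 85
```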

It's also what makes UDP a much better DDOS medium. I can't wait to send a spoofed GET /giganticfile request with my buddy's IP address! Even if it only sends 50 packets before timing out, 50:1 ratio ain't bad.

This is what I was thinking while reading this article... The TCP handshake at least tries to validate that the request really comes from the requester. But I'm guessing Google knows about that.

To all the naysayers: consider that uTP (also a "tunnel new protocol over UDP to get around bad TCP behaviour"-type of thing) has been around for a while and actually works well. Google is probably even better at inventing new protocols.

While TCP certainly does what it does pretty well, I think part of the problem is that one of Google's aims is to reduce the number of TCP connections, which means they're using fewer TCP connections, each handling multiple HTTP requests. However, if a single connection is receiving packets that correspond to, say, five different images served over HTTP, and one of those images drops a packet, then all five may become stalled while the retransmission occurs.

This is of course a problem that UDP avoids by having no state and no true concept of a connection, but it means you have to build a custom awareness of this so you can track which UDP packets are missing and, more importantly, which of the streams you're multiplexing each one belongs to.

The ultimate aim is promising, as it could give the same TCP advantages, but through what is effectively one connection (or at least, one utilised port since it's UDP). It does mean you lose any benefit from network hardware doing the work for you, but that's not such a big deal in practice I think.

I expect Google would rather the TCP standard were updated to gain some concept of streams, so you could tag TCP packets as belonging to different streams and a retransmission would only stall other packets belonging to that stream, while others continue to be processed. Though I'm not sure I agree that it should be up to TCP to do this; personally I think a new standard would be better, but that would require new hardware throughout the entire Web. So I think ultimately Google's decision to add reliability on top of UDP is the correct solution when you factor in the lack of hardware requirements, as it means routers don't need to care what is being sent, track state, or do anything else complex; they simply shove the packet in the right direction and wait for the next one.

I suppose it does raise the issue of whether the performance would actually be better, though; as you say, the Internet is generally quite reliable now, so packet loss over TCP occurs relatively infrequently and shouldn't necessarily be a problem. That said, a very large amount of traffic now occurs wirelessly, for which packet loss is a very significant factor, so while wired machines probably won't notice a difference in practice, mobile ones could benefit a great deal from a UDP variant that is essentially multiplexed TCP.

Right now, implementing multiple HTTP streams over TCP involves multiple TCP connection setups AND multiple HTTP negotiations (including SSL negotiations). While I haven't read the QUIC protocol definition, if QUIC can share a single HTTP/SSL-like negotiation over a "control" exchange between multiple UDP sessions, then you can save a lot of session setup overhead for things like CSS style sheets, JavaScript files, and so on that get downloaded from a single website. (Of course, if the website designer decides to split style sheets, etc. across multiple servers/websites, then this will reduce QUIC's benefits.)

I haven't read the QUIC proposal - I'm just guessing what I'd do if I could control both the browser and the web server like Google can.
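
The kind of saving being guessed at here is easy to rough out. The toy model below assumes one round trip for the TCP handshake, two for the TLS handshake, and one per request, all sequential; the counts are illustrative assumptions, not measurements of SPDY or QUIC, and real browsers parallelize connections, but the per-connection setup cost is the point:

```python
def page_load_rtts(resources, rtt_ms=100,
                   tcp_handshake_rtts=1, tls_handshake_rtts=2, request_rtts=1):
    """Very rough model: per-connection setup cost vs. one shared, already-negotiated
    session. All round-trip counts here are illustrative assumptions."""
    per_connection = resources * (tcp_handshake_rtts + tls_handshake_rtts + request_rtts)
    shared_session = (tcp_handshake_rtts + tls_handshake_rtts) + resources * request_rtts
    return per_connection * rtt_ms, shared_session * rtt_ms

# 10 style sheets, scripts, and images fetched from one site at 100 ms RTT:
print(page_load_rtts(10))   # (4000, 1300) ms of pure round-trip time in this toy model
```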

As a sys-admin (and former software engineer involved in a UDP-based service), I don't see this helping the run-of-the-mill shared web hosting service, or small web hosting sites. Only massive sites like YouTube, Google, Amazon (for its own business, not its hosting business), Netflix, and Hulu will benefit.

Traffic leaves my computer at gigabit speeds, and then 99% of the packets are dropped by my DSL modem, and TCP detects massive packet loss, then my PC sends the data again at megabit speeds.

The internet is more reliable than it used to be but packet loss is a necessary component of how it works. Without packet loss the internet would collapse, because it's the only way to know how fast each individual link is between two points on the internet.

That's not at all how TCP links work. Packets get queued, not lost and retransmitted, in that situation, and every interface expects downstream interfaces to run at different and potentially slower speeds. TCP speed ramps up, not down, and that's one of the reasons it's so slow to start (slow start), and when packet loss occurs, it often starts all the way over from zero again.

So I go to a page with a list of links to 100 1-hour videos, but I don't know which one I want to see.
I click on the first, it begins, and I don't like it so I hit the "Back" button on my browser.
I click on the second, it begins, and I don't like it so I hit the "Back" button on my browser.
...

After 100 tries, I have watched a total of 100 1-second clips of video, so I want to be charged for a minute-forty worth of video.

But how much data was sent to me? It's not acceptable to have 100 unstoppable streams pouring into my ISP's network. I have to pay PER BYTE DELIVERED, with ruinous overage charges. I want a protocol that CAN BE STOPPED! Doesn't look like this one stops so good.

Please explain how the protocol handles the above situation.

You could always do a little research if it's that important to you. The short answer is that there's a version of TCP Reset; the long answer is in the spec.

Besides, no one would ever design an "unstoppable" protocol, it'd be silly and wasteful. Just like every other UDP protocol, abandoned streams are eventually stopped in QUIC.

Thank you for responding. I am reassured that someone has thought the protocol through beyond just how to make the internet into TV. My ISP can deliver content at 30Mbps which means in ONE MINUTE I can blow through my DAILY data allotment. So "eventual" stopping naturally makes me nervous.

Is the QUIC protocol as fast to stop as the UDP protocol now commonly (pick one) used for video streaming?