
A QUIC look at HTTP/3


March 13, 2020

This article was contributed by Marta Rybczyńska

The Hypertext Transfer Protocol (HTTP) is a core component of the world-wide web. Over its evolution it has added features, including encryption, but time has revealed its limitations and those of the whole protocol stack. At FOSDEM 2020, Daniel Stenberg delivered a talk about a new version of the protocol called HTTP/3. It is under development and includes some big changes under the hood. There is no more TCP, for example; a new transport protocol called QUIC is expected to improve performance and allow new features.

HTTP/1 and HTTP/2

Each HTTP session requires a TCP connection which, in turn, requires a three-way handshake to set up. Once that is done, "we can send data in a reliable data stream", Stenberg explained. TCP transmits data in the clear, so everyone can read what is transferred; the same thus holds true for the non-encrypted HTTP protocol. However, 80% of requests today use the encrypted version, called Hypertext Transfer Protocol Secure (HTTPS), according to statistics from Mozilla (Firefox users) and Google (Chrome users). "The web is getting more and more encrypted", Stenberg explained. HTTPS uses Transport Layer Security (TLS); it adds security on top of the protocol stack, which is (in order): IP, TCP, TLS, and HTTP. The cost of TLS is another handshake that increases the latency. In return, we get privacy, security, and "you know you're talking to the right server".

HTTP/1 required clients to establish one new TCP connection per object, meaning that for each request, the browser needed to create a connection, send the request, read the response, then close it. "TCP is very inefficient in the beginning", Stenberg explained; connections transmit data slowly just after being established, then increase the speed until they discover what the link can support. With only one object to fetch before closing the connection, TCP was never getting up to speed. In addition, a typical web page includes many elements, including JavaScript files, images, stylesheets, and so on. Fetching one object at a time is slow, so browser developers responded by creating multiple connections in parallel.

That created too many connections for the servers to handle, so the number of connections from each client was typically limited. The browser had to choose which of its few allowed connections to use for the next object; that led to the so-called "head-of-line blocking" problem. Think of a supermarket checkout line: you might choose the one that looks shortest, only to be stuck behind a customer with some sort of complicated problem. A big TCP efficiency improvement was added for HTTP/1.1 in 1997: open TCP connections can be reused for other requests. That mitigated the slow-start problem, but not the head-of-line blocking issue, which connection reuse can even make worse.

HTTP/2 from 2015 uses a single connection per host, allowing TCP to get up to speed. However, the head-of-line blocking problem became even more serious at the TCP connection level. In HTTP/1 the problem was that one longer request could block others waiting for the same connection. In HTTP/2, the single connection carries hundreds of streams. In this case, when we lose one packet, "one hundred streams are waiting for that one single packet", Stenberg said. As a reminder, this is because TCP will retransmit the missing packet only when the network stack figures out that it was lost, and the network stack will only pass the data received after the gap when the missing packet arrives.

The "boxes"

Another trend Stenberg explained is protocol ossification (which LWN looked at in 2018). He explained it in the following way: the Internet is full of "boxes" (often called "middleboxes"), such as routers and gateways. They were installed at some point and are running software that handles networking protocols as they existed at that time. The problem is that "they know how the Internet works — at the time they were installed". For those boxes, if a given packet-header field was always zero, it is never going to be anything else. What is worse, those boxes do not get upgraded. They are "stuck in time", he said. This is different from the servers and web browsers, which are updated regularly.

The existence of those boxes limits the development of new versions of the HTTP protocol. An example is TCP port 80, which is assigned to unencrypted HTTP/1.1; currently, no browser speaks HTTP/2 in clear text on that port. "One browser tried to do it until they figured out it doesn't work", he said. The middleboxes modified (or blocked) the traffic based on their understanding of HTTP/1, breaking HTTP/2 traffic.

Another idea to improve the protocol was to send data earlier in the TCP connection, a functionality called TCP fast open (or TFO; LWN covered it in 2012). It allowed browsers to send request data in the packets of the TCP handshake itself. Stenberg explained that it took five to seven years until all kernels supported it. Then the browsers tried it ... and it did not work. Middleboxes would just drop the TFO packets. Currently no browser enables TFO by default. A similar story happened with Brotli compression. The middleboxes only know gzip, so they break the connections using Brotli. Currently this compression is used only over HTTPS. He concluded that the introduction of new transport protocols does not work, because "your router at home will only route TCP and UDP".

The definition of HTTP/3

The difficulties with innovation in HTTP were one of the reasons for the creation of the QUIC working group at the IETF in 2016. QUIC is a name, not an acronym, Stenberg highlighted. A number of companies are interested in this development. The work of the IETF group builds on experiments with Google QUIC, a protocol first deployed in 2013 (LWN looked at it that year). The experiments used HTTP requests over UDP, with widely used clients and web services. This experiment carried a fair amount of HTTP traffic and was taken to the IETF, where the working group started. The IETF version is now significantly different from the Google one: it includes a new transport protocol and a new application layer.

IETF's QUIC fixes the head-of-line blocking issues and allows early data transmission like TCP fast open does. The encryption is built-in; no clear-text version of QUIC exists. HTTP/3 implemented over QUIC includes fewer clear-text messages than HTTP/2.

During the development of QUIC, the group also addressed some other modern challenges. TCP was defined with a connection tied to an IP address, but today's devices have multiple addresses and change them as users move around. With TCP, a new connection must be created when the interface address changes; QUIC instead uses a connection identifier that is independent of IP addresses to solve this problem.

QUIC uses UDP, but in a limited fashion that is more similar to the use of IP than to typical UDP. QUIC's transport layer sits above UDP; it adds connections, reliability, flow control, and security. A big difference from TCP is how QUIC handles streams within a connection. QUIC can carry multiple streams in a single connection, in either direction; they are all independent and may be initiated by either the server or the client. If a packet is lost, the implementation knows which stream is affected; only that stream has to wait for a retransmission. Within each stream, delivery is reliable and in order.

Applications run on top of QUIC. The protocol definition started with HTTP; others, DNS for example, are expected to follow. The definition of other application protocols is expected to start around the time QUIC ships.

HTTP over QUIC is the "same but different", Stenberg said. There will still be the GET command that should be familiar to most readers, but the way the command is transmitted changes. Stenberg explained the history of HTTP: HTTP/1 was in ASCII, HTTP/2 was binary multiplexed, and HTTP/3 is binary over multiplexed QUIC, with TLS 1.3.

HTTP/3 will be faster thanks to the improved handshakes. Early numbers from the experiments showed 70% of connections with no round-trip-time (RTT) delay, because connections to those servers had been established previously, fulfilling the requirements for 0-RTT. The protocol allows early data, so it should improve latency even when a connection does not already exist. The independent streams should also help on low-quality networks. He noted that he could not show numbers yet, as the protocol is not finished; the expectation, however, is that it will be "a little better to much better".

Deployment

HTTPS URLs are everywhere; they cannot be replaced without rewriting the entire web. They imply the use of TCP port 443 with TLS. The migration to HTTP/3 will thus require an initial connection to a legacy server. If a site supports HTTP/3, it will provide an Alt-Svc header indicating the server to connect to. Browsers will check that and make the second connection in the background, or they will just try both protocols at the same time. "There will be a lot of probing", he noted. There will also be support in the domain name system in the form of a new record type called HTTPSSVC that will provide information about the connection parameters. In practice, this will mean querying the DNS first to check whether HTTP/3 can be used.

There will be a few challenges. One difficulty may be that many companies block UDP by default as a way of blocking distributed denial-of-service attacks. With UDP, 3-7% of connections will fail due to blocking somewhere in the network. Clients need to have fallback algorithms and use them transparently. That leads to another problem: there will be no incentive to unblock UDP because the fallback will be in place.

As of today, QUIC stacks are implemented in user space to allow easy testing. "But you need to stick to one library as there are no standard APIs", Stenberg said. There are a dozen implementations right now, in many languages. Interoperability tests happen every month and the current version of the protocol as of March 2020 is draft 27.

HTTP/3 is expected to use two to three times more CPU time than the earlier versions for the same bandwidth, which might delay deployment for a while. One of the reasons is that UDP is not well optimized in Linux, while "we've been polishing TCP for years", he said. The UDP stack was not built for high-volume traffic, and there is no hardware offload for QUIC. In addition, performance suffers from the many transitions between kernel and user space that result from implementing the protocol stack in user space. For now, he does not know whether QUIC will be moved into the kernel; there are some efforts to do so, but that requires a new implementation of TLS in the kernel.

TLS usage in QUIC is different, so existing offloads will not work. The TLS protocol transmits data in "TLS records"; a record may include one or more TLS messages, and a message may span a record boundary. TLS over TCP uses both records and messages; over QUIC, only messages are sent, and records are no longer needed. This changes how the TLS libraries are used and the APIs they must provide.

As the use of the TLS library changes between TCP and QUIC, new APIs are necessary. An OpenSSL pull request adding the QUIC APIs (PR 8797) is still being discussed; this is expected to take a while. Then, when it gets accepted, there will be another delay until it is available in a release and deployed.

Changes to the transport protocol will also force changes in the associated tools; tcpdump is not ready yet, for example. The existing tools that do understand QUIC are Wireshark and the two QUIC-specific tools qlog and qvis. Stenberg is the author of curl, which supports the latest drafts (version 25 at the time of his talk in February 2020), but without the fallback functionality; "fallback is tricky", he said. He summarized that "there is definitely a shortage" of tools and a lot of work to do.

On the browser side, nightly builds of Chrome and Firefox can have HTTP/3 enabled; those who want to run experiments need to turn on some specific options. In Firefox, nightly builds include HTTP/3 support; the user should go to about:config and change network.http.http3.enabled to true. Chrome Canary (not available for Linux) requires specific options when launching: --enable-quic and --quic-version=h3-25 [at the time of the talk, see comments]. On the server side, an NGINX patch exists to use quiche (a library implementing QUIC) for experiments. However, the other big servers, including Apache, IIS, and the official version of NGINX, do not have support yet; there is no support in the Safari browser either.

The date when the protocol will ship is not set yet, as the group prefers to do it right, not fast; he hopes for July 2020. Currently the libraries are in alpha versions; they will ship when the specification is ready. Browsers require updates of the TLS libraries. The deployment is expected to take time. He expects that it will grow more slowly than HTTP/2, but HTTP/3 is there for the long term.

Once the protocol is ready, people are waiting to add new features to QUIC, including multipath (accessing the same site using different network connections), forward error correction, and unreliable and partially reliable streams ("for video people"). Of course, other applications will also appear. QUIC development will move to version 2 after version 1 ships.

Slides [PDF] and a video of the talk are available.


Index entries for this article
GuestArticles: Rybczynska, Marta
Conference: FOSDEM/2020



A QUIC look at HTTP/3

Posted Mar 13, 2020 21:51 UTC (Fri) by djc (subscriber, #56880) [Link]

Last week, the working group did indeed set goals to send the drafts to the IESG (i.e. plan to be done making changes) in July 2020.

(I am one of the implementers of one of the Rust implementations of QUIC, called Quinn.)

A QUIC look at HTTP/3

Posted Mar 14, 2020 5:28 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

I'm disappointed that they punted on multihoming support. This is really needed for SOHO networks that want to utilize more than one ISP.

And with the rate of protocol ossification, if you don't do something in the V1, it'll never get done.

A QUIC look at HTTP/3

Posted Mar 14, 2020 6:19 UTC (Sat) by djc (subscriber, #56880) [Link] (3 responses)

The protocol has been carefully designed to prevent ossification (including several greasing features), so it seems likely that they've bought themselves some time in that regard.

A QUIC look at HTTP/3

Posted Mar 14, 2020 6:23 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

QUIC is carefully designed to be middlebox-proof. But it won't prevent ossification of endpoints. And multihoming is kind of a feature that requires a lot of serious changes, so it's likely to go unimplemented in many stacks.

A QUIC look at HTTP/3

Posted Mar 14, 2020 11:36 UTC (Sat) by draco (subscriber, #1792) [Link] (1 responses)

Even if the specification says how to do multi-homing, that's no guarantee that implementations will support it properly since you can't force an endpoint to have multiple addresses.

A QUIC look at HTTP/3

Posted Mar 14, 2020 17:41 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]

It's fine to have a single-address server endpoint, but they need to be able to talk to multi-address clients. I guess it would be the most used configuration.

A QUIC look at HTTP/3

Posted Mar 14, 2020 12:15 UTC (Sat) by ghedo (subscriber, #69832) [Link]

> On the server side, an NGINX patch exists to use quiche (a library implementing QUIC) for experiments

FTR, that links to the wrong quiche :)
https://github.com/cloudflare/quiche/tree/master/extras/n...

A QUIC look at HTTP/3

Posted Mar 14, 2020 12:31 UTC (Sat) by jkingweb (subscriber, #113039) [Link] (25 responses)

I tried to read through the SVCB/HTTPSSVC specification, but my brain turned to mush a few pages in. Though it mentions that HTTP has "special requirements", it wasn't clear to me why these couldn't be handled by a properly featureful SVCB design.

It seems weird to me, in other words, that they would design a generic feature for a new use case without meeting the needs of the prototype user.

Does anyone smarter than me have some insight?

Links for SVCB and HTTPSSVC

Posted Mar 15, 2020 3:14 UTC (Sun) by CChittleborough (subscriber, #60775) [Link]

To save other readers some searching, the spec for these is an IETF draft issued 2020-03-09. I also found a PDF of the slides from a talk given in November 2019.

Warning: the spec says the RR types "SVCB" and "HTTPSSVC" may well be renamed in the future.

A QUIC look at HTTP/3

Posted Mar 15, 2020 6:08 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link] (23 responses)

> I tried to read through the SVCB/HTTPSSVC specification, but my brain turned to mush a few pages in.
This spec basically defines a way to store a set of arbitrary key-value pairs in DNS, with defined keys for ALPN, ESNI, and other uses. But it's supposed to be general-purpose, so it can be extended at will.

Personally, I'm waiting for https://tools.ietf.org/html/draft-ietf-tls-dnssec-chain-e... - this would solve the certificate mess once and for all.

A QUIC look at HTTP/3

Posted Mar 15, 2020 18:52 UTC (Sun) by josh (subscriber, #17465) [Link] (17 responses)

> Personally, I'm waiting for https://tools.ietf.org/html/draft-ietf-tls-dnssec-chain-e... - this would solve the certificate mess once and for all.

I certainly like the idea of using an existing connection to a site for authoritative information about its subdomains, for instance. But I don't see anything in that to solve the problem of securely establishing the initial connection; DNSSEC is only as secure as the DNS hierarchy, so that just replaces a root-CA problem with a domain registry problem.

A QUIC look at HTTP/3

Posted Mar 15, 2020 20:05 UTC (Sun) by pizza (subscriber, #46) [Link] (14 responses)

> DNSSEC is only as secure as the DNS hierarchy, so that just replaces a root-CA problem with a domain registry problem.

Nonetheless, it represents a (massive!) net improvement. The CA problem is such that any CA can issue a cert for any domain. With DNSSEC you'd have to compromise the registrar for that specific domain, or the TLD registry, or the root certs themselves -- a far smaller attack surface.

A QUIC look at HTTP/3

Posted Mar 15, 2020 23:00 UTC (Sun) by josh (subscriber, #17465) [Link] (13 responses)

Do we have an equivalent to Certificate Transparency for DNSSEC? If not, the CA system still seems more secure at the moment.

A QUIC look at HTTP/3

Posted Mar 15, 2020 23:54 UTC (Sun) by pizza (subscriber, #46) [Link] (12 responses)

The need for something like Certificate Transparency only occurs because any CA can issue a valid certificate for any domain, and any issued certificate is by definition presumed to be valid.

This structural flaw simply does not exist with DNSSEC (and DANE), as certificates can only be published by someone with administrative rights to the underlying domain -- something that is already necessary to obtain certificates by non-fraudulent means.

A QUIC look at HTTP/3

Posted Mar 16, 2020 7:47 UTC (Mon) by josh (subscriber, #17465) [Link] (11 responses)

That's not at all what I mean, no.

I'm asking if anyone has built a scheme similar to Certificate Transparency that requires DNSSEC record changes to be published in an indelible "DNS Transparency" log in order to be considered valid. That way, if a DNS provider *does* attempt to publish a different record for 30 seconds in order to get a certificate issued to them, there's a record of them having done so. Browsers and DNSSEC-enabled resolvers could then slowly transition to requiring such records when resolving domains.

Bonus if the Certificate Transparency logs for domains using DNSSEC start recording the DNS Transparency records received when resolving the domain, as well. Now that we have ACME for automated certificate issuance, imagine if we could require that a certificate also certifies the current DNSSEC record for the domain. Steps like those would make the potential misuse of DNSSEC by a registry or TLD visible, rather than surreptitious.

A QUIC look at HTTP/3

Posted Mar 16, 2020 7:48 UTC (Mon) by josh (subscriber, #17465) [Link]

(A quick search suggests that unfortunately, there are at least some folks in the DNS standardization process who don't understand the value provided by Certificate Transparency, and thus don't understand the value proposition that DNS Transparency would provide.)

A QUIC look at HTTP/3

Posted Mar 16, 2020 11:57 UTC (Mon) by pizza (subscriber, #46) [Link] (7 responses)

I don't understand how that is supposed to work, as it still requires that one trusts their DNS providers.

Remember that DNS (and thus, DNSSEC) record changes are driven by the zone owner, and can be done at any time, on a complete routine basis.

How exactly is a third party to know that any given DNS record for example.com changed without constant polling, and more importantly, determine if any given change is malicious or not?

...As for the DNSSEC signing keys for a given zone, sure, the update has to be pushed to the registrar, and that can be recorded in a write-only log somewhere -- But if a malicious registrar changes the signing key, it can also change everything else, including the log it publishes.

At worst, this proposal is still better than what we have today -- Hostile actors have to do _more_ to compromise a certificate than under the current CA model. Even in the face of malicious resolvers (compromised hotspot or national firewall) it's still a net improvement over the current status quo, which requires explicitly trusting *everyone* to not fraudulently (if not maliciously) issue certificates, in favour of only having to trust the DNS and TLD zone operators.

A QUIC look at HTTP/3

Posted Mar 16, 2020 15:03 UTC (Mon) by josh (subscriber, #17465) [Link] (2 responses)

The DNS Transparency logs would be maintained by a third party, separate from the DNS provider, just as the Certificate Transparency logs are maintained by third parties, separate from the CAs.

The logs would record every record change, or perhaps just every record change of the domain itself if the domain gets to provide its own subdomain records.

And you'd figure out if they're malicious in much the same way you do with CA transparency: if a change occurs that didn't come from the domain owner.

A QUIC look at HTTP/3

Posted Mar 16, 2020 16:23 UTC (Mon) by pizza (subscriber, #46) [Link] (1 responses)

(Let me preface this by saying that I'm not trying to be contrary; I genuinely don't understand the threat model your proposed DNS transparency thingey is meant to protect against)

> The logs would record every record change, or perhaps just every record change of the domain itself if the domain gets to provide its own subdomain records.

So.. you're advocating for a completely parallel system that's independent of the trust model of the existing one?

How is this "DNS transparency" party trustworthy where the existing DNS trust model is not?

Won't this require, in order to be effective, every client resolver to independently push the results of their lookups to this third party? Or download an ever-growing global log? And what's to prevent those from being intercepted?

> And you'd figure out if they're malicious in much the same way you do with CA transparency: if a change occurs that didn't come from the domain owner.

In the CA model, *every CA* is technically capable of issuing a valid certificate for any domain, and publishing the certificate is done by each individual service/server. Under the DNSSEC model, only the domain owner can *publish* a certificate for their domain, meaning there is no longer a distinct "issuance" step. As those third party CA issuers are no longer part of the trust model, there is no longer a need to independently audit them and what they do.

The threat model for DNSSEC is quite different, and requires a different approach -- there are only really three vectors, which I rank in their real-world relevance today:

1) Resolvers
2) Registrars (and TLD/root operators)
3) Zone owners themselves

(CAs already have all three of those threat vectors, and add a fourth)

If the zone owner/operator is compromised, then they can push "legitimate" updates and nobody will ever be the wiser.

Similarly, if the registrar, TLD, or root operator is compromised to the point where they maliciously serve different stuff for short periods of time or to specific users, we're all pretty much screwed as their trustworthiness underpins everything else.

(That said, DNSSEC's trust model only requires you to ultimately trust the root server keys. Those change very infrequently, and can be trivially pinned in client resolvers, not unlike the existing CAcert bundle..)

This leaves only (1) as a serious threat, and one that's actively in use today. DNSSEC was designed to mitigate that. Of course, it's not perfect -- Hostile resolvers just strip out DNSSEC from what they serve to their clients, as there's nothing widely deployed that "requires" DNSSEC as a prerequisite. Meanwhile, anything more elaborate than that (ie serving hostile stuff with valid DNSSEC signatures) would require compromising the client in advance (eg by substituting a different root cert bundle, or disabling use of DNSSEC validation)

(And this "DNS transparency" entity won't do anything to mitigate (1))

A QUIC look at HTTP/3

Posted Mar 17, 2020 8:04 UTC (Tue) by josh (subscriber, #17465) [Link]

See the parallel replies from roc and mjg59, which explain Certificate Transparency in more detail.

A QUIC look at HTTP/3

Posted Mar 16, 2020 20:05 UTC (Mon) by roc (subscriber, #30627) [Link] (3 responses)

Chrome requires that certificates come with signatures proving they have been registered with a trusted Certificate Transparency log. So effectively CT is mandatory if you want general users to connect to your Web site.
https://chromium.googlesource.com/chromium/src/+/master/n...

A QUIC look at HTTP/3

Posted Mar 16, 2020 21:43 UTC (Mon) by pizza (subscriber, #46) [Link] (2 responses)

That's great! I guess it's a good thing my CA automatically does that.

(Which makes me wonder, if someone were to convince my CA to issue a certificate for another domain via fraudulent means, wouldn't that still end up registered/signed with the Transparency log, and thus be considered valid by Chrome?)

Meanwhile, this still doesn't change the fact that this entire class of problem is due to the very loose coupling between independent parts/models -- ie trust, certificate publishing/distribution, and naming. If you tightly couple all three into the same system, one solely controlled by the domain owner, attacks that rely on that loose coupling go away completely.

(If I'm wrong or hopelessly naive here, please show me, preferably using small words, as I'm clearly missing something fundamental..)

Sure, this relies on trusting your domain registrar, the TLD operators, and the root server operators... but CAs already trust DNS to help them determine that you control the domain you say you do. End-users already trust DNS to point at your server before they can even retrieve that Transparency-attested certificate. If we can't trust the core DNS operators, we're already completely, utterly hosed.

A QUIC look at HTTP/3

Posted Mar 17, 2020 0:48 UTC (Tue) by mjg59 (subscriber, #23239) [Link] (1 responses)

> Which makes me wonder, if someone were to convince my CA to issue a certificate for another domain via fraudulent means, wouldn't that still end up registered/signed with the Transparency log, and thus be considered valid by Chrome?

It would, but anyone would be able to spot that that happened. CT doesn't absolutely prevent a malicious CA from doing harm, but it prevents them doing so invisibly.

Logs

Posted Mar 18, 2020 12:44 UTC (Wed) by tialaramex (subscriber, #21167) [Link]

Mmm. This is entirely true if you're Google, and otherwise it's an ambition not a reality because the complete CT system, which closes the loop, remains a work in progress.

If you're Google you're fine because the CT policy actually in place today goes like this: At least one log you use must be Google's. Google built all the early CT logs, and were first to deploy SCT requirements in Chrome, so this isn't a deliberate ploy but it's true anyhow. This gives Google certainty that they know what's up.

But for anybody else the concern then arises, what if Google (and any other logs used) are conspiring against me?

To fix that you need to close the loop. An SCT is only a _promise_ and is not the fact of logging itself. Clients would need to (have somebody on their behalf) check the logs to see that those promises were fulfilled in a timely manner. They also need multi-perspective in order to validate that the log they're shown is the only log that exists. Otherwise log operators can bifurcate the log and show a version with a problem certificate in it to the victim, while showing only logs without that certificate to everybody else.

And this latter work is all unfinished. It's probably fine, but then we said that about a lot of things which once we had CT turned out not to be fine at all. Won't see what you didn't look for, right?

A QUIC look at HTTP/3

Posted Mar 19, 2020 4:11 UTC (Thu) by flussence (subscriber, #85566) [Link] (1 responses)

After a few minutes thinking about it, it doesn't sound *conceptually* impossible. But it's practically impossible because of the logistics and current architecture. DNS is extremely high-volume, combined with a much higher rate of churn than certificates, and geared for small writes with limited side effects.

Asking any one of the dynamic DNS providers on the net to publish transparency records (or even basic DNSSEC ones for that matter) when they're hosting over a million subdomains isn't going to fly any time soon. Sometimes a 30 second TTL is a legitimate use case.

A QUIC look at HTTP/3

Posted Mar 19, 2020 19:11 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

I would guess that you can set up a CT-like system for the DNSSEC public keys for domains. Something like: "*.somedomain.com -> pubkey".

This way if CIA comes a-knocking to the DNS registrar to impersonate "joe.somedomain.com", they would have to publish a new record with CIA's pubkey.

DNSSEC keys don't change very often, so the rate of change would be manageable.

A QUIC look at HTTP/3

Posted Mar 15, 2020 21:33 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

You have to trust your DNS provider anyway, because they can impersonate you and get a regular TLS certificate (perhaps using ACME).

A QUIC look at HTTP/3

Posted Mar 16, 2020 7:54 UTC (Mon) by josh (subscriber, #17465) [Link]

See the other subthread here regarding "DNS Transparency", which could combat that.

A QUIC look at HTTP/3

Posted Mar 15, 2020 19:17 UTC (Sun) by flussence (subscriber, #85566) [Link]

I've been waiting for DANE support in anything for the better part of a decade… the CA/B oligarchy isn't going to let go of its grip on the internet that easily.

A QUIC look at HTTP/3

Posted Mar 17, 2020 3:27 UTC (Tue) by aszs (subscriber, #50252) [Link] (3 responses)

Pretty sure that's not gonna happen... here's a presentation by one of the authors of that draft explaining why: https://indico.dns-oarc.net/event/31/contributions/707/at...

A QUIC look at HTTP/3

Posted Mar 17, 2020 3:34 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

DANE by itself is 0xDEAD, it's gone to meet its maker, it's pining for the fjords, it's pushing up the daisies.

However, there is an interesting alternative proposal. With DANE, clients use DNS (secured with DNSSEC) to validate the server's certificate. The problem is that DNSSEC doesn't really work on the current Internet for a variety of reasons.

The DNSSEC chain extension proposal would simply include the full DNSSEC-validated reply chain in the TLS connection itself. So the client can validate it completely locally, given only the root signing key (which basically becomes the ultimate CA).

A QUIC look at HTTP/3

Posted Mar 17, 2020 3:43 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

Oh, and the chain is communicated through a TLS connection, so there's no chance of meddling middleboxes tampering with it.

A QUIC look at HTTP/3

Posted Mar 17, 2020 12:05 UTC (Tue) by pizza (subscriber, #46) [Link]

> The problem is that DNSSEC doesn't really work in the current Internet for a variety of reasons.

Unfortunately that argument pretty much applies to every proposal that represents more than minute changes to the status quo.

A QUIC look at HTTP/3

Posted Mar 15, 2020 4:32 UTC (Sun) by shorne (guest, #110879) [Link] (2 responses)

Is UDP fragmentation offloading still a thing? I just read something saying it was deprecated.

https://www.kernel.org/doc/Documentation/networking/segme...

We still heavily depend on tcp segmentation offloading to reduce kernel/interrupt overhead. I wonder what will happen to this with QUIC.

A QUIC look at HTTP/3

Posted Mar 15, 2020 6:06 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]

UDP fragmentation doesn't exist. Instead, UDP relies on IP-level fragmentation and reassembly, which is only supposed to be used in IPv4; IPv6 has only end-node fragmentation. But in reality, fragmentation simply doesn't work at all: many firewalls simply discard the fragments.

> We still heavily depend on tcp segmentation offloading to reduce kernel/interrupt overhead. I wonder what will happen to this with QUIC.
This would require a TLS implementation in the kernel, which is not actually that scary with TLS 1.3, because all the complicated parts can easily be farmed out to userspace.

A QUIC look at HTTP/3

Posted Mar 16, 2020 15:04 UTC (Mon) by willemb (subscriber, #73364) [Link]

Have a look at udp segmentation offload with UDP_SEGMENT.

UFO creates IP fragments, and is indeed deprecated.

UDP GSO instead sends regular UDP datagrams on the wire. Possibly many, from a single sendmsg call.

Unlike sendmmsg, the multiple datagrams traverse the protocol stack at once.

http://patchwork.ozlabs.org/patch/905290/
http://vger.kernel.org/lpc_net2018_talks/willemdebruijn-l...

Some devices can already offload this segmentation to hardware. Showing 3x lower cyc/B in one experiment.

https://netdevconf.info/0x12/session.html?udp-segmentatio...

One work in progress item is integrating pacing offload with SO_TXTIME to be able to build larger UDP GSO packets when serving slow clients while avoiding bursts.

A QUIC equivalent to TLS offload is indeed also clearly future work. At least framing will be simpler than TCP.

A QUIC look at HTTP/3

Posted Mar 17, 2020 7:28 UTC (Tue) by marcH (subscriber, #57642) [Link] (1 responses)

> There will be a few challenges. One difficulty may be that many companies block UDP by default as a way of blocking distributed denial-of-service attacks. With UDP, 3-7% of connections will fail due to blocking somewhere in the network.

How's that different from the "middlebox" issues mentioned earlier (fast open, brotli compression,...) and...

> Clients need to have fallback algorithms and use them transparently.

... why couldn't a fallback "solve" these earlier middlebox issues too?

> That leads to another problem: there will be no incentive to unblock UDP because the fallback will be in place.

Except for the incentive of getting better performance, which... is the goal in the first place - and the reason why people tend to upgrade their Wifi routers and other network gear anyway.

A QUIC look at HTTP/3

Posted Mar 26, 2020 18:27 UTC (Thu) by HelloWorld (guest, #56129) [Link]

> How's that different from the "middlebox" issues mentioned earlier (fast open, brotli compression,...) and...
When UDP is blocked, you'll notice and you can fall back to an older HTTP version. A broken middlebox may break things in much more subtle ways that are hard for a client to detect.

A QUIC look at HTTP/3

Posted Mar 19, 2020 7:20 UTC (Thu) by bagder (guest, #38414) [Link] (1 responses)

(I'm Daniel, I did the talk)

I think the article reflects what I said pretty good. A few minor nits:
- as Alessandro points out, the quiche link is wrong.
- the chrome command line flag is wrong (should now be --h3-27 if you use a current Canary).
- the 70% 0-RTT number Google has reported wasn't because the connections were already there, but because connections had been established prior (they had *been* there), thus fulfilling the requirements for doing 0-RTT.

Thanks!

A QUIC look at HTTP/3

Posted Mar 19, 2020 21:40 UTC (Thu) by jake (editor, #205) [Link]

> A few minor nits

Thanks for the info ... I have updated the article ...

jake

A QUIC look at HTTP/3

Posted Mar 19, 2020 20:52 UTC (Thu) by krizhanovsky (guest, #112444) [Link]

The high CPU consumption of QUIC is quite disappointing, so I believe the community could benefit a lot from a Linux kernel QUIC implementation. Besides better performance, having QUIC available out of the box would be beneficial for any Linux user.

Our project requires full in-kernel QUIC and HTTP/3 implementation, https://github.com/tempesta-tech/tempesta/issues/724 .

At the moment we do everything in our own patches for the Linux kernel, but we'd love to go upstream. We have a fully functional in-kernel TLS implementation, and we're going to propose it for upstream at the upcoming Netdev conference https://netdevconf.info/0x14/session.html?talk-performanc... .

We'd love to hear from people interested in in-kernel implementations of TLS and QUIC to discuss the system-call API, the minimum functionality for the first patch sets (handshakes, supported algorithms, and so on), and typical use cases.

Going upstream is quite hard work, so we want to spend our resources on the things that the community really wants.


Copyright © 2020, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds