DEV Community

Steve Coffman

HTTP/2 streaming and you, HTTP/3 streaming and wheee!

HTTP/3 is used by 30.1% of all the websites.

HTTP/2 is used by 34.7% of all the websites.

Some methodologies put these higher, but we can safely say:

A large chunk of current Internet traffic already uses HTTP/3 today.

Yet this change has been a quiet revolution in our industry.

HTTP/2 (and HTTP/3) changes the rules of the game for web development!

This is a summary of a number of other articles, plus my own experience building a Jackbox clone using HTTP/3 server streaming with ConnectRPC (gRPC).

People have long been accustomed to HTTP/1.1 and the best practices that have grown up around it, but a lot of these are worth re-evaluating now that things have changed, and it’s important to understand why.

In HTTP/1.1, downloading multiple smaller files is slower than downloading a single file of the same combined size, because of the lookups, handshakes, and queuing that happen for each request.

In HTTP/1.1, each connection handles a single unary request/response cycle at a time.

Many Best Practices under HTTP/1.1 are actually counterproductive in HTTP/2 or HTTP/3 usage. For example:

  • In HTTP/1.1 it is a best practice to open several HTTP/1.x connections in parallel to speed up page loading (up to a maximum of 6 per domain shard).

  • In HTTP/1.1 it is a best practice to concatenate JavaScript and CSS files, sprite images and inline resources.

  • In HTTP/1.1 it is a best practice to design your APIs to batch multiple queries to minimize round trips.

In contrast, under HTTP/2 (and HTTP/3) it is almost as if gravity were reversed and all friction ceased to exist: these well-worn practices degrade performance instead of enhancing it!


HTTP/2 with TLS 1.3 hugely reduces the number of round trips required to set up an HTTPS connection. For clients that have visited a site before, the data that traverses the connection is cryptographically protected without a full exchange of key-exchange protocol messages before transmission.

HTTP/2 also supports multiplexing, which allows the browser to fire off multiple requests at once on the same connection and receive the responses back in any order.

HTTP/2 supports streams, a bidirectional flow of frames between the client and the server. A single client request can have multiple responses (e.g. subscribe to updates).

HTTP/2 is full duplex across streams, in that one stream may be sending while another is receiving. But HTTP/2 is still very much half duplex within a given stream.

HTTP/2 raises the maximum number of concurrent requests a browser can make to a server from 6–8 to a typical limit of 100.

HTTP/3 is pretty much HTTP/2, but carried over QUIC on UDP (yeah, they also improved and simplified things like stream prioritization). This avoids TCP's head-of-line blocking problem, where a queue of TCP packets is held up by the first packet in the queue.


Websockets and HTTP/2

HTTP/2 obsoletes websockets for all use cases except for pushing binary data from the server to a JS webclient. HTTP/2 fully supports binary bidi (bidirectional) streaming (read on), but browser JS doesn't have an API for consuming binary data frames and AFAIK such an API is not planned. Once the client opens a stream by sending a request, both sides can send DATA frames across a persistent socket at any time - full bidi.

For every other application of bidi streaming, HTTP/2 is as good or better than websockets, because (1) the spec does more work for you, and (2) in many cases it allows fewer TCP connections to be opened to an origin.

That's not much different from websockets: the client has to initiate a websocket upgrade request before the server can send data across, too.

The biggest difference is that, unlike websockets, HTTP/2 defines its own multiplexing semantics: how streams get identifiers and how frames carry the id of the stream they're on. HTTP/2 also defines flow control semantics for prioritizing streams. This is important in most real-world applications of bidi.
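The stream-identifier scheme is simple enough to sketch. Per the HTTP/2 spec (RFC 7540), client-initiated streams take odd identifiers, server-initiated streams take even ones, and identifiers only increase over the life of a connection. The helper names below are illustrative, not from any real library:

```javascript
// Sketch of HTTP/2's stream-identifier rules (RFC 7540 §5.1.1).
// Odd ids: streams opened by the client; even ids: opened by the server.
function isClientInitiated(streamId) {
  return streamId % 2 === 1;
}

// Each new stream a peer opens must use a numerically higher id,
// so the next client stream id is simply the last one plus two.
function nextClientStreamId(lastId) {
  return lastId + 2;
}
```

Every DATA frame carries its stream's id, which is how dozens of in-flight requests can interleave on one connection without getting mixed up.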

If you need to build a real-time chat app, let's say, where you need to broadcast new chat messages to all the clients in the chat room that have open connections, you can (and probably should) do this without websockets.

You would use Server-Sent Events to push messages down and the Fetch API to send requests up. Server-Sent Events (SSE) is a little-known but well-supported API that exposes a message-oriented server-to-client stream. Although it doesn't look like it to the client JavaScript, under the hood your browser (if it supports HTTP/2) will reuse a single TCP connection to multiplex all of those messages. There is no efficiency loss; in fact it's a gain over websockets, because all the other requests on your page also share that same TCP connection. Need multiple streams? Open multiple EventSources! They'll be automatically multiplexed for you.
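A minimal sketch of that pattern, assuming hypothetical `/chat/stream` and `/chat/send` endpoints (EventSource and fetch are standard browser APIs; `formatSseEvent` is an illustrative helper name):

```javascript
// Server side, the SSE wire format is just text: each message is a few
// "field: value" lines terminated by a blank line. A Node handler would
// write this helper's output to a response whose Content-Type is
// text/event-stream.
function formatSseEvent(eventName, payload) {
  return `event: ${eventName}\ndata: ${JSON.stringify(payload)}\n\n`;
}

// Client side: EventSource consumes the stream, fetch sends messages up.
// (Browser-only sketch; over HTTP/2 both share one connection.)
// const source = new EventSource("/chat/stream");
// source.addEventListener("chat", (e) => render(JSON.parse(e.data)));
// fetch("/chat/send", { method: "POST", body: JSON.stringify({ text: "hi" }) });
```

Note that the event name in `formatSseEvent` is what the client's `addEventListener` matches on, so broadcasting a new chat message is just writing one of these strings to every open stream.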

Besides being more resource-efficient and having less initial latency than a websocket handshake, Server-Sent Events have the nice property that they automatically fall back and work over HTTP/1.1 (subject to the six-connections-per-domain limit). But when you have an HTTP/2 connection they work incredibly well.

Here's a good article with a real-world example of accomplishing the reactively-updating SPA.

It is worth mentioning that websockets over HTTP/2 and even websockets over HTTP/3 (UDP QUIC) are also possible, and do notably improve performance.


HTTP/2 GraphQL impact is a mixed bag

With HTTP/2 (and HTTP/3) batching requests like this is now counter-productive to performance:

```
GET /widgets/5382894223,35223231,534232313,5231332435 HTTP/1.1
Accept: application/widgets+json
Host: api.widgets.org
Accept-Encoding: gzip, deflate
User-Agent: BobsWidgetClient/1.5
```

The above approach has a number of downsides. Both the service and clients need to understand a new endpoint, and a new list-based format, bloating the API – especially if there are many different types of resources that need similar treatment. Furthermore, this approach seriously impacts cache efficiency, creating further server load and client-perceived latency.

If you can describe precisely what you want to the server in an efficient format, it can now reply with just what you want.
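With multiplexing, issuing one request per resource is cheap: each request below rides its own HTTP/2 stream on the same connection, and each response stays individually cacheable. A sketch, assuming a per-id `/widgets/:id` endpoint (the injectable `fetchFn` parameter is just there to make the helper testable):

```javascript
// Fetch several widgets in parallel. Over HTTP/2 these are multiplexed
// onto one connection rather than queued behind each other.
async function fetchWidgets(ids, fetchFn = fetch) {
  const responses = await Promise.all(
    ids.map((id) => fetchFn(`/widgets/${id}`)) // one stream per request
  );
  return Promise.all(responses.map((r) => r.json()));
}

// Usage (browser):
// const widgets = await fetchWidgets(["5382894223", "35223231"]);
```

Contrast this with the comma-separated batch endpoint above: here a CDN can cache widget 5382894223 on its own and invalidate it on its own.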

Without too much exaggeration, another way to think of HTTP/2 is as a new query language – one that lets you encode a very complex set of requests into a small amount of data that is heavily optimized for transmission, while still allowing standard HTTP components – especially caches – to work with the individual requests. However, this is still server-centric as it currently requires that the server backend provides endpoints that are exactly as granular as the client needs, rather than allowing the client to independently decide granularity.

Using HTTP/2 fixes most of the problems caused by compound-document and sparse-fieldset-based formats such as GraphQL and JSON:API:

  • Because each pushed resource is sent in a separate HTTP/2 stream (HTTP/2 multiplexing), related resources can be sent in parallel to the client.

  • Consequently, clients and network intermediaries (such as a CDN or proxy) can store each resource in a specific cache, whereas resource embedding only allows caching the full, big JSON document. Cache invalidation then becomes more efficient and can be done at the HTTP level.

Specifically with GraphQL, using the cache mechanisms provided by the HTTP protocol isn't easy (POST requests cannot be cached using HTTP semantics alone).

HTTP/2 and GraphQL Federation

Apollo Federation lets you declaratively combine multiple APIs into a single, federated graph. This federated graph enables clients to interact with your APIs through a single request. That’s no longer desirable from a performance standpoint in HTTP/2 or HTTP/3. In federation, the slowest subquery becomes the bottleneck for the entire federated query. Federation also increases the difficulty of independent deployability and testability. It also makes it hard to provide the client with meaningful partial success and partial errors when subqueries return errors. It may be that there are other specific concerns — such as authentication and rate limiting — where a GraphQL gateway is still more useful than not.

Apollo Federation also does not support GraphQL Subscriptions, and they have no plans to add support.

HTTP method QUERY and GraphQL

The IETF HTTP Working Group has a draft HTTP extension that would add a new HTTP method, QUERY: a safe, idempotent request method that can carry request content (it would apply to HTTP/1.1, HTTP/2, and HTTP/3). The specification defines an HTTP method as safe if it doesn't alter the state of the server; in other words, a method is safe if it leads to a read-only operation.

This extension would allow complicated client-centric queries like GraphQL to avoid using POST and be able to enjoy more standard HTTP caching behavior. When combined with the other performance benefits of HTTP/2 and HTTP/3 it should open new opportunities for dynamic client interactions.
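As a sketch of what that could look like for GraphQL (the paths and media type here are illustrative, and the draft is still subject to change), a read-only query could travel in the request body of a cacheable, safe method instead of a POST:

```
QUERY /graphql HTTP/1.1
Host: api.example.org
Content-Type: application/graphql

{ widget(id: "5382894223") { name price } }
```

Because QUERY is defined as safe and idempotent, intermediaries could treat responses to it much like GET responses for caching purposes.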

As to how active the proposal is, you can see a lot of revisions in the GitHub history of the source document here: https://github.com/httpwg/http-extensions/commits/main/draft-ietf-httpbis-safe-method-w-body.xml

And use the IETF data tracker to see the other activity on mailing lists:

https://datatracker.ietf.org/doc/draft-ietf-httpbis-safe-method-w-body/

Node 22.2.0 supports the HTTP QUERY method: https://github.com/nodejs/node/issues/51562

GraphQL Subscriptions over HTTP/2

Summary of the Problems with GraphQL Subscriptions over WebSockets

  • WebSockets make your GraphQL Subscriptions stateful

  • WebSockets cause the browser to fall back to HTTP/1.1

  • WebSockets cause security problems by exposing Auth Tokens to the client

  • WebSockets allow for bidirectional communication, but GraphQL Subscriptions don't need it

  • WebSockets are not ideal for SSR (server-side-rendering)

GraphQL Subscriptions over WebSockets cause a few problems with performance, security and usability.

GraphQL Subscriptions over HTTP/2 with Server Sent Events (SSE)

  • HTTP/2 SSE (Server-Sent Events) / Fetch is stateless

  • HTTP/2 SSE (Server-Sent Events) / Fetch can easily be secured

  • HTTP/2 SSE (Server-Sent Events) / Fetch doesn't allow the client to send arbitrary data

  • HTTP/2 SSE (Server-Sent Events) / Fetch can easily be used to implement SSR (Server-Side Rendering) for GraphQL Subscriptions
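On the wire, a GraphQL-over-SSE server simply delivers each execution result as an SSE `data:` message terminated by a blank line. The parser below is a hypothetical helper just to show the format; in practice the browser's EventSource (or a dedicated GraphQL-over-SSE client library) does this for you:

```javascript
// Split a chunk of an SSE stream into the GraphQL execution results it
// carries. Each message is a "data: <json>" block ending in a blank line.
function parseSseChunk(chunk) {
  return chunk
    .split("\n\n")
    .filter((block) => block.startsWith("data: "))
    .map((block) => JSON.parse(block.slice("data: ".length)));
}
```

Each parsed object is an ordinary GraphQL response (`{ data, errors }`), so the client-side handling is identical to a unary query result, just arriving repeatedly.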

Sources and further reading
