The Practical Question
Most teams are not deciding whether they like binary frames more than text. They are deciding whether old HTTP/1.1 performance habits still make sense.
The short answer is usually no.
The Core Difference
HTTP/1.1 and HTTP/2 keep the same HTTP semantics: methods, headers, status codes, and bodies still mean the same things. The big change is in how those messages move over the connection.
HTTP/1.1 uses a text-based protocol. Each request and response is a sequence of human-readable lines. Requests on a connection are sequential, one at a time; pipelining exists in the spec but is rarely used because of head-of-line blocking.
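To make "human-readable lines" concrete, here is what an HTTP/1.1 request looks like on the wire, sketched in Python (the host and path are placeholders, not a real endpoint):

```python
# A raw HTTP/1.1 GET request is just CRLF-delimited text.
# "example.com" and "/index.html" are illustrative placeholders.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Accept-Encoding: gzip\r\n"
    "Connection: keep-alive\r\n"
    "\r\n"  # blank line marks the end of the header block
)

# Every part is plain text. A second request on this connection
# cannot start until the response to this one has finished.
print(request.split("\r\n")[0])  # request line: method, target, version
```

That blank line at the end is what separates headers from an optional body; the protocol's framing is entirely textual.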
HTTP/2 uses a binary framing layer. Messages are split into frames and multiplexed over a single connection. Multiple requests and responses can be in-flight simultaneously without blocking each other.
Key Differences
| Feature | HTTP/1.1 | HTTP/2 |
|---|---|---|
| Protocol format | Text | Binary frames |
| Multiplexing | No (one request at a time per connection) | Yes (multiple streams) |
| Header compression | None (plain text) | HPACK |
| Server push | No | Yes (deprecated in practice) |
| Connection reuse | Keep-Alive (limited) | Single connection per origin |
| TLS required | No | No (spec), Yes (browsers) |
| Head-of-line blocking | HTTP + TCP level | TCP level only |
Multiplexing
The biggest practical improvement in HTTP/2 is multiplexing. In HTTP/1.1, browsers open 6–8 parallel TCP connections per origin to work around the one-request-per-connection limit. Each connection has its own TLS handshake overhead.
HTTP/2 opens a single connection per origin and sends all requests as independent streams over it. Streams are interleaved at the frame level — a slow response on stream 3 does not block stream 5.
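The interleaving idea can be sketched without the real wire format. This is an illustrative model only, not HTTP/2's actual frame encoding: each frame carries a stream ID, so a receiver can reassemble responses independently.

```python
# Illustrative sketch of HTTP/2-style frame interleaving (not the
# real binary format): frames are (stream_id, chunk) pairs, so
# multiple responses can share one connection without blocking.

def frames(stream_id, payload, size=4):
    """Split a payload into small (stream_id, chunk) frames."""
    return [(stream_id, payload[i:i + size])
            for i in range(0, len(payload), size)]

# Two responses in flight on one connection; stream IDs are arbitrary.
slow = frames(3, b"a long slow response body")
fast = frames(5, b"quick!")

# Interleave round-robin onto the "wire".
wire = []
while slow or fast:
    if slow:
        wire.append(slow.pop(0))
    if fast:
        wire.append(fast.pop(0))

# The receiver reassembles each stream by its ID.
received = {}
for sid, chunk in wire:
    received[sid] = received.get(sid, b"") + chunk

print(received[5])  # b'quick!' completes before stream 3 finishes
```

The short response on stream 5 is fully delivered while the longer one on stream 3 is still arriving, which is exactly the behavior a single HTTP/1.1 connection cannot give you.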
This makes HTTP/1.1 performance tricks like domain sharding and request bundling unnecessary and counterproductive.
Header Compression
HTTP/1.1 sends full headers on every request. For a page with 50 resources, the browser sends the same Cookie, User-Agent, Accept-Encoding, and Authorization headers 50 times.
HTTP/2's HPACK compression maintains a dynamic table of previously seen headers. A repeated header is sent as a small index into that table instead of the full string. For API-heavy applications with large auth tokens, this can reduce header overhead by 80–90%.
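The core idea can be shown with a simplified model. This is a sketch of HPACK's dynamic-table concept, not the real encoding (HPACK also has a static table, eviction, and Huffman coding), and the token value is a placeholder:

```python
# Simplified sketch of HPACK's dynamic-table idea (not the real
# HPACK encoding): both sides remember headers they have seen, so
# repeats become a small index instead of the full string.

class HeaderTable:
    def __init__(self):
        self.table = []  # dynamic table of (name, value) entries

    def encode(self, name, value):
        entry = (name, value)
        if entry in self.table:
            # Repeat: only a tiny index goes on the wire.
            return ("indexed", self.table.index(entry))
        self.table.append(entry)
        # First occurrence: the full literal must be sent once.
        return ("literal", name, value)

enc = HeaderTable()
# "Bearer abc123" is a placeholder token, not a real credential.
first = enc.encode("authorization", "Bearer abc123")
repeat = enc.encode("authorization", "Bearer abc123")
print(first[0], repeat[0])  # literal indexed
```

With a large bearer token, the difference per request is hundreds of bytes versus a few bits, which is where the 80–90% figure for API traffic comes from.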
Migration Considerations
HTTP/2 is negotiated via ALPN during the TLS handshake — no URL changes, no code changes for most applications. Clients that don’t support HTTP/2 automatically fall back to HTTP/1.1.
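The ALPN offer is a client-side TLS setting. A minimal sketch with Python's standard `ssl` module, showing only the configuration (no network connection is made here):

```python
# ALPN negotiation sketch using Python's stdlib ssl module.
# The client offers both protocols; the server picks one during
# the TLS handshake.
import ssl

ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])  # prefer HTTP/2, fall back

# After wrapping a connected socket, you would check the result:
#   conn = ctx.wrap_socket(sock, server_hostname="example.com")
#   conn.selected_alpn_protocol()  # "h2", "http/1.1", or None
print("offered:", ["h2", "http/1.1"])
```

If the server never heard of `h2`, negotiation simply lands on `http/1.1`, which is why the upgrade is transparent to application code.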
What to stop doing when moving to HTTP/2:
- Domain sharding (defeats connection reuse)
- Aggressive JS/CSS bundling (makes caching less granular)
- Inlining small assets (HTTP/2 handles many small files efficiently)
- Sprite sheets (same reason)
What still matters:
- Caching headers (HTTP/2 doesn’t change caching semantics)
- Compression (gzip/brotli for response bodies)
- CDN usage (HTTP/2 is most effective at the edge)
What This Means In Practice
If your frontend build strategy still reflects old HTTP/1.1 bottlenecks, HTTP/2 is often a reason to simplify rather than add more bundling and sharding tricks. The protocol upgrade does not fix bad caching or oversized assets, but it does remove some of the transport pain those old workarounds were compensating for.
HTTP/3 Preview
HTTP/3 replaces TCP with QUIC (UDP-based), eliminating TCP-level head-of-line blocking. It also has built-in TLS 1.3 and faster connection establishment (0-RTT). HTTP/2 is still the dominant version today, but HTTP/3 adoption is growing rapidly.