Note: While written for SPDY, much of this article also applies to HTTP/2. Regardless, this article is now considered legacy.
What is Google's SPDY protocol and what does it mean for me and my website?
Google’s SPDY is a replacement for HTTP, operating over SSL (HTTPS), and is a candidate for HTTP/2.0. Google proposed this new protocol as part of its efforts to "make the web faster." It addresses the drawbacks of HTTP, which was designed for the early Internet environment, and aims to use today's (and tomorrow's) Internet more effectively.
The SPDY protocol is enabled by default on all HTTPS ports at X4B. Where a browser does not support SPDY, the connection falls back to regular HTTPS, with no functional difference in the operation of your website or service. Your backend web server does not require SPDY support. This article provides a brief introduction to the characteristics and merits of SPDY for beginners, explains the current state of SPDY support, and covers what to consider when adopting it.
Two years ago, a news report announced that Facebook planned to support Google's SPDY protocol widely, and it is already implementing SPDY/v2. Among the various efforts designed and offered by Google to make the web faster, SPDY is a strong candidate to become a new industry standard and to be included in HTTP/2.0.
SPDY is similar to a wrapper around HTTP. It is a packet (frame) oriented binary protocol, usually wrapped in TLS (SSL), and as such a little harder to follow than HTTP. The era of less secure, plaintext systems with unlimited access is coming to an end. In exchange, we get faster-loading applications that are secure by default.
According to Patrick McManus at Mozilla: “The most important goal of SPDY is to transport web content using fewer TCP connections. It does this by multiplexing large numbers of transactions onto one TLS connection.”
SPDY should require no changes to a web application, in the same way that you can usually ignore whether your website or application is being accessed over HTTPS or HTTP. Only the web browser and web server need to know. However, if you’re ever using software such as Telnet or Wireshark to debug your site, you may want to understand SPDY.
SPDY design and features
The usual HTTP GET and POST message formats remain basically the same; however, SPDY specifies a new framing format for encoding and transmitting the data over the wire. This forms a session layer atop of SSL that allows for multiple concurrent, interleaved streams over a single TCP connection as well as many other features.
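The framing format just described can be illustrated with a short parser. The sketch below follows the 8-byte frame header layout from the SPDY/3 draft (control bit, 15-bit version, 16-bit type for control frames; 31-bit stream ID for data frames; both followed by 8-bit flags and a 24-bit length) and is for illustration only, not X4B's implementation:

```python
import struct

def parse_frame_header(data: bytes) -> dict:
    """Parse the 8-byte SPDY frame header (layout per the SPDY/3 draft).

    Control frames: 1 control bit (1), 15-bit version, 16-bit type.
    Data frames:    1 control bit (0), 31-bit stream ID.
    Both end with an 8-bit flags field and a 24-bit payload length.
    """
    first, flags_len = struct.unpack("!II", data[:8])
    is_control = bool(first >> 31)
    flags = flags_len >> 24
    length = flags_len & 0xFFFFFF
    if is_control:
        return {"control": True,
                "version": (first >> 16) & 0x7FFF,
                "type": first & 0xFFFF,
                "flags": flags, "length": length}
    return {"control": False,
            "stream_id": first & 0x7FFFFFFF,
            "flags": flags, "length": length}

# A SPDY/3 SYN_STREAM (control frame, type 1) header with a 10-byte payload:
header = struct.pack("!II", 0x80030001, 10)
print(parse_frame_header(header))
```

Running this on the example header yields a control frame, version 3, type 1 (SYN_STREAM), length 10, showing how the binary framing carries stream metadata independently of the HTTP semantics layered on top.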
SPDY aims to achieve lower latency through basic (always enabled) and advanced (optionally enabled) features, such as server-initiated streams (server push and server hints).
Multiplexed streams
SPDY allows multiple simultaneous streams (i.e. multiple requests) over a single TCP connection. Because requests are interleaved on a single channel, overall efficiency is much higher and the delays caused by opening new TCP connections are lower. Network efficiency also improves, as fewer but denser packets are transmitted. This particularly benefits mobile networks, where latency can be quite high.
Request prioritization
Although unlimited parallel streams solve the serialization problem, they introduce another: bandwidth on the channel is likely limited, so the client would either need to hold back requests or risk clogging the channel with low-priority ones. To overcome this problem, SPDY implements request priorities: the client can request as many resources as it wants from the server and assign a priority to each request. This prevents the network channel from being clogged with non-critical resources (e.g. background images) while a high-priority request (e.g. the page HTML) is being processed.
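This prioritization behaves like a priority queue on the sending side. The following toy model (class and method names are illustrative, not from any SPDY implementation) shows how tagged requests get drained highest-priority first; in SPDY/3, 0 is the highest priority and 7 the lowest:

```python
import heapq

class PriorityFetchQueue:
    """Toy model of SPDY request prioritization.

    The client tags each request with a priority (0 = highest, 7 = lowest,
    as in SPDY/3); the sender drains high-priority requests first.
    """
    def __init__(self):
        self._heap = []
        self._seq = 0  # tie-breaker: keeps FIFO order within one priority

    def request(self, url: str, priority: int) -> None:
        heapq.heappush(self._heap, (priority, self._seq, url))
        self._seq += 1

    def next_to_send(self) -> str:
        return heapq.heappop(self._heap)[2]

q = PriorityFetchQueue()
q.request("/background.png", priority=7)  # non-critical resource
q.request("/index.html", priority=0)      # critical page HTML
print(q.next_to_send())  # /index.html goes first despite arriving later
```

The page HTML is sent before the background image even though it was requested second, which is exactly the property that keeps the shared channel from being clogged by low-priority resources.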
HTTP header compression
SPDY compresses both the request and response HTTP headers, resulting in fewer packets and fewer bytes transmitted compared with plain HTTP which only compresses the content body (if applicable).
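The size win is easy to demonstrate. SPDY/3 actually compresses headers with zlib primed with a protocol-specific preset dictionary shared by client and server; the sketch below uses plain zlib (no preset dictionary) purely to illustrate how well repetitive HTTP headers compress:

```python
import zlib

# Typical repetitive HTTP request headers (values are made-up examples)
headers = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "User-Agent: Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox\r\n"
    "Accept: text/html,application/xhtml+xml\r\n"
    "Accept-Encoding: gzip, deflate\r\n"
    "Cookie: session=abc123\r\n"
    "\r\n"
).encode()

compressed = zlib.compress(headers)
print(len(headers), "bytes ->", len(compressed), "bytes")

# The transformation is lossless: the receiver recovers the headers exactly
assert zlib.decompress(compressed) == headers
```

In a real SPDY session the compression context persists across requests on the same connection, so headers repeated on every request (cookies, user agent, accept lists) shrink even further after the first exchange.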
In addition, SPDY provides an advanced feature, server-initiated streams. Server-initiated streams can be used to deliver content to the client without the client needing to ask for it. This option is configurable by the web developer in two ways:
SPDY experiments with an option for servers to push data to clients via the X-Associated-Content header. This header informs the client that the server is pushing a resource to the client before the client has asked for it. For initial-page downloads (e.g. the first time a user visits a site), this can vastly enhance the user experience.
As an alternative to automatically pushing resources to the client, the server can use the X-Subresources header to suggest that the client request specific resources, in cases where the server knows in advance that those resources will be needed. The server still waits for the client's request before sending the content. Over slow links, this option can reduce the time it takes a client to discover that it needs a resource by hundreds of milliseconds, and it may work better than server push for non-initial page loads.
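The two options above can be summarized as illustrative response headers. The header names come from the SPDY experiments described in this article; the URLs and values below are hypothetical examples, not output from a real server:

```python
# Option 1: server push. The server announces the resource and then pushes
# it on a new stream before the client has asked for it.
server_push = {
    "X-Associated-Content": "https://example.com/style.css",
}

# Option 2: server hint. The server merely advises the client to fetch the
# resource; the content is only sent once the client actually requests it.
server_hint = {
    "X-Subresources": "https://example.com/app.js",
}

print(server_push, server_hint)
```

The trade-off: push saves a full round trip but risks sending bytes the client already has cached, while a hint always costs the round trip but never wastes bandwidth.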
SPDY frequently asked questions
Q: Doesn't HTTP pipelining already solve the latency problem?
A: No. While pipelining does allow multiple requests to be sent back-to-back over a single TCP stream, it is still but a single stream. Any delay in the processing of anything in the stream (either a long request at the head of the line or packet loss) will delay the entire stream. Additionally, pipelining has proven difficult to deploy and, because of this, remains disabled by default in all of the major browsers.
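Head-of-line blocking is easiest to see with numbers. The toy model below (which deliberately ignores bandwidth sharing and other network effects) compares when each response completes if responses must arrive in order, as with pipelining, versus completing independently, as idealized multiplexing allows:

```python
def pipelined_finish_times(durations):
    """Pipelining: responses arrive in order, so each response
    waits for every earlier one (head-of-line blocking)."""
    t, finish = 0, []
    for d in durations:
        t += d
        finish.append(t)
    return finish

def multiplexed_finish_times(durations):
    """Idealized multiplexing: each stream completes independently.
    (Toy model only: real streams share bandwidth.)"""
    return list(durations)

# Times in ms: one slow request at the head of the line, two quick ones
slow_first = [500, 10, 10]
print(pipelined_finish_times(slow_first))    # [500, 510, 520]
print(multiplexed_finish_times(slow_first))  # [500, 10, 10]
```

Under pipelining, the two 10 ms responses are stuck behind the 500 ms one; with multiplexing, they complete almost immediately, which is the serialization problem SPDY's interleaved streams address.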
Q: Is SPDY a replacement for HTTP?
A: No. SPDY replaces some parts of HTTP, but mostly augments it. At the highest level of the application layer, the request-response protocol remains the same. SPDY still uses HTTP methods, headers, and other semantics. But SPDY overrides other parts of the protocol, such as connection management and data transfer formats.
Q: Should SPDY change the transport layer?
A: More research should be done to determine whether an alternate transport could reduce latency. However, replacing the transport is a complicated endeavor, and if we can overcome the inefficiencies of TCP and HTTP at the application layer, that is simpler to deploy. For more information see Google's QUIC protocol; we do not currently support QUIC at X4B (it is too new / experimental).
Q: TCP has been time-tested to avoid congestion and network collapse. Will SPDY break the Internet?
A: No. SPDY runs on top of TCP, and benefits from all of TCP's congestion control algorithms. Furthermore, HTTP has already changed the way congestion control works on the Internet. For example, HTTP clients today open up to 6 concurrent connections to a single server; at the same time, some HTTP servers have increased the initial congestion window to 4 packets. Because TCP independently throttles each connection, servers are effectively sending up to 24 packets (6 connections × 4 packets) in an initial burst, and the multiple connections side-step TCP's slow start. SPDY, by contrast, implements multiple streams over a single connection.
Q: What about SCTP?
A: SCTP is an interesting potential alternative to the TCP transport, which offers multiple streams over a single connection. However, again, it requires changing the transport stack, which will make it very difficult to deploy across existing home routers. Also, SCTP alone isn't the silver bullet; application-layer changes still need to be made to efficiently use the channel between the server and client.
Q: What about other protocols and HTTP/2.0 candidates?
A: Other protocols (such as BEEP) exist and are interesting in their own right. However, Google’s SPDY features widespread browser support and popular adoption by the likes of Facebook, and it is currently the strongest candidate for HTTP/2.0.
More information on SPDY can be found in the Google SPDY Whitepaper. If you have any questions regarding the state of support for SPDY, feel free to open a support ticket.