HTTP/2 Theory and Practice in NGINX Stable, Part 2

This post is adapted from a presentation at nginx.conf 2016 by Nathan Moore of StackPath. This is the second of three parts of the adaptation. In Part 1, Nathan described SPDY and HTTP/2, proxying under HTTP/2, HTTP/2’s key features and requirements, and NPN and ALPN. In this part, Nathan talks about implementing HTTP/2 with NGINX, running benchmarks, and more. Part 3 includes the conclusions and a Q&A.

You can view the complete presentation on YouTube.

Table of Contents

  Part 1
15:05 NGINX Particulars – Practice
16:36 Mild Digression – HTTP/2 in Practice
17:35 NGINX Stable: Build From Scratch
17:56 Benchmarking Warnings!
18:35 Sample Configuration
18:47 HTTP/2 Benchmarking Tools
19:11 HTTP/2 Benchmark
19:58 HTTP/1.1 with TLS 1.2 Benchmark
20:51 HTTP/1.1 with No Encryption Benchmark
21:42 Thoughts on Benchmark Results
  Part 3

15:05 NGINX Particulars – Practice

That [Part 1] was enough theory…what about the practice? When we go through to actually implement HTTP/2, especially inside of NGINX, what are the implementation particulars? What are the gotchas? What’s actually going on?

SPDY support is now gone. This is not intrinsically a bad thing because Google itself deprecated SPDY support as of May. So all the newer builds of Chrome don’t do SPDY anyway. It’s not that big of a deal.

There are other H2 implementations with which SPDY can coexist. Again, it comes down to the NPN and ALPN thing – you can negotiate to the other protocols, but NGINX doesn’t. That’s fine.

With H2 enabled on an IP, it’s enabled for all server blocks assigned to that IP, which from our perspective, with our dedicated IP service, was a big deal. That’s how we control whether we allow or disallow H2 access, assuming you have a dedicated IP. If you use a shared IP with us, and a shared cert too, then too bad, you’re getting H2 whether you like it or not.

So NGINX 1.10, the stable branch, does not currently support H2 Push. It might in the future – bug the NGINX guys – but currently it doesn’t.

Editor – HTTP/2 Server Push was introduced in NGINX Open Source 1.13.9 and NGINX Plus Release 15 a few months after Nathan made this presentation.

And what is most interesting here is the fact that mod_proxy [ngx_http_proxy_module] does not support H2 [to upstream servers]. Valentin made the point in his talk yesterday, [and it] is an excellent, excellent point, that it may not necessarily ever support H2 because that may not necessarily make sense. So just bear that in mind.

16:36 Mild Digression – HTTP/2 in Practice

Real quick digression: how do I know that, if I want H2, I’m actually getting it? How do I test for it? If you want H2 support, you need a newer version of curl (7.34 or newer) and, once again, one built against the newer OpenSSL; otherwise it can’t do ALPN negotiation.
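As a quick sketch of that check (the hostname is a placeholder, not from the talk), a sufficiently new curl will print the ALPN negotiation and the protocol it ended up on when run verbosely:

```shell
# Hypothetical example; example.com stands in for your own H2-enabled host.
# -v prints the TLS handshake details, including the ALPN negotiation,
# and the response line shows whether you got HTTP/2 or fell back to HTTP/1.1.
curl -v --http2 https://example.com/ 2>&1 | grep -i -e alpn -e '^< HTTP'
```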

If you’re using a web browser like Chrome, the developer tools also display which protocol it has negotiated. So it’ll tell you if it’s got H2.

And of course NGINX itself will log this. If you look at the common log format, sure enough – and I have it highlighted in blue – it actually tells you yes, I did get an H2 connection and I wrote it down right here.
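For reference, NGINX’s built-in combined log format embeds the protocol inside the $request variable, which is why it shows up in the access log. The format below is NGINX’s documented default; the sample log line underneath is illustrative, not from the talk:

```nginx
log_format combined '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';

# An H2 request is logged with "HTTP/2.0" in the request line, e.g.:
# 127.0.0.1 - - [08/Sep/2016:10:00:00 +0000] "GET / HTTP/2.0" 200 100 "-" "curl/7.50.1"
```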

Another side note: what happens if you cannot negotiate an H2 connection? Well, you downgrade gracefully to H1. Once again, you’re not allowed to break the Web.

This has to have some method of interoperability. So the failure mode of H2 is that you just downgrade. Sure, it may take a little bit longer, but…it still works. You still get your content. It does not break.

17:35 NGINX Stable: Build from Scratch

I’m going to show some results of doing some very, very basic, simple benchmarking. It’s sort of comedically simple and it’s here to make a big point. I’m just showing you what it is that I’ve done. Enable SSL, enable the http_v2 module. Fine.
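The talk only says “enable SSL, enable the http_v2 module,” so as a sketch, a from-scratch build with those two compiled in looks something like this (source directory and install paths are assumptions):

```shell
# Build sketch: run from an unpacked NGINX source tree.
# The two --with flags are the ones the talk calls out; everything else is default.
./configure --with-http_ssl_module --with-http_v2_module
make
sudo make install
```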

17:56 Benchmarking Warnings!

So this little benchmark is run on this little MacBook Air. If your app has so few users and so little traffic that you can actually host it on a MacBook Air, then this benchmark probably does not apply to you. I did this deliberately.

I want to emphasize that this does not simulate your app. It does not simulate a real‑world production environment. It’s very deliberately a limited environment, and I promise the point that I’m trying to make…is going to pop out and be really clear once we actually start looking at some of the results.

And it’s only going to take another second. I just want to emphasize again: when you go and do this sort of benchmarking yourself, chances are you’re going to get a different result because you’re going to do it on a real server, not on your little MacBook Air.

18:35 Sample Configuration

I’ve configured three little server blocks: one is unencrypted H1, one is encrypted H1 using TLS 1.2, and one is H2, right?
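The talk doesn’t show the actual config, but a minimal sketch of those three server blocks might look like this (ports, document root, and certificate paths are all assumptions; note that in the NGINX 1.10 era, H2 is enabled with the http2 parameter on the listen directive):

```nginx
# Unencrypted HTTP/1.1
server {
    listen 8080;
    root /var/www/bench;
}

# HTTP/1.1 over TLS 1.2
server {
    listen 8443 ssl;
    ssl_protocols TLSv1.2;
    ssl_certificate     /etc/nginx/bench.crt;
    ssl_certificate_key /etc/nginx/bench.key;
    root /var/www/bench;
}

# HTTP/2 over TLS 1.2
server {
    listen 9443 ssl http2;
    ssl_protocols TLSv1.2;
    ssl_certificate     /etc/nginx/bench.crt;
    ssl_certificate_key /etc/nginx/bench.key;
    root /var/www/bench;
}
```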

18:47 HTTP/2 Benchmarking Tools

The benchmark I’m using, real simple: h2load. It comes out of the nghttp2 project. It’s really simple.

All it’s doing is opening up a connection, making a request, in this case for a 100‑byte object – again, stupid‑simple. It asks for it as fast as it possibly can and…measures…how many did I get back within a given period of time? In my case, I’m just asking for 50,000 of the exact same object one after the other.
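The run described above boils down to a single h2load invocation along these lines (the URL, port, and object name are placeholders; -n sets the total request count):

```shell
# Sketch of the benchmark: 50,000 sequential requests for one ~100-byte object
# over a single connection, against the hypothetical H2 server block.
h2load -n 50000 https://localhost:9443/100b.bin
```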

19:11 HTTP/2 Benchmark

What does H2 [performance] look like…[in this] deliberately limited benchmarking environment? This is not a real‑world test. But I want to point out: fetching 50,000 objects finished in 3.67 seconds. I’m pushing 13,600‑odd requests per second.

If you look at the mean time for requests, 68 microseconds, it’s okay, it’s not too shabby. And what I want to point out here is that in this benchmark, it says its space savings [on headers] is 36 percent. Again, you won’t see this in the real world.

This is because I have a 100‑byte object. So I’m able to compress my headers down – my headers are actually a very reasonable percentage of the total object size – and that’s why that number is so high. But it’s here to show that yes, the header compression really works. It really is doing something.

19:58 HTTP/1.1 with TLS 1.2 Benchmark

So, drumroll please. What if I only want to hit it with HTTP/1.1 (again, using TLS 1.2) – exact same setup, exact same object, exact same little testing environment? I finished in 2.97 seconds. Well, that’s weird.

If you look at the mean time, it’s down to about 58 microseconds per request as opposed to 68. And I can push 16.8 thousand requests per second versus the 13.6 thousand I was doing in the last benchmark.

So Valentin’s point was spot on. H2 actually does have additional overhead, especially on the server side, and that can slow you down. It really is there and we can see it.

This is why I wanted this limited benchmark set, because now it’s comedically obvious that there really is a difference there. So you have to be careful. This is not a magic make‑it‑go‑faster button.

20:51 HTTP/1.1 with No Encryption Benchmark

But it gets a little bit more interesting [when we] take a look at what unencrypted does. If I make a plain old H1 unencrypted connection, now I complete in 2.3 seconds. I’m pushing 21.7 thousand requests per second instead of 16.8 or 13.6 thousand.

So not only does H2 have an overhead, SSL also has a major, major overhead that can be very, very substantial. Look: now my mean time for requests is down to 45 microseconds, right?
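The throughput figures across all three runs follow directly from the completion times; a quick check of the arithmetic, using the request count and times quoted above:

```python
# Requests per second = total requests / total completion time, per benchmark run.
REQUESTS = 50_000

times = {
    "HTTP/2 over TLS 1.2": 3.67,    # seconds
    "HTTP/1.1 over TLS 1.2": 2.97,
    "HTTP/1.1 unencrypted": 2.30,
}

for name, seconds in times.items():
    rps = REQUESTS / seconds
    print(f"{name}: {rps:,.0f} requests/sec")
# HTTP/2 over TLS 1.2: 13,624 requests/sec
# HTTP/1.1 over TLS 1.2: 16,835 requests/sec
# HTTP/1.1 unencrypted: 21,739 requests/sec
```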

And you have to be very, very careful about your use case: you need the de facto requirement of H2, which is SSL. You need to know that you can utilize the actual features that H2 provides, that header compression makes sense for you, and that you can handle the interleaved requests.

21:42 Thoughts on Benchmark Results

And you can’t just assume that everything is magically going to go faster just because you enabled it. So you really, really, really have to keep in mind that there is a cost associated with this. You better make sure the benefit that you need is there in order to justify doing it.

Which is why at little MaxCDN [which was acquired by StackPath in July 2016], we’re very careful to make sure that this is a user‑choosable option, and we trust the end user to be smart and clever enough to have done their own benchmarking to know whether or not it actually works for them. It affects both throughput and latency.
