HTTP/2 Optimization

Research conducted by: Spencer Fricke, Christian Krueger, and Emmanuel Contreras Guzman at the University of Wisconsin Madison
Our last published report is also available; note that this site is more up to date than the paper.

The Problem:

With the push to get people on board using HTTP/2 over HTTP/1.1, there is some uncertainty about the optimal new way to package your website. You may be aware that it was a common hack to concatenate all your JavaScript into a single file, shard images across domains, and apply many other techniques of the same nature to get better load times for your website. We set out to find what the new generation of optimization hacks is for HTTP/2.

The Conclusions

Disclaimer: There is no "standard" website! The real world involves countless variations of websites and different client-to-server configurations. Theoretical values are not helpful, and there is nothing consistent about internet speeds. We aimed to start with a baseline, and we plan to improve this research as an ongoing project.

This is the tl;dr of what we found about trying to optimize your HTTP/2 site. Our reasoning for each of these is found below.


How We Got the Results

We obtained all of our data with a 3-part system, which can all be found on GitHub. It was designed so that anyone can easily generate their own data in 3 steps, as we want people to help confirm our results.

Step 1 - Generate Testing Websites
We have created a simple bash script that generates various websites along different parameters. Since we only care about the "transferring" of data using HTTP/2, we find it valid to fill a website with random data, as how the files are sent across the network is independent of what they contain. The script is incredibly simple to use, and more detail can be found in the website generator folder.
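As a rough sketch of the idea (not the actual script; the site size, file count, and output paths here are made-up examples), a generator only needs to write N evenly sized files of random data plus an index.html that references them:

```js
// Sketch: fill a site with random data split evenly across N JavaScript files.
// The real generator is a bash script in the repo; this only illustrates the idea.
const fs = require('fs');
const crypto = require('crypto');

const totalBytes = 2 * 1024 * 1024; // 2 MB site (hypothetical choice)
const fileCount = 16;               // number of evenly sized files
const bytesPerFile = Math.floor(totalBytes / fileCount);

fs.mkdirSync('site', { recursive: true });

let scriptTags = '';
for (let i = 0; i < fileCount; i++) {
  // Random hex wrapped in a comment so each file is still valid JavaScript.
  const junk = crypto.randomBytes(Math.floor(bytesPerFile / 2)).toString('hex');
  fs.writeFileSync(`site/file${i}.js`, `/*${junk}*/\n`);
  scriptTags += `<script src="file${i}.js"></script>\n`;
}

fs.writeFileSync('site/index.html',
  `<!DOCTYPE html><html><body>\n${scriptTags}</body></html>\n`);
```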

Step 2 - Gather HTTP/2 Request Data
After trying various methods, we found that the best way to gather data is to automate collection of the HAR file from the browser. This decision was made due to the current lack of headless-browser support for collecting the data that the network devtools offer. For Chrome we ended up using the Chrome Debugging Protocol through its NodeJS API and grabbed the HAR file to get the data for our requests. Our Headless HAR Parser takes a database and the list of sites you want to run against (one is generated automatically by the website generator). It fetches each site, gets its HAR data, parses it, and enters all the desired data into the database. This is designed to be run as often as you want to gather all the data needed.
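As a minimal sketch of that approach (assuming the chrome-remote-interface package and a Chrome instance started with --remote-debugging-port=9222; the test URL is hypothetical, and the real parser additionally assembles a full HAR and writes to the database), per-request timings can be pulled straight from the Network domain events:

```js
// Sketch: drive Chrome over the Chrome Debugging Protocol and log per-request timings.
const CDP = require('chrome-remote-interface');

(async () => {
  const client = await CDP();              // connects to localhost:9222 by default
  const { Network, Page } = client;
  const requests = {};

  Network.requestWillBeSent(({ requestId, request, timestamp }) => {
    requests[requestId] = { url: request.url, start: timestamp };
  });
  Network.loadingFinished(({ requestId, timestamp, encodedDataLength }) => {
    const r = requests[requestId];
    if (r) console.log(r.url, ((timestamp - r.start) * 1000).toFixed(1), 'ms',
                       encodedDataLength, 'bytes');
  });

  await Network.enable();
  await Page.enable();
  await Page.navigate({ url: 'https://example.com/' }); // hypothetical test site
  await Page.loadEventFired();
  await client.close();
})();
```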

Step 3 - Auto Generate Results and Charts from Data
To make the data super easy to analyze once you have it, we created a result generator that takes data from the database and creates a series of Google Charts. The script creates each chart as its own HTML page, which can be linked to for reference. You can also easily take the inner data section and combine charts as you please.
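A minimal sketch of that output step (the rows, chart title, and output file name here are hypothetical, not our real schema): each chart becomes one self-contained HTML page that loads Google Charts.

```js
// Sketch: turn rows pulled from the database into a standalone Google Charts page.
const fs = require('fs');

const rows = [
  ['Files', 'Load time (ms)'],    // header row for the chart
  [5, 420], [10, 455], [50, 610]  // example data points
];

const html = `<!DOCTYPE html>
<html><head>
<script src="https://www.gstatic.com/charts/loader.js"></script>
<script>
  google.charts.load('current', { packages: ['corechart'] });
  google.charts.setOnLoadCallback(() => {
    const data = google.visualization.arrayToDataTable(${JSON.stringify(rows)});
    const chart = new google.visualization.LineChart(document.getElementById('chart'));
    chart.draw(data, { title: 'HTTP/2 load time vs. file count' });
  });
</script>
</head><body><div id="chart" style="width:900px;height:500px"></div></body></html>`;

fs.writeFileSync('chart-file-count.html', html);
```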


The Actual Data and Reasoning for Claims

Click on any graph to see the live version!

The first thing we set out to find is how HTTP/2 is affected as you increase the number of files. We created websites of 1, 2, and 4 MB filled with evenly sized JavaScript files. We had more sizes, but due to the length of testing we picked three common/appropriate sizes of 1, 2, and 4 MB.
Here is an example of what it looks like when we have it in the Same Size Structure.


After finding our results we decided to compare them to HTTP/1.1 to help us see what was really going on. Here is a graph showing how HTTP/2 scales compared to HTTP/1.1.


This graph shows a few very important points! To see them, let's next take a look at how much of a difference there is between the graph's low point and the 5-file point for HTTP/1.1 and HTTP/2 respectively with a 2 MB website.




These results are how we came to our main observation: not only is HTTP/2 almost always internally optimized, but even attempting to optimize it with the classic "concat hack" gets you almost no benefit. So please don't waste your effort having gulp/grunt concatenate all your files for you; leaving the files separate is far better for caching in the long term.
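For reference, this is the kind of gulp task we mean by the "concat hack" (a minimal sketch; the paths and task name are hypothetical). It was worth it under HTTP/1.1 to cut the request count, but under HTTP/2 it buys you almost nothing:

```js
// The classic HTTP/1.1 "concat hack": bundle every script into one file.
// Under HTTP/2 this gains almost nothing and hurts long-term caching.
const { src, dest } = require('gulp');
const concat = require('gulp-concat');

function bundle() {
  return src('js/*.js')          // hypothetical source layout
    .pipe(concat('bundle.js'))   // one big file instead of many small ones
    .pipe(dest('dist/'));
}

exports.bundle = bundle;
```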


We also wanted to see if it made a difference how you ordered your objects in your HTML. We tested this with only JavaScript objects so that the browser's implementation of prioritization would not affect the results.
Here is a better explanation of what we are referring to by the "structure" of ordering your website.
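Concretely (with hypothetical file names and sizes), "ascending" means the HTML references the smallest file first and the largest last, and "descending" is the reverse:

```js
// Sketch of how a generator could order script tags by size (names/sizes are made up).
const files = [
  { name: 'a.js', bytes: 100 * 1024 },
  { name: 'b.js', bytes: 200 * 1024 },
  { name: 'c.js', bytes: 400 * 1024 },
];

// ascending: smallest file first, largest last
const ascending = [...files].sort((x, y) => x.bytes - y.bytes);
// descending: largest file first, smallest last
const descending = [...files].sort((x, y) => y.bytes - x.bytes);

const tags = list => list.map(f => `<script src="${f.name}"></script>`).join('\n');
console.log('ascending structure:\n' + tags(ascending));
console.log('descending structure:\n' + tags(descending));
```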


After running our test we noticed that there was a difference between putting your objects in ascending and descending order. We also noticed that doubling the size of the website doubled the percent difference between the two (there is no annotated graph of this, but the values can be observed in the charts to validate the claim).


We are not 100% sure why the gap starts out so large, evens out, and then slowly creeps away. We also noticed that the big difference in the beginning appears only on Apache and not Nginx. After much research and reading, we came up with the following explanation of why descending is better optimized than ascending.


The whole idea behind a single streaming TCP connection is to let the server reach the highest utilization possible by filling in, whenever part of one file is not ready to transmit, with data from another file down the stream. What we have concluded is that when you put your files in ascending order, the last file is large and there is nothing left to fill the gaps in transmission. With descending order, the large files are sent right away and there are more packets available to send in case of a gap, giving the best server utilization.


We next wanted to make sure that none of the scaling in the last two observations was determined by your choice of Apache vs. Nginx. We found that the scaling was almost identical and that both Apache and Nginx are up to standard with HTTP/2.
(Note: we simply turned on server push on the server, and the charts also show that without proper configuration on the developer's end, server push will not work automatically.)
(BIGGER note: we also want to retest this section because it was run AWS to AWS, where we saw almost zero delay in transmission time; we feel that using two machines in the same datacenter is a terrible idea for a "real world" test.)


Here is a second version of the same data.


We tried running a set of tests over wireless to compare with wired. What we really found is that the wireless internet on our laptop is pretty inconsistent, and all we can really say is that HTTP/2 scales in terms of file-count ratio across wireless transmission.





Probably the most "interesting" thing we found is that your server will bottleneck from a lack of performance capability before HTTP/2 will. We noticed this when we upgraded our server from a Raspberry Pi 3 to a medium-size AWS EC2 instance.



(no chart for data)

What we are showing is that when you try to send a website of 500 or 1000 files, low-powered servers just won't cut it. This is not so much a warning about how to optimize your HTTP/2 site, but it helps reinforce that HTTP/2 might not be the cause of poor load times if your website happens to be super large.

What is Next

Our BIGGEST issue is both the time and resources to test these optimizations. We have given it "the ol' college try" and seriously wanted to find some satisfying results. But what we really need is the open source community to help confirm our tests. We really tried to design our testing so that anyone with a server and a client machine can "easily" get the testing going without having to spend the many hours we did to get to the point we are even at.

Issues we ran into for people who want to help

This was a simple task that turned into a lot of wasted hours; for anyone who might care, here are things to avoid in future attempts at a similar idea.