Improve Response Times on Your NGINX Web Server with HTTP/2, Compression and Cache Headers

Introduction

SEO is a war of attrition fought over constantly shifting ground, and you take what wins you can get. One constant in this is content delivery time.

  • Loading speed is now a crucial metric in SEO, and the changes below are a big step towards improving rankings.
  • With a little basic configuration, you can vastly improve the performance of your server and reduce delivery times.

Here, I give an introduction to enabling HTTP/2, gzip compression and cache headers.

HTTP/2

HTTP/2 has been around since 2015, so it's a little surprising that it's not in the default configuration given the multitude of benefits over HTTP/1.1. If you're not familiar with the difference between the protocol versions, here's an abstract from the HTTP/2 RFC:

HTTP/1.0 allowed only one request to be outstanding at a time on a given TCP connection. HTTP/1.1 added request pipelining, but this only partially addressed request concurrency and still suffers from head-of-line blocking. Therefore, HTTP/1.0 and HTTP/1.1 clients that need to make many requests use multiple connections to a server in order to achieve concurrency and thereby reduce latency.

Furthermore, HTTP header fields are often repetitive and verbose, causing unnecessary network traffic as well as causing the initial TCP congestion window to quickly fill. This can result in excessive latency when multiple requests are made on a new TCP connection.

HTTP/2 addresses these issues by defining an optimized mapping of HTTP's semantics to an underlying connection. Specifically, it allows interleaving of request and response messages on the same connection and uses an efficient coding for HTTP header fields. It also allows prioritization of requests, letting more important requests complete more quickly, further improving performance.

TL;DR? In a nutshell, the network benefits:

  • V1 - one request per connection; every request opens a new TCP connection, meaning high network overhead
  • V1.1 - persistent connections and pipelining, but requests still have to wait for the one in front to complete (head-of-line blocking), so browsers open multiple connections to compensate
  • V2 - full multiplexing of multiple requests and responses over a single connection, compression of headers, and request prioritization so the most important resources are delivered first.

Now for the good news: enabling it takes adding just a single word to your server config file.

For the following, I'm presuming you're serving HTTPS only. If not, you really should be; certificates are free and easy to set up using certbot if that's what's stopping you. Either way, the instructions below are for HTTPS:

$ sudo nano /etc/nginx/sites-available/servername

In your server definition, modify the listen lines to include the http2 keyword:

listen [::]:443 ssl http2 ipv6only=on; 
listen 443 ssl http2;

Restart NGINX (sudo systemctl restart nginx) and that's all there is to it: you're now running on HTTP/2.

Use a curl command to confirm this (curl -I https://your-website.com/).
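As a quick sketch (your-website.com is a placeholder for your own domain), the status line of the response reports the negotiated protocol:

```shell
# Fetch the response headers only; over HTTPS, curl negotiates
# HTTP/2 automatically when the server offers it
curl -I https://your-website.com/
# Before the change the status line read "HTTP/1.1 200 OK";
# with http2 enabled it should now begin "HTTP/2 200"
```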

Configure Compression

gzip

All major browsers accept content compressed with gzip, and it's the easiest to set up since it's already included with NGINX. If you're adding Brotli, you'll still need gzip as a fallback as older browsers are not compatible with Brotli.

The default config for NGINX is to only compress HTML, leaving your CSS, JavaScript etc. untouched.

You can test this with a simple curl request:

curl -H "Accept-Encoding: gzip" -I https://your-site/your-stylesheet.css

In the response headers, you will likely see Accept-Ranges: bytes. If gzip compression were already enabled for CSS, you would see Content-Encoding: gzip instead.

If it's not enabled, head into the NGINX config file:

$ sudo nano /etc/nginx/nginx.conf

Scroll down to the gzip setting, which will look something like the following on default:

. . .
##
# `gzip` Settings
#
#
gzip on;
gzip_disable "msie6";

# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
. . .

So, while gzip is enabled by default, pretty much everything is turned off.

Start off by:

  1. Enable all the commented-out settings (i.e. delete the '#')
  2. Add a directive to ignore files smaller than 256 bytes (gzip_min_length 256;) - Lighthouse only checks files larger than 1.4 kB, so you can increase this up to 1433 bytes and still pass those tests
  3. Add some extra gzip_types for web fonts, icons, XML feeds, JSON structured data, and SVG images.

Once done, your settings will look like the following:

. . .
##
# `gzip` Settings
#
#
gzip on;
gzip_disable "msie6";

gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_min_length 256;
gzip_types
  application/atom+xml
  application/geo+json
  application/javascript
  application/x-javascript
  application/json
  application/ld+json
  application/manifest+json
  application/rdf+xml
  application/rss+xml
  application/xhtml+xml
  application/xml
  font/eot
  font/otf
  font/ttf
  image/svg+xml
  text/css
  text/javascript
  text/plain
  text/xml;
. . .

Restart NGINX again and confirm the curl command now returns Content-Encoding: gzip for your CSS file (or any of the other file types we've just included in the list above). Try a .jpg image and it will still show as uncompressed (it's not in the list since .jpg is an already compressed format).

That's it for gzip. Enabling this alone was enough to lift a Lighthouse performance score from 45% to 96%.
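To get a feel for the kind of savings involved, you can run gzip locally at the same level as the gzip_comp_level 6 setting above. This is just a sketch on a generated CSS-like file, not a measurement of your real assets:

```shell
# Generate a few kB of repetitive CSS-like text
for i in $(seq 1 100); do
  echo ".rule-$i { margin: 0; padding: 0; }"
done > sample.css

# -6 matches gzip_comp_level 6; -k keeps the original file
gzip -6 -k sample.css

# Compare the sizes; highly repetitive text compresses very well
echo "original: $(wc -c < sample.css) bytes"
echo "gzipped:  $(wc -c < sample.css.gz) bytes"
```

Real-world CSS and JavaScript won't compress quite as dramatically as this artificial file, but large reductions on text assets are typical.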

Brotli

As mentioned before, while the major browsers have supported Brotli for a few years now, acceptance is not universal unless users are on recent builds (the Android browser only came on board in November 2021, which is surprising considering Brotli is a Google algorithm).

The advantage of Brotli is that it offers greater compression and faster pack/unpack times than gzip.

The disadvantage is that it's not part of NGINX yet (as of v1.20.0) and requires downloading the source files for both NGINX and Brotli from GitHub, compiling them and installing the result on your server.

You'd have to weigh up whether the extra performance you'll squeeze out of Brotli over gzip is worth the process and the subsequent maintenance (you'll need to repeat the process every time you update NGINX).

If you're still keen, you can find the process here.

Enabling Browser Caching

Browser caching tells the browser that it can reuse local copies of downloaded files instead of requesting them from the server again and again. To do this, you must introduce new HTTP response headers that tell the browser how to behave. NGINX's headers module can help you accomplish this.

Note: this is not to be confused with content caching which you might do in your content delivery platform.

Open a page on your site and, on the network tab of dev tools, select a resource served by NGINX, such as a .jpg image. In the response headers, you'll see the ETag: etag: "61b7865f-5534". The value is a unique identifier for that particular version of the file.

By default, browsers will ask the server whether the file behind a previously seen ETag has changed, and whether the local copy can be reused.

You can see this by using the etag value you just found in a curl command:

$ curl -I -H 'If-None-Match: "61b7865f-5534"' https://resource-url

HTTP/1.1 304 Not Modified
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 14 Dec 2021 10:24:59 GMT
Last-Modified: Mon, 13 Dec 2021 16:33:06 GMT
Connection: keep-alive
ETag: "61b7865f-5534"

NGINX responds with 304 Not Modified and won't send the file over the network again.

This is good in that it won't re-send the file, but it still involves request-wait-response for every resource to be loaded. For files that may change on the back-end, this is desired, but for other resources that are static, you can bypass this.

By enabling Cache-Control and Expires headers, the browser will cache some files locally without explicitly asking the server if it's fine to do so.

Fortunately, the header module is a core part of NGINX and doesn't need installing.

To add it to your site, open your site's config file:

$ sudo nano /etc/nginx/sites-available/your-server

Define your policy at the top of the file and add an expires directive to your server definition:

# Expires map
map $sent_http_content_type $expires {
    default                    off;
    text/html                  epoch;
    text/css                   max;
    application/javascript     max;
    application/octet-stream   max;
    ~image/                    max;
    ~font/                     max;
}

server {
    ...
    expires $expires;
    ...
}

  • The default value is set to off, which will not add any caching control headers. It's a safe bet for content where we have no particular requirements on how the cache should work.
  • For text/html, the value is set to epoch. This is a special value that results explicitly in no caching, which forces the browser to always ask if the website itself is up to date.
  • For text/css, application/javascript and application/octet-stream, the value has been set to max. This means the browser will cache these files for as long as possible, reducing the number of requests considerably given that there are typically many of these files.
  • The last two settings are for ~image/ and ~font/, which are regular expressions that will match all file types containing image/ or font/ in their MIME type name (like image/jpg, image/png or font/woff2). Like stylesheets, both pictures and web fonts on websites can be safely cached to speed up page-loading times, so this is also set to max.

The above is an example to work off for a caching policy more suited to your site.

More file types that you might want to include can be found here.

The NGINX Headers Module documentation gives more details on the expires field where you can set other values in days, hours or other variables.
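For example, instead of (or alongside) the map, you can attach fixed lifetimes to location blocks inside your server definition. The extensions and durations below are purely illustrative; tune them to how often your assets actually change:

```nginx
# Illustrative values only
location ~* \.(jpg|jpeg|png|webp|svg)$ {
    expires 30d;
}

location ~* \.(css|js)$ {
    expires 7d;
}
```

The expires directive also accepts values such as max, epoch and hour-based spans like 12h, per the module documentation.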

Restart NGINX and test the responses for various resources on your site:

$ curl -I https://your-site/some-page.html
...
Expires: Thu, 01 Jan 1970 00:00:01 GMT
Cache-Control: no-cache
...

$ curl -I https://your-site/some-image.jpg
...
Expires: Thu, 31 Dec 2037 23:55:55 GMT
Cache-Control: max-age=315360000
...
 

Conclusion

Page responsiveness is not only a crucial factor in search engine rankings but also in usability, which in turn leads to greater engagement (and hopefully conversions).

  • Enabling HTTP/2 will improve request/response times and allow higher-priority resources to be delivered first.
  • Adding gzip compression will greatly reduce bandwidth and delivery time. Adding Brotli compression takes considerably more work, but may be worth it where the extra compression will significantly cut your network traffic.
  • Configuring Cache-Control and Expires headers will greatly reduce the number of requests your server needs to handle (and so speed content delivery).
 


Thoughts, comments, questions welcome below, or please feel free to contact me directly.

 