Improve Response Times and SSL Security on Your NGINX Web Server

Introduction

SEO is a war of attrition fought over a constantly shifting battleground, so you take whatever wins you can get. One constant in this fight is content delivery time.

  • Loading speed is now a crucial metric in SEO, and the following changes are a big step towards improving your rankings.
  • With a little basic configuration you can vastly improve the performance of your server and reduce delivery times.

Here, I give an introduction to enabling HTTP/2, gzip compression and cache headers.

While we're configuring our NGINX server, I'll also show some minor changes that will greatly improve your SSL security.

Improve Response Times

HTTP/2

Overview

HTTP/2 has been around since 2015, so it's a little surprising that it's not in the default configuration given the multitude of benefits over HTTP/1.1. If you're not familiar with the difference between the protocol versions, here's an abstract from the HTTP/2 RFC:

HTTP/1.0 allowed only one request to be outstanding at a time on a given TCP connection. HTTP/1.1 added request pipelining, but this only partially addressed request concurrency and still suffers from head-of-line blocking. Therefore, HTTP/1.0 and HTTP/1.1 clients that need to make many requests use multiple connections to a server in order to achieve concurrency and thereby reduce latency.

Furthermore, HTTP header fields are often repetitive and verbose, causing unnecessary network traffic as well as causing the initial TCP congestion window to quickly fill. This can result in excessive latency when multiple requests are made on a new TCP connection.

HTTP/2 addresses these issues by defining an optimized mapping of HTTP's semantics to an underlying connection. Specifically, it allows interleaving of request and response messages on the same connection and uses an efficient coding for HTTP header fields. It also allows prioritization of requests, letting more important requests complete more quickly, further improving performance.

TL;DR? In a nutshell, the network benefits:

  • V1.0 - only one request at a time per connection; every request waits for the one in front to complete
  • V1.1 - pipelining helps a little, but clients still open multiple connections to achieve concurrency - high network overhead
  • V2 - full multiplexing of multiple requests and responses over a single connection, compression of headers, and prioritisation of requests so the most important are delivered first

Now for the good news: enabling it takes nothing more than adding a single word to your server config file.

Enable HTTP/2

For the following, I'm presuming you're only serving HTTPS. If not, you really should be; it's free and easy using Certbot if cost is the obstacle (a minimal invocation is sketched below). Regardless, the instructions that follow assume HTTPS.
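
If you need a certificate, a minimal Certbot run looks something like the following sketch (the domain names are placeholders, and I'm assuming the Certbot NGINX plugin is installed):

$ sudo certbot --nginx -d example.com -d www.example.com

The --nginx plugin obtains the certificate and edits your server block for you. With a certificate in place, open your site's config file: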

$ sudo nano /etc/nginx/sites-available/servername

In your server definition, modify the listen lines to include the http2 keyword:

listen [::]:443 ssl http2 ipv6only=on; 
listen 443 ssl http2;

Verify that HTTP/2 is Enabled

Whenever making changes to NGINX, it's a good idea to validate the new configuration before reloading the server:

$ sudo nginx -t

Once you've confirmed the configuration, reload the server with:

$ sudo systemctl reload nginx

Use curl to confirm that you're now serving HTTP/2:

$ curl -I -L https://your-website.com/
HTTP/2 200
server: nginx/1.18.0 (Ubuntu)
content-type: text/html; charset=utf-8
content-length: 46408
vary: Accept-Encoding
content-language: en
x-content-type-options: nosniff
referrer-policy: same-origin
x-frame-options: DENY
vary: Cookie

The first line of the response should confirm you're now serving HTTP/2, as above.

Important

If your client doesn't support HTTP/2, it will fall back to HTTP/1.1. This will happen if you run curl from a Windows Command Prompt or PowerShell. Running the same command from a Git Bash shell on the same machine will produce the expected HTTP/2 result.
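
If you want to be explicit about which protocol curl negotiates, you can force it either way (the URL is a placeholder):

$ curl -I --http2 https://your-website.com/
$ curl -I --http1.1 https://your-website.com/

If the first command still reports HTTP/1.1, your curl build lacks HTTP/2 support; curl -V lists HTTP2 under Features when it's available.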

Configure Compression

gzip

All major browsers accept content compressed with gzip, and it's the easiest method to set up since it's already included with NGINX. Even if you're adding Brotli, you'll still need gzip as a fallback, since older browsers are not compatible with Brotli.

The default NGINX configuration only compresses HTML, leaving your CSS, JavaScript, and other assets untouched.

You can test this with a simple curl request:

$ curl -H "Accept-Encoding: gzip" -I https://your-site/your-stylesheet.css

In the response headers, you will likely see Accept-Ranges: bytes. If gzip compression were already enabled for CSS, you would see Content-Encoding: gzip instead.

If it's not enabled, head into the NGINX config file:

$ sudo nano /etc/nginx/nginx.conf

Scroll down to the gzip settings, which will look something like the following by default:

. . .
##
# `gzip` Settings
#
#
gzip on;
gzip_disable "msie6";

# gzip_vary on;
# gzip_proxied any;
# gzip_comp_level 6;
# gzip_buffers 16 8k;
# gzip_http_version 1.1;
# gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
. . .

So, while gzip is enabled by default, pretty much every useful option is commented out.

Start off by:

  1. Enabling all the commented-out settings (i.e. deleting the '#')
  2. Adding a directive to ignore files smaller than 256 bytes (gzip_min_length 256;)
    Lighthouse only checks files larger than 1.4 kB, so you can increase this to as much as 1433 bytes and still pass those tests
  3. Adding some extra gzip_types for web fonts, icons, XML feeds, JSON structured data, and SVG images.

Once done, your settings will look like the following:

. . .
##
# `gzip` Settings
#
#
gzip on;
gzip_disable "msie6";

gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_min_length 256;
gzip_types
  application/atom+xml
  application/geo+json
  application/javascript
  application/x-javascript
  application/json
  application/ld+json
  application/manifest+json
  application/rdf+xml
  application/rss+xml
  application/xhtml+xml
  application/xml
  font/eot
  font/otf
  font/ttf
  image/svg+xml
  text/css
  text/javascript
  text/plain
  text/xml;
. . .

Reload NGINX again and confirm that the curl command now returns Content-Encoding: gzip for your CSS file (or any of the other file types just added to the list above). Try a .jpg image and it will still show as uncompressed; it isn't in the list because JPEG is already a compressed format.
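
To get a rough idea of the saving, you can compare the transfer size with and without the Accept-Encoding header (a quick sketch; the URL is a placeholder):

$ curl -s -o /dev/null -w '%{size_download} bytes\n' https://your-site/your-stylesheet.css
$ curl -s -o /dev/null -w '%{size_download} bytes\n' -H "Accept-Encoding: gzip" https://your-site/your-stylesheet.css

The second figure should be noticeably smaller than the first.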

That's it for gzip. Just enabling this was enough to lift a Lighthouse performance score from 45% to 96%.

Brotli

As mentioned before, while major browsers have supported Brotli for a few years now, support is not universal unless you're on recent builds (the Android browser only came on board in November 2021, which is surprising considering Brotli is a Google algorithm).

The advantage of Brotli is that it offers greater compression and faster pack/unpack times than gzip.

The disadvantage is that it's not part of NGINX yet (as of v1.20.0), so it requires downloading the source for both NGINX and the Brotli module from GitHub, compiling them, and installing the result on your server.

You'd have to weigh up whether the extra performance you'll squeeze out of Brotli over gzip is worth the process and the subsequent maintenance (you'll need to repeat the process every time you update NGINX).

If you're still keen, you can find the process here.
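
For reference, once the ngx_brotli module has been compiled and loaded, its directives mirror the gzip ones above. This is a rough sketch only - the module paths and values depend on your build:

# Loaded at the top of nginx.conf; paths depend on where your build installs the modules
load_module modules/ngx_http_brotli_filter_module.so;
load_module modules/ngx_http_brotli_static_module.so;

# Alongside the gzip settings
brotli on;
brotli_comp_level 6;
brotli_types text/plain text/css application/javascript application/json image/svg+xml;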

Enable Browser Caching

Browser caching tells the browser that it can reuse local copies of downloaded files instead of requesting them from the server again and again. To do this, you must introduce HTTP response headers that tell the browser how to behave. NGINX's headers module can help you accomplish this.

Important
This is not to be confused with content caching, which you might do in your content delivery platform.

Open a page on your site and, on the Network tab of your browser's dev tools, select a resource served from your NGINX server, such as a .jpg image. In the response headers, you'll see an etag header, e.g. etag: "61b7865f-5534". The value is a unique identifier for that particular version of the file.

By default, browsers will ask the server if the etag for a previously loaded file has changed, and if it can re-use it again.

You can see this by using the etag value you just found in a curl command:

$ curl -I -H 'If-None-Match: "61b7865f-5534"' https://resource-url

HTTP/2 304 Not Modified
Server: nginx/1.18.0 (Ubuntu)
Date: Tue, 14 Dec 2021 10:24:59 GMT
Last-Modified: Mon, 13 Dec 2021 16:33:06 GMT
Connection: keep-alive
ETag: "61b7865f-5534"

NGINX responds with 304 Not Modified and doesn't send the file over the network again.

This is good in that the file isn't re-sent, but it still involves a request-wait-response round trip for every resource to be loaded. For files that may change on the back end this is desirable, but for static resources you can bypass it.

By adding Cache-Control and Expires headers, you tell the browser to cache certain files locally without explicitly asking the server whether it's fine to do so.

Fortunately, the headers module is a core part of NGINX and doesn't need installing.

To add it to your site, open your site's config file:

$ sudo nano /etc/nginx/sites-available/your-server

Define your policy at the top of the file and add an expires directive to your server definition:

# Expires map
map $sent_http_content_type $expires {
    default                    off;
    text/html                  epoch;
    text/css                   max;
    application/javascript     max;
    application/octet-stream   max;
    ~image/                    max;
    ~font/                     max;
}

server {
    ...
    expires $expires;
    ...
}

  • The default value is set to off, which will not add any caching-control headers. It's a safe default for content where we have no particular requirements on how the cache should work.
  • For text/html, the value is set to epoch. This is a special value that results explicitly in no caching, which forces the browser to always ask if the website itself is up to date.
  • For text/css, application/javascript and application/octet-stream, the value has been set to max. This means the browser will cache these files for as long as possible, reducing the number of requests considerably given that there are typically many of these files.
  • The last two settings are for ~image/ and ~font/, which are regular expressions that will match all file types containing image/ or font/ in their MIME type name (like image/jpg, image/png or font/woff2). Like stylesheets, both pictures and web fonts on websites can be safely cached to speed up page-loading times, so this is also set to max.

The above is an example to work from when defining a caching policy suited to your site.

More file types that you might want to include can be found here.

The NGINX headers module documentation gives more details on the expires directive, where you can also set values in days, hours, or other units.
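
As an alternative to the map approach, you can also set fixed lifetimes per location. A minimal sketch (the file extensions and lifetime are just examples):

location ~* \.(jpg|jpeg|png|gif|ico|svg|woff2)$ {
    expires 30d;
    add_header Cache-Control "public";
}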

Restart NGINX and test the responses for various resources on your site:

$ curl -I https://your-site/some-page.html
...
Expires: Thu, 01 Jan 1970 00:00:01 GMT
Cache-Control: no-cache
...

$ curl -I https://your-site/some-image.jpg
...
Expires: Thu, 31 Dec 2037 23:55:55 GMT
Cache-Control: max-age=315360000
...

Improve SSL Security

Enable HTTP Strict Transport Security (HSTS)

Even though HTTP requests are automatically redirected to HTTPS, you can avoid that repeated redirection by turning on HTTP Strict Transport Security (HSTS). For a set amount of time after seeing an HSTS header, the browser will refrain from attempting an ordinary HTTP connection to the server and will only exchange data over an encrypted HTTPS connection.

This header also protects us from protocol downgrade attacks.

Open the Nginx configuration file in your editor:

$ sudo nano /etc/nginx/nginx.conf

Add the following add_header line to the end of the http section to enable HSTS:

http {
...
    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
    add_header Strict-Transport-Security "max-age=15768000" always;
}

max-age is in seconds; the figure above is equivalent to six months.

By default, this header does not apply to subdomain requests. Add the includeSubDomains parameter at the end of the line if you have subdomains and want HSTS to apply to each of them:

add_header Strict-Transport-Security "max-age=15768000; includeSubDomains" always;
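
After reloading NGINX, you can confirm the header is being sent; you should see something like the following (the URL is a placeholder):

$ curl -s -D - -o /dev/null https://your-website.com/ | grep -i strict-transport-security
strict-transport-security: max-age=15768000; includeSubDomains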

Remove Old and Insecure Cipher Suites

To ensure secure SSL/TLS configuration in NGINX, you should consider using a strong set of cipher suites.

The HTTP/2 specification includes a blacklist of old and insecure ciphers (the cryptographic algorithms that determine how transferred data is encrypted). We'll trim the NGINX cipher list to those compatible with HTTP/2, using a recommended set of SSL cipher suites that provides a good balance between security and compatibility:

Third Party Certificates

Depending on how you configured your TLS/SSL certificates, you will normally see an ssl_ciphers definition in your /etc/nginx/sites-available/servername config file.

Amend it to the following:

ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_prefer_server_ciphers on;
ssl_ecdh_curve secp384r1;

This configuration prioritizes forward secrecy (using elliptic curve Diffie-Hellman for key exchange) and enables AES encryption with 256-bit keys.

Note
It's worth reviewing the above list before implementing it - this was the recommended list for maximum browser compatibility as at May 2023. If you run the SSL test (see below), you'll see which browsers need these ciphers and can decide for yourself which to keep.

The ssl_prefer_server_ciphers on; directive gives priority to the server's preferred cipher suites over the client's preferences.

ssl_ecdh_curve secp384r1; specifies the elliptic curve to use for the Diffie-Hellman key exchange. Using the secp384r1 curve provides a good balance between security and performance.

Important
You must match the curve to your SSL key. For example:
- For P-256 keys, the ssl_ecdh_curve MUST be secp256r1
- For P-384 keys, the ssl_ecdh_curve MUST be secp384r1
- For P-521 keys, the ssl_ecdh_curve MUST be secp521r1
Be sure to use the correct curve for your certificate before using this directive.
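
If you're not sure which key type your certificate uses, openssl can tell you. A sketch (substitute the path to your own certificate); for a P-256 key the output would look like this, meaning secp256r1 is the curve to use:

$ sudo openssl x509 -in /path/to/your/cert.pem -noout -text | grep -E "Public-Key|ASN1 OID"
                Public-Key: (256 bit)
                ASN1 OID: prime256v1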

Certbot

If you don't see ssl_ciphers and used Certbot to obtain your certificates, you will see the following line instead:

include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot

Comment out this line and replace it with the ssl_ciphers definition above.

Warning
Do not edit options-ssl-nginx.conf, as doing so will break Certbot's auto-update.

Self-signed Certificates

If you used self-signed certificates or used a certificate from a third party and configured it according to the prerequisites, open the file /etc/nginx/snippets/ssl-params.conf and replace ssl_ciphers with the definition above.

TLS Session Resumption

The TLS session resumption feature typically speeds up client reconnections since a complete TLS handshake is not required. Instead, the connection's legitimacy is checked using a value remembered from a prior session. Session resumption, however, violates perfect forward secrecy if the server does not correctly rotate or refresh its secrets.

It is possible to steal active TLS sessions from other users if the TLS session resumption feature is set incorrectly.

To correctly add session resumption, include the following lines in your https server block in /etc/nginx/sites-available/servername:

ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1h;

To disable session resumption entirely (the more secure option), omit the cache settings above and use the following instead:

ssl_session_tickets off;
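
Either way, you can observe the behaviour with openssl's s_client, which reconnects several times and reports whether each session was New or Reused (the hostname is a placeholder, and the exact output varies with your OpenSSL and TLS versions):

$ echo | openssl s_client -connect your-website.com:443 -reconnect 2>/dev/null | grep -E "^(New|Reused)"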

Specify the SSL/TLS protocols to use

In the same sites-available config file, set the TLS versions accepted with the following:

ssl_protocols TLSv1.2 TLSv1.3;

This restricts connections to TLSv1.2 and TLSv1.3, the current secure versions. SSLv2/3 and TLSv1.0/1.1 are deprecated and should not be used.
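
You can verify that the older versions are now refused by forcing curl to use TLS 1.1, which should fail with an SSL error (exit code 35), while a plain curl -I succeeds (the URL is a placeholder):

$ curl -I --tlsv1.1 --tls-max 1.1 https://your-website.com/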

Add X-XSS-Protection

The X-XSS-Protection header, often known as the cross-site scripting header, is used to mitigate cross-site scripting attacks. Browsers such as Internet Explorer and Safari shipped an XSS filter that is enabled by default (Chrome has since removed its XSS Auditor). When a reflected cross-site scripting (XSS) attack is detected, this header stops the affected page from loading.

Depending on your needs, you can set the header in one of three ways.

  1. X-XSS-Protection: 0: Disables the filter completely.
  2. X-XSS-Protection: 1: Enables the filter, but only sanitises potentially harmful scripts.
  3. X-XSS-Protection: 1; mode=block: Enables the filter and blocks the page entirely if an attack is detected.

To add option 3, for example, open /etc/nginx/nginx.conf and, in the http block, add the following below the HSTS header:

add_header X-XSS-Protection "1; mode=block";

Content Security Policy (CSP)

Content Security Policy (CSP) is a security mechanism that helps protect web applications against various types of attacks, such as cross-site scripting (XSS) and data injection attacks. When implemented with NGINX, CSP allows you to define a set of policies that dictate which types of content can be loaded and executed on your website.

By configuring CSP in NGINX, you can specify the allowed sources for content such as scripts, stylesheets, images, fonts, and more. This helps mitigate the risk of malicious content being injected into your web pages. NGINX's role is to send the policy header with each response; the browser then enforces the rules and blocks any content that violates them.

While CSP can significantly enhance the security of your web application, it's important to note that configuring CSP properly can be complex. Misconfigurations can lead to unexpected consequences, such as blocking legitimate resources or rendering your website unusable.

Therefore, it's crucial to thoroughly understand the CSP specification and carefully test and validate your configuration to ensure it doesn't inadvertently impact the functionality or user experience of your website. Additionally, misconfiguration may also result in a false sense of security, leaving your application vulnerable to attacks.

Here are a few potential dangers and complexities associated with CSP misconfiguration:

  • Overly restrictive policies: If your CSP rules are too strict, they may unintentionally block legitimate resources and break the functionality of your website or web application.
  • Insecure directives: Incorrectly specifying directives or allowing unsafe content sources can undermine the effectiveness of CSP, leaving your website vulnerable to attacks.
  • Lack of granularity: Defining a granular CSP policy can be challenging, especially for complex web applications with multiple components and external dependencies. Failure to account for all relevant sources may lead to bypasses or security gaps.
  • Compatibility issues: Different web browsers and versions may have varying levels of support for CSP directives. It's important to test your CSP configuration across multiple browsers to ensure consistent and effective protection.

To mitigate the risks associated with CSP misconfiguration, it's recommended to carefully study the CSP documentation, leverage CSP reporting mechanisms to monitor violations, and conduct thorough testing before deploying CSP in production. Regularly reviewing and updating your CSP policies as your website evolves is also essential to maintain an effective security posture.

A fairly comprehensive coverage for CSP can be found here. How you configure CSP (or if you do at all) will depend entirely on the nature of your site.
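
For illustration only, a deliberately simple policy added via the headers module might look like the following (the sources are placeholders; a real policy has to be built around what your site actually loads):

add_header Content-Security-Policy "default-src 'self'; script-src 'self'; style-src 'self'; img-src 'self' data:; font-src 'self'; frame-ancestors 'none'" always;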

Remember, while CSP can be a valuable security measure, it requires careful consideration and ongoing maintenance to ensure proper configuration and protection without unintended negative consequences.

Add a Certification Authority Authorization (CAA)

Not strictly an HTTP/2 mandate, nor an NGINX configuration, but often overlooked.

Make sure you have added a CAA record to your DNS that specifies who the authorised CA for your domain is.

If you're not sure how to set this up, you can use the CAA record generator on SSLMate.

An example for a site whose certificates are issued via Certbot (i.e. by Let's Encrypt) would be:

example.com. IN CAA 128 issue "letsencrypt.org"
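
Once the record has propagated, you can check it with dig (the domain is a placeholder):

$ dig example.com CAA +short
128 issue "letsencrypt.org"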

Test the SSL Configuration

The SSL Server Test provided by SSL Labs performs a comprehensive assessment of an SSL/TLS server's configuration to evaluate its security posture. The test examines various aspects, including the SSL/TLS protocol support, key exchange algorithms, cipher suites, certificate validity and strength, secure renegotiation, secure session resumption, HTTP Strict Transport Security (HSTS) headers, and server vulnerabilities. It also checks for the implementation of recommended security features and the proper configuration of server-side settings. The test results in a detailed report that assigns a grade to the server's overall security level, helping administrators identify potential weaknesses and implement necessary improvements to enhance the security and reliability of their SSL/TLS server configuration.

If everything is properly configured, you should receive an A+ for security.

Alternatively, for a quicker test, use the one at EXPERTE.com. The tests verify the validity and integrity of the SSL certificate, check for the presence of trusted certificate authorities, and assess the strength of the encryption protocols and cipher suites used. Additionally, the tests evaluate the implementation of secure HTTP headers, such as HTTP Strict Transport Security (HSTS), and scan for potential vulnerabilities or misconfigurations that could expose the website to security risks.

The report from this site is a lot easier to read than the SSL Labs test, and while not as comprehensive, it still covers the essentials that most server administrators need to know.

Conclusion

Page responsiveness is a crucial factor not only in search engine rankings but also in usability, which in turn leads to greater engagement (and hopefully conversions).

  • Enabling HTTP/2 will reduce request/response times and allow higher-priority resources to be delivered first.
  • Adding gzip compression is easy to implement and will greatly reduce bandwidth and delivery time.
  • Adding Brotli compression takes considerably more work, but may be worth it where the extra compression will significantly reduce your network traffic.
  • Configuring Cache-Control and Expires headers will greatly reduce the number of requests your server needs to handle (and so speed up content delivery).
  • With a few quick changes, you can improve the security of your HTTPS connection. On this page, we removed older, insecure ciphers blacklisted by the HTTP/2 protocol. We also enabled HTTP Strict Transport Security (HSTS) and added a Certification Authority Authorization (CAA) record to our DNS to restrict the issuance of SSL certificates to a named authority.

  Please feel free to leave any questions or comments below, or send me a message here