Load Balancing Without Giving Away the Keys

Distributing the load of a web application, and especially of an API endpoint, across multiple nodes using a load balancer (LB) is highly useful: you get better performance, smoother software rollouts and resilience against node failure.

If an application node becomes unavailable due to failure or planned maintenance, the LB notices that it no longer responds and stops sending traffic to it.

The LB itself can go down, too, but since LBs can be made stateless, failover and redundancy are easier to achieve, e.g. via CARP1 or anycast. Many cloud providers also offer managed load balancers.

Since HTTPS has become the standard even for non-private data transfers, the management of TLS certificates and keys needs to be considered as well. Operators often opt to terminate TLS at the load balancer and send traffic to the nodes via unencrypted HTTP, compromising confidentiality on the internal network.

TLS can also be handled by the nodes themselves, but this poses two challenges: all nodes need the same key and certificate, and, if you’re acquiring the certificates from Let’s Encrypt or another ACME-compatible CA, each node must be able to serve the appropriate authorization challenge.2

For serving HTTPS directly from a Go application, there is the certmagic library, which also powers the automatic TLS management in Caddy. By default, it manages key generation and certificate acquisition and wraps the HTTP server with TLS. The storage for the keys is flexible: normally they’re stored on disk, but you can plug in a different adapter or write your own. There is a third-party S3 adapter that lets you store your certs, keys and challenges on Amazon S3. If you configure all your nodes to use the same S3 bucket, it doesn’t matter which node the load balancer asks to serve the challenge: all of them can answer, because they pull the challenge via the API.
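Wiring this up is mostly configuration. The sketch below uses certmagic’s actual top-level API (`certmagic.HTTPS`, `certmagic.Default.Storage`), but the S3 adapter shown in the comment is a stand-in: the real third-party package has its own import path and constructor, and the domain and e-mail address are placeholders.

```go
package main

import (
	"log"
	"net/http"

	"github.com/caddyserver/certmagic"
)

func main() {
	// Point certmagic at a shared storage backend so every node behind
	// the LB sees the same keys, certs and ACME challenges. The adapter
	// below is hypothetical; substitute the constructor of whichever
	// S3 storage implementation you actually use:
	//
	//   certmagic.Default.Storage = s3storage.New("my-cert-bucket")

	certmagic.DefaultACME.Email = "ops@example.com" // placeholder contact address
	certmagic.DefaultACME.Agreed = true             // agree to the CA's terms

	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello over TLS\n"))
	})

	// certmagic.HTTPS obtains and renews certificates automatically and
	// serves the handler on ports 80 and 443.
	log.Fatal(certmagic.HTTPS([]string{"example.com"}, mux))
}
```

Run the same binary with the same storage configuration on every node; whichever one receives the ACME validation request can answer it.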

Keys and certs on S3

Personally, I found it unacceptable to store all the keys in plain text on someone else’s hard drive. Perfect forward secrecy prevents recorded HTTPS traffic from being decrypted after a key is compromised, but with a third-party service it’s impossible to determine whether such a compromise has taken place.

This is why I built a custom certificate storage for certmagic, which stores the keys, certs and challenges on any S3-compatible store (e.g. Backblaze, DigitalOcean Spaces or your own Minio instance), but encrypts everything using NaCl’s secretbox3 before sending it off.


You don’t have to trust the provider’s claims about “at rest” encryption, and you don’t have to implement highly available storage yourself. You just spend a few pennies per month on any S3-compatible storage, without lock-in.

This allows you to serve HTTPS without breaking encryption mid-way, and it offloads the TLS work to the nodes instead of the LB having to do it all.


  1. FreeBSD manual page on CARP

  2. To be pedantic, only if you’re using the TLS-ALPN or HTTP challenge. You can also use DNS validation, if your DNS software or zone provider supports it.

  3. More specifically, the secretbox implementation in Go