Getting Started With DigitalOcean Load Balancers – Part 1

Recently, DigitalOcean released a new feature in addition to its core product of droplets: load balancers. Running a large website involves unique challenges. For example, you need to ensure that your infrastructure can handle sudden spikes in traffic. You also need redundancy, meaning there shouldn't be a single point of failure that can bring down your entire site. If you run your own DNS servers, for example, you should consider setting up a cluster that can tolerate one of them going down.

A large website is ideally served by multiple servers that receive requests based on a certain ruleset. This spreads or balances the load amongst them – and that’s where we get the name “load balancers”. Using them, you can configure multiple servers to receive TCP requests and have all of them share the load equally.
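The simplest such ruleset is round-robin: each request goes to the next server in the pool. A minimal sketch in plain Python (the backend addresses are placeholders, not real droplets):

```python
from itertools import cycle

# Hypothetical pool of backend droplets (addresses are placeholders).
backends = ["10.0.0.2:80", "10.0.0.3:80", "10.0.0.4:80"]

# Round-robin rule: hand each incoming request to the next server
# in the pool, so the load is spread evenly across all of them.
picker = cycle(backends)

# Simulate six incoming requests: each backend is picked twice, in order.
first_six = [next(picker) for _ in range(6)]
```

Real load balancers layer health checks and weighting on top of this, but the core idea of distributing requests across a pool is the same.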

Load Balancing Requirements with DigitalOcean

To create a working load balancer setup, you need the following at a minimum:

  1. At least two droplets;
  2. Each droplet has to be capable of receiving and servicing TCP requests.
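You can verify the second requirement yourself with a plain TCP connection attempt, which is essentially what a balancer's health check does. A minimal sketch in Python (the example droplet address is a placeholder):

```python
import socket

def is_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (placeholder droplet address):
# is_reachable("203.0.113.10", 80)
```

If a droplet fails a check like this, a load balancer would stop routing requests to it until it recovers.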

Static Assets

Load balancers vary in complexity and architecture. Let's say you have a lot of images on your site. You need to ensure that every server in your "cluster" can access those images – either via an external CDN, or by replicating them on each server. That raises the question of how you synchronize these assets when you add or remove some. This is a complex topic with no easy answers. Some people settle for placing their files and other static assets on an external storage service such as Amazon S3. Others use a mixed approach, with static JavaScript and image files placed on CDNs.

There are many different ways to configure load balancing, and each has its challenges.


Handling HTTPS

Almost every site these days is HTTPS enabled. So what does it mean when your request gets routed to another server? Again, there are different solutions. You can use a "pass-through" setup, where the load balancer forwards the still-encrypted request to whichever backend server is chosen. This is an "end-to-end" scenario: the load balancer doesn't decrypt anything, and you need to set up your SSL certificates on each server separately.

The other option is called "SSL termination". It means that you store the certificate and private key on the load balancer itself. Decryption happens at the load balancer, and the backend servers receive unencrypted traffic. Depending on your requirements and constraints, either approach may be acceptable.
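To make the termination side concrete, here is a minimal sketch using Python's standard `ssl` module (the certificate file names and the wrapping step are illustrative assumptions, not DigitalOcean specifics):

```python
import ssl

def make_termination_context(certfile=None, keyfile=None):
    """Build a server-side TLS context. With SSL termination, the
    certificate and private key live here, on the load balancer,
    rather than on each backend server."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if certfile:
        # e.g. certfile="lb-cert.pem", keyfile="lb-key.pem" (hypothetical paths)
        ctx.load_cert_chain(certfile, keyfile)
    return ctx

# A client connection accepted by the balancer would then be wrapped:
#   tls_sock = ctx.wrap_socket(raw_sock, server_side=True)
# and the decrypted bytes forwarded, unencrypted, to a backend droplet.
```

In the pass-through model, by contrast, no such context exists on the balancer at all – the encrypted bytes are relayed untouched, and each backend holds its own certificate.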

No Multi-Region Support for Now

As of this writing, DigitalOcean requires all the droplets in a load-balancing cluster to be in the same geographical region – the region you choose when setting up the cluster. This rules out advanced load balancing that routes each request to the nearest datacenter to minimize the round-trip time for each packet. According to the support staff, it's a feature they are considering for future releases.

Right now, however, the purpose of the DigitalOcean load balancer is to avoid having a single point of failure for a website, and to spread the load amongst many servers instead of just one. The added benefit of geolocation isn't available.

Now that we've seen how DigitalOcean load balancers work, it's time to take a look at how to set one up. We'll do that in part 2 of this series.
