CloudBit Load Balancers are a fully managed, highly available network load balancing service. Load balancers distribute traffic to groups of Instances or Kubernetes Clusters, which decouples the overall health of a backend service from the health of a single server to ensure that your services stay online.
A single Load Balancer can be configured to handle multiple protocols and ports. You can control traffic routing with configurable rules that specify the ports and protocols that the load balancer should listen on, as well as the way that it should select and forward requests to the backend servers.
Because CloudBit Load Balancers are network load balancers, not application load balancers, they do not support directing traffic to specific backends based on URLs, cookies, HTTP headers, etc.
Standard HTTP balancing directs requests based on standard HTTP mechanisms. The load balancer sets the X-Forwarded-For, X-Forwarded-Proto, and X-Forwarded-Port headers to give the backend servers information about the original request.
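A backend server behind the load balancer can recover the original request details from those headers. A minimal sketch (the header names come from the text above; the function and its parsing choices are illustrative):

```python
# Sketch: reconstructing the original client request details from the
# X-Forwarded-* headers set by the load balancer. Only the header names
# come from the docs; the function itself is illustrative.

def original_request_info(headers):
    """Return (client_ip, scheme, port) as seen by the load balancer.

    X-Forwarded-For may contain a comma-separated chain of proxies;
    the first entry is the original client.
    """
    forwarded_for = headers.get("X-Forwarded-For", "")
    client_ip = forwarded_for.split(",")[0].strip() or None
    scheme = headers.get("X-Forwarded-Proto", "http")
    port = int(headers.get("X-Forwarded-Port", 80))
    return client_ip, scheme, port

info = original_request_info({
    "X-Forwarded-For": "203.0.113.7, 10.0.0.2",
    "X-Forwarded-Proto": "https",
    "X-Forwarded-Port": "443",
})
print(info)  # ('203.0.113.7', 'https', 443)
```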
If user sessions depend on the client always connecting to the same backend, the load balancer can send a cookie to the client to enable sticky sessions.
HTTPS AND HTTP
You can balance secure traffic using HTTPS, which can be configured with either:
SSL termination, which handles the SSL decryption at the load balancer after you add your SSL certificate and private key.
SSL passthrough, which forwards encrypted traffic to your backend Instances. This is a good option for end-to-end encryption and for distributing the SSL decryption overhead, but you’ll need to manage the SSL certificates yourself.
TCP / UDP
TCP / UDP balancing is available for applications that do not speak HTTP. For example, deploying a load balancer in front of a database cluster like Galera would allow you to spread requests across all available machines.
PROXY protocol is a way to send client connection information (like origin IP addresses and port numbers) to the final backend server rather than discarding it at the load balancer. This information can be helpful for use cases like analyzing traffic logs or changing application functionality based on geographical IP.
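The v1 (human-readable) form of the PROXY protocol prepends a single text line to the connection before the actual payload. A hedged sketch of parsing it (field layout per HAProxy's proxy-protocol specification; the function name and return shape are illustrative):

```python
# Sketch: parsing a PROXY protocol v1 header line. A backend that
# enables PROXY protocol receives this line before the real payload.
# Field layout follows HAProxy's proxy-protocol spec; the function
# name and returned dict are illustrative.

def parse_proxy_v1(line: bytes):
    """Parse b'PROXY TCP4 <src> <dst> <srcport> <dstport>\\r\\n'.

    Returns a dict with the original client connection details, or
    None for 'PROXY UNKNOWN' (no information available).
    """
    if not line.startswith(b"PROXY ") or not line.endswith(b"\r\n"):
        raise ValueError("not a PROXY protocol v1 header")
    parts = line[:-2].decode("ascii").split(" ")
    if parts[1] == "UNKNOWN":
        return None
    _, proto, src, dst, sport, dport = parts
    return {
        "proto": proto,                       # TCP4 or TCP6
        "src": src, "src_port": int(sport),   # original client
        "dst": dst, "dst_port": int(dport),   # load balancer side
    }

hdr = b"PROXY TCP4 203.0.113.7 10.0.0.5 56324 443\r\n"
print(parse_proxy_v1(hdr))
```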
BALANCING ALGORITHMS
The load balancer supports three methods for selecting a backend:
Least connections. Requests are forwarded to the Instance with the fewest active connections.
Round robin. Each Instance receives requests in turn.
Source IP. Requests from the same source IP address are directed to the same Instance.
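The three selection strategies can be sketched over a small in-memory pool of backends (the class, backend names, and hashing choice are illustrative assumptions, not the service's actual implementation):

```python
# Sketch: the three backend-selection strategies described above.
# The Pool class and its data shapes are illustrative only.
import itertools
import hashlib

class Pool:
    def __init__(self, backends):
        self.backends = list(backends)
        self._rr = itertools.cycle(self.backends)
        self.active = {b: 0 for b in self.backends}  # open connection counts

    def round_robin(self):
        # Each backend receives requests in turn.
        return next(self._rr)

    def least_connections(self):
        # The backend with the fewest active connections wins.
        return min(self.backends, key=lambda b: self.active[b])

    def source_ip(self, client_ip):
        # Hash the client IP so the same client always maps to the same
        # backend (stable as long as the pool membership does not change).
        digest = hashlib.sha256(client_ip.encode()).digest()
        return self.backends[int.from_bytes(digest[:4], "big") % len(self.backends)]

pool = Pool(["vm-a", "vm-b", "vm-c"])
print([pool.round_robin() for _ in range(4)])  # ['vm-a', 'vm-b', 'vm-c', 'vm-a']
pool.active["vm-a"] = 2
print(pool.least_connections())                # vm-b
```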
Enable the Sticky sessions option to turn on session persistence. The load balancer generates a cookie and inserts it into each response; subsequent requests that include the cookie are sent to the same Instance.
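The cookie mechanism can be sketched as follows (the cookie name, class, and round-robin fallback are illustrative; a real load balancer would sign or encrypt the cookie value):

```python
# Sketch: cookie-based sticky sessions. On the first response the load
# balancer sets a cookie naming the chosen backend; later requests that
# present the cookie are pinned to that backend. The cookie name and
# plain-text value are simplified for illustration.
import itertools

COOKIE = "LB_STICKY"  # illustrative cookie name

class StickyBalancer:
    def __init__(self, backends):
        self._rr = itertools.cycle(backends)
        self.backends = set(backends)

    def route(self, cookies):
        """Pick a backend; returns (backend, set_cookie_header_or_None)."""
        pinned = cookies.get(COOKIE)
        if pinned in self.backends:
            return pinned, None          # existing session: keep the backend
        backend = next(self._rr)         # new session: pick one and pin it
        return backend, f"{COOKIE}={backend}"

lb = StickyBalancer(["vm-a", "vm-b"])
backend, set_cookie = lb.route({})       # first request: no cookie yet
print(backend, set_cookie)               # vm-a LB_STICKY=vm-a
print(lb.route({COOKIE: backend}))       # ('vm-a', None)
```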