L4 vs L7 Load Balancing

Mohak Puri
Published in Level Up Coding
5 min read · Apr 19, 2020

Load balancing is one of the primary features of a proxy. The layer at which a load balancer operates determines the capabilities it can offer. These capabilities and the internals of L4 and L7 load balancers are the focus of this article.

Layer4 Load Balancers

TCP/UDP passthrough L4 Load balancer

L4, as the name suggests, works on Layer 4 (and Layer 3) of the OSI model. When a client makes a request, it creates a TCP connection with the load balancer. The load balancer then uses the same TCP connection that the client created with it to connect to one of the upstream servers.

The transport layer adds source and destination ports; the network layer adds source and destination IPs. (These layers do a lot more than this.)

However, there is a difference. The source and destination IP of each packet is changed by the load balancer using NAT (Network address translation).

   Source      Data      Dest            Source     Data      Dest
+-----------+---------+-------+       +-------+---------+-----------+
| Client IP | Segment | LB IP |  -->  | LB IP | Segment | Server IP |
+-----------+---------+-------+       +-------+---------+-----------+
Changing source and destination IP for every request packet

When a response is received from the server, the same translation is performed again at the load balancer.

   Source      Data      Dest            Source     Data      Dest
+-----------+---------+-------+       +-------+---------+-----------+
| Server IP | Segment | LB IP |  -->  | LB IP | Segment | Client IP |
+-----------+---------+-------+       +-------+---------+-----------+
Changing source and destination IP for every response packet
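To reverse the translation on response packets, the load balancer has to remember which client each flow belongs to. Here is a toy sketch of that bookkeeping (illustrative only — real NAT happens in the kernel or the proxy’s data plane, and all names below are made up):

```java
import java.util.HashMap;
import java.util.Map;

// Toy NAT table: maps the source port the LB uses toward the
// upstream back to the original client, so response packets can
// be rewritten with the right destination.
class NatTable {
    private final Map<Integer, String> portToClient = new HashMap<>();
    private int nextPort = 40000;

    // Request path: allocate an LB-side port and remember the client.
    int rewriteRequest(String clientIpAndPort) {
        int port = nextPort++;
        portToClient.put(port, clientIpAndPort);
        return port;
    }

    // Response path: look up which client this flow belongs to.
    String rewriteResponse(int lbPort) {
        return portToClient.get(lbPort);
    }
}
```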

There is another type of L4 load balancer, known as a TCP/UDP termination load balancer, where there are two separate TCP connections: one between the client and the load balancer, and one between the load balancer and the upstream server.

When using L4 load balancers, we are unaware of the data. This means we cannot make any decisions based on the data in our request. The only things we have are IPs (source and destination) and ports.

Consider the following request

curl -X GET http://apis.pay.com/v1/payments/${paymentId} -H 'Authorization:Basic fdklakglkadskl='

Let’s say you want to return a 401 when the Authorization header is empty, or you want to route the call to a service based on the path. With L4 load balancers this is not possible, as you don’t have access to the request data.

Keep-Alive connection

Also, load balancing multiplexed (HTTP/2 streams) and keep-alive protocols is an issue. (Multiplexing is sending multiple requests over a single connection, and keep-alive is keeping the connection open for some time.) Consider a case where two clients A and B make requests to a load balancer with two upstream servers C and D (assume keep-alive connections). Let’s say A is connected to server C and B is connected to server D. If A makes 1 RPS and B makes 50 RPS, then D is handling 50x more requests than server C, which defeats the purpose of load balancing.

Cons

  1. No smart load balancing
  2. Poor balancing with multiplexed/keep-alive connections
  3. No TLS termination (Good or Bad you decide!)

L4 load balancers are not actually L4; they are a combination of L3 and L4, so you can actually call them L3/L4 load balancers.

Layer7 Load Balancers

L7 Load balancer

L7, as the name suggests, works on Layer 7 (and Layers 5 and 6) of the OSI model. When a client makes a request, it creates a TCP connection with the load balancer. The load balancer then creates a new TCP connection with one of the upstream servers. Thus, there are 2 TCP connections, as compared to 1 in a TCP/UDP passthrough L4 load balancer.

Since we are at layer7, we are aware of the data in our request. This allows us to perform a variety of operations like

  1. Authentication — 401 if some header is not present
  2. Smart Routing — Route /payments call to a particular upstream
  3. TLS termination

Keep-Alive connection

In the case of multiplexed/keep-alive protocols, L7 load balancers work like a charm. An L7 load balancer creates a TCP connection with every upstream for a single client connection, rather than choosing a single upstream. This means that when A creates a connection with the load balancer, the load balancer creates two connections: one with C and one with D.
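The per-request upstream choice can be sketched as a simple round-robin picker (a hypothetical helper, not Envoy’s actual implementation):

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Picks an upstream per request, not per client connection, so a
// chatty keep-alive client still spreads its load across C and D.
class RoundRobinPicker {
    private final List<String> upstreams;
    private final AtomicInteger next = new AtomicInteger();

    RoundRobinPicker(List<String> upstreams) {
        this.upstreams = upstreams;
    }

    String pick() {
        int i = Math.floorMod(next.getAndIncrement(), upstreams.size());
        return upstreams.get(i);
    }
}
```

Calling pick() for every request on a kept-alive connection alternates between the upstreams, which is exactly what fixes the 1-RPS-vs-50-RPS imbalance described above for L4.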

L7 load balancers are not actually L7; they are a combination of L5, L6, and L7, so you can actually call them L5-through-L7 load balancers.

Demo

For the demo, I am going to use the Envoy proxy to demonstrate a simple example.

Return 401 if the Authorization header is not present in the request. If the request path matches /ping, then return a Pong response. All other paths should be ignored. The 401 should be returned even before the request reaches the service.

For running envoy locally, we are going to use docker. Here is what the Dockerfile and envoy config look like.

Dockerfile
envoy.yaml
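The original gists are not embedded here, so below is a rough sketch of what the envoy.yaml could look like for this demo. This is Envoy v2-era syntax and my own reconstruction — the exact config in the repo may differ, so treat the filter names and matcher fields as assumptions:

```yaml
static_resources:
  listeners:
  - address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          stat_prefix: ingress_http
          route_config:
            virtual_hosts:
            - name: ping_pong
              domains: ["*"]
              routes:
              # 401 before the request ever reaches the service
              - match:
                  prefix: "/"
                  headers:
                  - name: Authorization
                    present_match: true
                    invert_match: true
                direct_response: { status: 401 }
              # route /ping to the backend; everything else falls through to 404
              - match: { path: "/ping" }
                route: { cluster: ping_pong_service }
          http_filters:
          - name: envoy.router
  clusters:
  - name: ping_pong_service
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    hosts: [{ socket_address: { address: host.docker.internal, port_value: 8080 }}]
```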

Now let’s create a basic ping pong application

Ping Pong app
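The ping-pong app gist is not embedded either; a minimal stand-in using the JDK’s built-in HTTP server could look like this (the class name and port handling are my assumptions, not necessarily the original code):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal ping-pong server: GET /ping -> 200 "pong".
public class PingPong {
    public static void main(String[] args) throws Exception {
        int port = args.length > 0 ? Integer.parseInt(args[0]) : 8080;
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/ping", exchange -> {
            byte[] body = "pong".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        System.out.println("listening on " + port);
    }
}
```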

You can either build a docker image locally using the Dockerfile, or use the image I have already pushed to Docker Hub.

Now let’s run the docker container using the following command:

docker run -p 80:80 mohak1712/envoy

Once the application and envoy are running, we can do a basic curl request

200 & 401 response code
404 & 401 response code

One interesting line in all the output is “Connection #0 to host localhost left intact”. This enables a client to use the same connection for subsequent requests.

Remember your connection is with Envoy and not with your backend server.

For testing keep-alive connections, I’ll start the Java server on two different ports, update envoy.yaml to include both servers in the cluster, and return the port as part of the response.

clusters:
- name: ping_pong_service
  connect_timeout: 0.25s
  type: strict_dns
  lb_policy: round_robin
  hosts: [{ socket_address: { address: host.docker.internal, port_value: 8080 }},
          { socket_address: { address: host.docker.internal, port_value: 8181 }}]
Keep-alive connections

If you notice, we got responses from two different servers (8080 and 8181). Thus, even on keep-alive connections, requests are distributed among the upstream services (backend servers).

That’s about it! Thank you for reading, and I hope you enjoyed the article. If you did, make sure to give it a clap :)

Also, I would highly recommend going through this article

You can also follow me on Medium and Github. 🙂


Lead Software Engineer @INDmoney | ex GO-JEK | GSoC 2018 @openMF | Mobile | Backend | mohak1712 everywhere