Nginx Load Balancer

Nginx is a well-known open-source web server and reverse proxy that can also act as a load balancer. Setting up an Nginx load balancer entails configuring Nginx to distribute incoming traffic across multiple backend servers, ensuring effective resource utilisation and improving your web application's overall reliability and performance.


Here are the steps to set up an Nginx load balancer:


1. Install Nginx:


Make sure Nginx is installed on the machine that will act as the load balancer. You can typically install it using the package manager for your operating system.

[root@siddhesh ~]# yum install nginx

2. Configure Backend Servers:


Configure the backend servers that will be handling the actual application. These servers should be set up and running with your application.

[root@siddhesh ~]# cat /etc/nginx/conf.d/server1.conf
server {
    listen 8005;
    server_tokens off;
    server_name _;
    client_max_body_size 35M;
    charset utf-8;
    large_client_header_buffers 4 16k;
    root /usr/share/nginx/server1;
    index index.html index.php index.cgi;
}
[root@siddhesh ~]# cat /etc/nginx/conf.d/server2.conf
server {
    listen 8006;
    server_tokens off;
    server_name _;
    client_max_body_size 35M;
    charset utf-8;
    large_client_header_buffers 4 16k;
    root /usr/share/nginx/server2;
    index index.html index.php index.cgi;
}
[root@siddhesh ~]# cat /etc/nginx/conf.d/server3.conf
server {
    listen 8007;
    server_tokens off;
    server_name _;
    client_max_body_size 35M;
    charset utf-8;
    large_client_header_buffers 4 16k;
    root /usr/share/nginx/server3;
    index index.html index.php index.cgi;
}
[root@siddhesh ~]# cat  /usr/share/nginx/server1/index.html
Application 1
[root@siddhesh ~]# cat  /usr/share/nginx/server2/index.html
Application 2
[root@siddhesh ~]# cat /usr/share/nginx/server3/index.html
Application 3
[root@siddhesh ~]#

The configuration files above (server1.conf, server2.conf, and server3.conf) define three separate Nginx server blocks, each configured to listen on a different port (8005, 8006, and 8007, respectively).
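If you want to experiment with this layout before provisioning real backends, a few throwaway HTTP servers can stand in for the three server blocks. The Python sketch below is purely illustrative (it lets the OS pick free ports rather than using 8005-8007) and simply mimics the one-line index.html responses shown above:

```python
# Throwaway stand-ins for the three backends (illustrative only -- the
# article's real backends are Nginx server blocks on ports 8005-8007;
# here the OS picks free ports for us).
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

def make_handler(body):
    """Build a handler class that answers every GET with a fixed body."""
    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            data = body.encode()
            self.send_response(200)
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)

        def log_message(self, *args):
            pass  # keep the demo quiet

    return Handler

servers = []
for n in (1, 2, 3):
    srv = ThreadingHTTPServer(("127.0.0.1", 0), make_handler(f"Application {n}"))
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    servers.append(srv)

# Each stand-in answers with its own marker, like the index.html files above.
responses = [
    urllib.request.urlopen(f"http://127.0.0.1:{s.server_address[1]}").read().decode()
    for s in servers
]
print(responses)   # ['Application 1', 'Application 2', 'Application 3']

for s in servers:
    s.shutdown()
```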


3. Configure Nginx Load Balancer: Edit the Nginx configuration file (/etc/nginx/nginx.conf) to include the load balancing configuration.

[root@siddhesh ~]# cat /etc/nginx/nginx.conf
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
    worker_connections 1024;
}
http {
    upstream appstack {
        server 127.0.0.1:8005 weight=1;
        server 127.0.0.1:8006 weight=1;
        server 127.0.0.1:8007 weight=1;
    }
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 4096;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    # Load modular configuration files from the /etc/nginx/conf.d directory.
    # See http://nginx.org/en/docs/ngx_core_module.html#include
    # for more information.
    include /etc/nginx/conf.d/*.conf;
    server {
        listen 80;
        location / {
            proxy_pass http://appstack;
        }
    }
}
[root@siddhesh ~]#

In this example, the upstream block defines a group of backend servers named appstack. The server block within the http block configures the actual web server, and the location block specifies how requests should be proxied to the backend servers.
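By default Nginx rotates through the upstream servers round-robin, honouring the weight values. If your application needs a different policy, the standard ngx_http_upstream_module directives can be dropped into the same block; the fragment below is a sketch of commonly used options, not part of the configuration above:

```nginx
# Sketch: the same upstream with a few standard ngx_http_upstream_module options.
upstream appstack {
    least_conn;                     # pick the server with the fewest active connections
    # ip_hash;                      # alternative: pin each client IP to one backend

    server 127.0.0.1:8005 weight=2 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8006 weight=1;
    server 127.0.0.1:8007 backup;   # used only when the primary servers are down
}
```

max_fails and fail_timeout give you basic passive health checking: a backend that fails repeatedly is taken out of rotation for the fail_timeout window.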


4. Restart Nginx:


After making changes to the configuration, restart Nginx to apply the new settings. It is a good habit to run "nginx -t" first, which checks the configuration for syntax errors before the restart takes the load balancer down.

[root@siddhesh ~]# systemctl restart nginx

5. Testing:


Test the setup by accessing the Nginx load balancer's IP address or domain name in a web browser and verify that requests are distributed among the backend servers. You can also use the bash one-liner below:

[root@siddhesh ~]# while true; do   elinks --dump http://localhost; sleep 1; done
   Application 1
   Application 2
   Application 3
   Application 1
   Application 2
   Application 3
   Application 1
   Application 2
   Application 3
   Application 1
   Application 2
   Application 3
   Application 1
   Application 2
   Application 3
[root@siddhesh ~]#

As you can see, requests are forwarded in turn to each host defined in the upstream pool, which is Nginx's default round-robin behaviour.
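For intuition about why the output cycles so evenly, the selection logic can be modelled in a few lines. This is an illustrative Python sketch of smooth weighted round-robin, the scheme Nginx's default balancing is based on, not Nginx's actual code:

```python
# Illustrative model of smooth weighted round-robin selection:
# each round, every server's running score grows by its weight,
# the highest score wins, and the winner is penalised by the total weight.
def smooth_wrr(servers, rounds):
    """servers: dict of name -> weight. Returns the pick made each round."""
    current = {name: 0 for name in servers}
    total = sum(servers.values())
    picks = []
    for _ in range(rounds):
        for name, weight in servers.items():
            current[name] += weight
        chosen = max(current, key=current.get)
        current[chosen] -= total
        picks.append(chosen)
    return picks

# Equal weights (as in the article's upstream) give a strict rotation:
print(smooth_wrr({"8005": 1, "8006": 1, "8007": 1}, 6))
# ['8005', '8006', '8007', '8005', '8006', '8007']
```

With unequal weights the same loop interleaves the heavier server smoothly instead of sending bursts to it, which is why weighting in the upstream block does not clump traffic.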