How to load balance a website with NGINX and multi-location VPS Hosting

5 min read - May 15, 2025


Table of contents

  • How to load balance a website with NGINX and multi-location VPS
  • Step 1: Deploy VPS servers in different regions
  • Step 2: Set up a VPS to act as a load balancer
  • Step 3: Addressing the single point of failure
      • The problem
      • Solutions
          • Option 1: High availability with a floating IP
          • Option 2: DNS-level load balancing
          • Option 3: Anycast IP (advanced)
  • Step 4: Health checks and failover logic
  • Step 5: Point your domain to the load balancer
  • Optional: Geo-location-aware routing
  • Final thoughts


Distribute website traffic across VPS servers in multiple locations using NGINX. Learn how to configure load balancing, avoid single points of failure, and improve performance.

How to load balance a website with NGINX and multi-location VPS

Load balancing your website across multiple VPS instances in different geographical locations can improve performance, reduce latency, and enhance redundancy. By using NGINX as a reverse proxy, you can distribute traffic between backend servers, each hosting a copy of your website or application.

In this guide, we’ll walk through the setup process, highlight best practices, and address the common pitfall of introducing a single point of failure with the reverse proxy—along with solutions to mitigate it.


Step 1: Deploy VPS servers in different regions

Start by deploying VPS instances in multiple geographical locations: for example, one in New York, one in Frankfurt, and one in Singapore. Each VPS should run:

  • A copy of your website or application
  • NGINX (if also used as a local web server)
  • SSH access for setup and maintenance

Ensure content and configurations are consistent across all servers.
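One lightweight way to keep content in sync is an rsync push from a staging or build machine to every backend. The hostnames, the deploy user, and the paths below are placeholders for illustration, not a prescribed layout:

```shell
#!/bin/sh
# Sketch: push the local web root to each backend with rsync.
# Hostnames, the "deploy" user, and paths are assumptions -- adjust to your setup.

BACKENDS="vps1.example.com vps2.example.com vps3.example.com"

# Build the rsync command for one backend. It is echoed so the run can be
# reviewed first; pipe the output to sh (or drop the echo) to execute.
sync_cmd() {
    echo "rsync -az --delete /var/www/html/ deploy@$1:/var/www/html/"
}

for host in $BACKENDS; do
    sync_cmd "$host"
done
```

Running this from cron (or a CI job) after each deployment keeps all nodes serving identical content.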


Step 2: Set up a VPS to act as a load balancer

Choose one VPS to act as your reverse proxy and load balancer, or provision a new one for this purpose. This server will route traffic to the backend VPS nodes.

Use a basic NGINX reverse proxy configuration that defines an upstream group and proxies incoming requests to your backend nodes.

Here's an example NGINX configuration for your proxy:

http {
    # Backend pool; requests are distributed round-robin by default
    upstream backend_servers {
        server vps1.example.com;
        server vps2.example.com;
        server vps3.example.com;
    }

    server {
        listen 80;

        location / {
            # Forward to the pool, passing the original host and client IP
            proxy_pass http://backend_servers;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
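By default, NGINX distributes requests round-robin. The upstream block also accepts other balancing strategies and per-server parameters; a variant using the same hypothetical hostnames might look like:

```nginx
upstream backend_servers {
    least_conn;                        # pick the server with the fewest active connections
    server vps1.example.com weight=2;  # receives roughly twice the traffic
    server vps2.example.com;
    server vps3.example.com max_fails=3 fail_timeout=30s;  # marked down after 3 failed attempts
}
```

The `weight`, `max_fails`, and `fail_timeout` parameters are standard NGINX upstream directives; tune them to match the capacity and reliability of each node.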

Step 3: Addressing the single point of failure

The problem

The VPS hosting the NGINX reverse proxy becomes a single point of failure. If this server goes down, your entire site becomes unavailable—even though your backend servers are still running.

Solutions

Option 1: High availability with a floating IP

Use tools like Keepalived or Pacemaker with VRRP to create a floating IP between two or more NGINX nodes. If one fails, the IP automatically switches to another.
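As a minimal sketch, the Keepalived configuration on the primary NGINX node might look like this (the interface name, password, and virtual IP are placeholders):

```
vrrp_instance VI_1 {
    state MASTER              # use BACKUP on the standby node
    interface eth0
    virtual_router_id 51
    priority 100              # use a lower priority (e.g. 90) on the standby
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass s3cret
    }
    virtual_ipaddress {
        203.0.113.10          # the floating IP your DNS points at
    }
}
```

The standby node runs the same configuration with `state BACKUP` and a lower priority; if VRRP advertisements from the master stop, the standby claims the floating IP within a few seconds.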

Option 2: DNS-level load balancing

Run multiple NGINX load balancer nodes and use round-robin DNS or GeoDNS (e.g. AWS Route 53, Cloudflare Load Balancing) to distribute traffic across them.
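With plain round-robin DNS, you publish one A record per load balancer node and resolvers rotate through them. A zone-file sketch with placeholder IPs:

```
; round-robin: clients receive these A records in rotating order
www.example.com.  300  IN  A  203.0.113.10   ; NGINX LB, region 1
www.example.com.  300  IN  A  198.51.100.10  ; NGINX LB, region 2
```

Keeping the TTL short (300 seconds here) limits how long clients keep resolving to a node you've taken out of rotation.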

Option 3: Anycast IP (advanced)

Advertise the same IP address from multiple geographic regions using BGP and Anycast. Traffic is automatically routed to the nearest node based on the user’s location.
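As a rough illustration, each site runs a BGP daemon (BIRD 2 in this sketch) that announces the same service prefix to its upstream provider; the ASNs, neighbor address, and prefix below are placeholders, and real deployments require an assigned prefix and a provider that accepts your announcement:

```
# bird2 sketch: announce the shared anycast prefix from this site
protocol bgp upstream {
    local as 65001;                 # your (placeholder) ASN
    neighbor 203.0.113.1 as 64512;  # provider's edge router
    ipv4 {
        export where net = 198.51.100.0/24;  # the anycast service prefix
        import none;
    };
}
```

Because every site announces the identical prefix, BGP routing delivers each user to the topologically nearest site automatically.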

Tip: Combining DNS-based geographic routing with highly available NGINX proxies provides the best coverage and resilience.


Step 4: Health checks and failover logic

NGINX Open Source doesn't support active health checks natively, but its passive checks (controlled by the max_fails and fail_timeout parameters on each server directive) stop routing to a node after repeated connection errors. For more advanced health checking:

  • Use NGINX Plus, which adds active health checks via its health_check directive
  • Or build external monitoring and failover logic with cron + curl + configuration reloads
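As a sketch of the cron + curl approach, the script below probes each backend and regenerates the upstream file from whichever nodes respond. The hostnames, file path, and reload step are assumptions to adapt; it also assumes your main nginx.conf includes the generated file (e.g. via `include /etc/nginx/conf.d/*.conf;` inside the http block):

```shell
#!/bin/sh
# Hypothetical external health-check + failover script. Hostnames and the
# upstream file path are placeholders -- adjust to your setup.
# Run it from cron, e.g.:  * * * * * /usr/local/bin/check_backends.sh run

BACKENDS="vps1.example.com vps2.example.com vps3.example.com"
UPSTREAM_FILE="/etc/nginx/conf.d/upstream.conf"

# A backend is healthy if it answers an HTTP request within 5 seconds.
healthy() {
    curl -fsS --max-time 5 "http://$1/" >/dev/null 2>&1
}

# Render an upstream block for the given (healthy) hosts.
build_upstream() {
    echo "upstream backend_servers {"
    for h in "$@"; do
        echo "    server $h;"
    done
    echo "}"
}

if [ "${1:-}" = "run" ]; then
    alive=""
    for host in $BACKENDS; do
        healthy "$host" && alive="$alive $host"
    done
    # Only rewrite the config and reload if at least one backend is up.
    if [ -n "$alive" ]; then
        build_upstream $alive > "$UPSTREAM_FILE"
        nginx -t && nginx -s reload
    fi
fi
```

The `nginx -t` check before the reload guards against writing a broken configuration into the running proxy.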

Step 5: Point your domain to the load balancer

Update your DNS records to point to the IP address of your NGINX reverse proxy (or floating IP if using HA). If you use multiple proxies, configure your DNS provider for load-balanced or geo-aware resolution.


Optional: Geo-location-aware routing

NGINX itself doesn't handle geolocation-based routing, but you can pair it with:

  • GeoDNS: Use a DNS provider that routes users to the closest server
  • Anycast IPs: Distribute the same IP from multiple data centers, allowing global routing optimization

Combined, the traffic flow looks like this:

User Request
     |
     v
+---------------------+
| GeoDNS / Load-aware |
| DNS Routing Layer   |
+---------------------+
        |
        v
+----------------------+
| Regional NGINX Proxy |
| (HA or Anycast IP)   |
+----------------------+
        |
        v
+---------------------+
|  VPS Backend Nodes  |
+---------------------+

Final thoughts

Using NGINX to load balance across multiple VPS servers helps you scale globally and reduce latency. But remember: the reverse proxy must be highly available or it becomes a liability.

To eliminate single points of failure, consider DNS-based load distribution, floating IPs, or Anycast networking. With careful planning, your multi-location VPS setup can deliver fast, fault-tolerant performance at scale.

This guide only covers the web front end; it doesn't address database connections or the challenges of distributing a database for high availability. We'll cover that in a later article.
