itu-devops/scaling_exercises
Scaling Exercises

What is this?

This is a basic demonstration of a high-availability setup on local virtual machines, see Vagrantfile. It consists of the following:

  • Two redundant load balancer server nodes, each with Nginx and Keepalived installed and configured, see provision-lb.sh
  • Two web server nodes, each with Nginx installed and serving a static HTML page, see provision-web.sh. These simulate backend servers.
  • A virtual IP (192.168.20.100), which simulates DigitalOcean's floating IP
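The failover behavior comes from Keepalived's VRRP configuration; the actual settings live in provision-lb.sh. A minimal sketch of what lb1's /etc/keepalived/keepalived.conf might look like (virtual_router_id and advert_int are assumptions, not taken from the provisioning script):

```conf
vrrp_instance VI_1 {
    state MASTER          # lb2 would use BACKUP
    interface eth1
    virtual_router_id 51  # assumed; must match on both load balancers
    priority 255          # lb2 would use 250, per the diagram below
    advert_int 1          # assumed advertisement interval in seconds
    virtual_ipaddress {
        192.168.20.100
    }
}
```

When lb1 stops advertising (e.g. it is halted or Keepalived is stopped), lb2 has the next-highest priority and claims the virtual IP.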

Architecture

          Virtual IP: 192.168.20.100
                        |
              +---------+---------+
              |                   |
    lb1 (192.168.20.4)      lb2 (192.168.20.5)
 [Primary - Priority 255] [Backup - Priority 250]
              |                   |
              +---------+---------+
                        |
              +---------+---------+
              |                   |
    web1 (192.168.20.2)     web2 (192.168.20.3)
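Each load balancer forwards traffic to both web nodes. The real configuration is in provision-lb.sh; a minimal sketch of the Nginx reverse-proxy setup it might contain (the upstream name is an assumption; round-robin is Nginx's default balancing method):

```nginx
upstream backend {
    server 192.168.20.2;  # web1
    server 192.168.20.3;  # web2
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```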

Starting the Environment

Start all virtual machines

vagrant up

Or start them individually

vagrant up web1 web2 lb1 lb2

Testing the Setup

1. Access the virtual IP

curl http://192.168.20.100

You should see responses from either web1 or web2.

2. Test load balancing

for i in {1..10}; do curl -s http://192.168.20.100 | grep "Server:"; done

You should see requests distributed between web1 and web2.
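To see the distribution at a glance, the responses can be tallied. This assumes, as in the grep above, that each served page contains a "Server: &lt;name&gt;" line:

```shell
# Send 20 requests and count how many each backend answered.
for i in {1..20}; do
  curl -s http://192.168.20.100 | grep "Server:"
done | sort | uniq -c
```

With round-robin balancing you should see roughly ten hits per backend.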

3.a) Test High Availability: Load Balancer Failure

Check which load balancer has the virtual IP:

vagrant ssh lb1 -c "ip addr show eth1"
vagrant ssh lb2 -c "ip addr show eth1"

Stop the primary load balancer:

vagrant halt lb1

Keepalived should reassign the virtual IP to the failover node, i.e., lb2. Now, test again:

curl http://192.168.20.100

Verify that the backend servers can still be reached via the failover load balancer.

Now, bring lb1 back up:

vagrant up lb1
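The failover above can also be watched live. A sketch of a poll loop (assuming bash; the one-second timeout, interval, and iteration count are arbitrary) to run in a second terminal while halting and restarting lb1:

```shell
# Poll the virtual IP once per second; a short run of DOWN lines
# marks the window in which Keepalived moves the VIP to lb2.
for i in {1..30}; do
  if curl -s --max-time 1 http://192.168.20.100 >/dev/null; then
    echo "$(date +%T) OK"
  else
    echo "$(date +%T) DOWN"
  fi
  sleep 1
done
```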

3.b) Test High Availability: Service Failure

Check which load balancer has the virtual IP:

vagrant ssh lb1 -c "ip addr show eth1"
vagrant ssh lb2 -c "ip addr show eth1"

Now, simulate a failure either by stopping Keepalived on lb1 or by stopping Nginx on web1:

vagrant ssh lb1 -c "sudo systemctl stop keepalived"
vagrant ssh web1 -c "sudo systemctl stop nginx"

Verify that requests are still served (by web2, if you stopped Nginx on web1):

curl http://192.168.20.100
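To confirm that web1 has actually dropped out of rotation (again assuming each page contains a "Server: &lt;name&gt;" line), deduplicate the responses:

```shell
# With Nginx stopped on web1, this should print only web2's line.
for i in {1..10}; do
  curl -s http://192.168.20.100 | grep "Server:"
done | sort -u
```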

Checking Status of Components

Check Keepalived Status

vagrant ssh lb<1|2> -c "sudo systemctl status keepalived"

Check Nginx Status

vagrant ssh lb<1|2> -c "sudo systemctl status nginx"

View Keepalived Logs

vagrant ssh lb<1|2> -c "sudo journalctl -u keepalived -f"

Check which Server has the Virtual IP

vagrant ssh lb1 -c "ip addr show eth1 | grep 192.168.20.100"
vagrant ssh lb2 -c "ip addr show eth1 | grep 192.168.20.100"
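The two checks above can be folded into a single loop, a sketch that only reports the node where the grep matches:

```shell
# Print which load balancer currently holds the virtual IP.
for lb in lb1 lb2; do
  if vagrant ssh "$lb" -c "ip addr show eth1" 2>/dev/null | grep -q 192.168.20.100; then
    echo "VIP is on $lb"
  fi
done
```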
