Eliminate A Single Point Of Failure

WHY A CLUSTER

Load balancing is the technique of providing a single network service from multiple servers, a setup sometimes known as a server farm.

The load balancer is usually a software program, embedded in a virtual or physical appliance, that listens on the port where external clients connect to access a service. The load balancer forwards each request to one of the “backend” servers, which replies to the load balancer; the load balancer then replies to the client, so the client never learns about the internal separation of functions. Load balancing can automatically provide the capacity needed to respond to any increase or decrease in application traffic. The next diagram shows a typical network configuration for serving a web application.
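The forwarding idea described above can be sketched with socat (an assumption for illustration only; Zen Load Balancer implements this internally). Clients connect to the balancer’s port and each connection is relayed to a backend, so the client never sees the internal addresses. The backend IP below is hypothetical.

```shell
#!/bin/sh
# Minimal sketch of the load-balancer forwarding pattern, assuming socat
# as a stand-in. BACKEND is a hypothetical internal server address.
BACKEND=10.0.0.10

# Relay every client connection on the public port 80 to the backend
# (commented out so this sketch runs without a live backend):
#   socat TCP-LISTEN:80,fork,reuseaddr TCP:"$BACKEND":8080 &

echo "would relay :80 -> $BACKEND:8080"
```

A real balancer adds backend selection and health checks on top of this single-backend relay.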

[Diagram: single point of failure]

It also prevents clients from contacting the backends directly, which can have security benefits: it hides the structure of the internal network and prevents attacks on the kernel’s network stack or on unrelated services running on other ports. But what happens if the load balancer itself fails?

Load balancing gives the IT team a chance to achieve significantly higher fault tolerance, so the load balancer itself must not become a single point of failure. For that reason, load balancers should be deployed as a high-availability cluster, i.e., a pair of appliances. The same example with an HA cluster for load balancing looks like this:

[Diagram: no single point of failure]

We have seen the importance of setting up an HA cluster. Now we are going to see how to configure it with Zen Load Balancer.

CLUSTER REQUIREMENTS

  1. Ensure that the NTP service is configured and working on both future nodes of the cluster.
  2. Ensure that DNS is configured and working on both future nodes of the cluster.
  3. Ensure that the future cluster members can ping each other through the interface used for the cluster (i.e., the IP of eth0 on node1 and the IP of eth0 on node2).
  4. It is recommended to dedicate an interface to the cluster service.
  5. The cluster service uses multicast communication and VRRP packets; ensure that this kind of traffic is allowed by your switches.
  6. The cluster interface (e.g., eth0:cl) and the cluster service have to be configured from the node that will act as master.
  7. Once the cluster service is enabled, the farms and interfaces on the slave node will be deleted; only the cluster network interface and the web GUI network interface will remain on the slave.
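The requirements above can be checked from a shell on each node. This is a sketch, not part of the product: the peer address is hypothetical, and the network commands are shown commented out because they need live nodes to succeed.

```shell
#!/bin/sh
# Prerequisite checklist sketch; PEER_IP is a hypothetical address of
# the other cluster node -- substitute your own.
PEER_IP=192.168.0.11

# 1-2. NTP and DNS must work on both nodes:
#   ntpq -p          # a '*' entry marks a synchronized peer
#   host node2       # should resolve through the configured DNS

# 3. The cluster interface must reach its peer:
#   ping -c 3 "$PEER_IP"

# 5. VRRP uses IP protocol 112; once the cluster service runs, its
#    advertisements should be visible on the cluster NIC:
#   tcpdump -ni eth0 proto 112

echo "checklist printed for peer $PEER_IP"
```

If the ping or the VRRP capture fails, fix switch filtering (multicast/VRRP) before configuring the cluster.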

CLUSTER CONFIGURATION

  1. In the web GUI, go to the menu tab “Settings > Interfaces”, click the “Add virtual network interface” icon, enter a short name in the Name column (e.g., cl), add a new IP in the same range as the parent interface, and press the “save virtual interface” icon.
  2. Go to the menu tab “Settings > Cluster”, select the interface configured in the previous step in the field “Virtual IP for cluster, or create new virtual here”, and press “Save VIP”.
  3. The Local hostname and eth? IP values are loaded automatically. Now configure the Remote hostname field and the eth? IP of the second cluster node. Note that the cluster service has to be configured on the same network interface on both nodes (e.g., eth0).
  4. Once the real cluster interface (e.g., eth0:cl), the member hostnames (e.g., node1 and node2), the real IPs (e.g., on eth0), and the cluster Virtual IP (e.g., eth0:cl) are selected, configure the Cluster ID: a value between 1 and 255 in the Cluster ID field. This value must be unique per cluster pair.
  5. The next step is to configure the Dead ratio value for the cluster: the time (in seconds) the slave node waits for an unresponsive master before considering it dead.
  6. Press the “Save” button to continue with the cluster configuration process.
  7. The cluster service replicates the NIC and farm configuration over SSH, so at this point you need to configure the RSA connection. Enter the slave’s root password in the “Remote Hostname root password” field, press the “Configure RSA connection between nodes” button, and wait while the SSH RSA authentication is configured.
  8. Once the RSA authentication is configured, choose the cluster type: with automatic failback, if the master node goes down the service is switched to node2, and once the master is re-established the cluster service switches back to it automatically; alternatively, either node can be master, so if the master goes down the service is switched to node2 and stays there (no failback). Select one of the two values and press the “Configure cluster type” button.
  9. If you are configuring physical servers and want to connect both servers directly with a cable (without switches), select the checkbox “Use crossover patch cord”.
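The values entered in steps 4 and 5 can be sanity-checked before saving. The script below is a hypothetical helper, not part of Zen Load Balancer; the ID and ratio values are examples.

```shell
#!/bin/sh
# Sketch: validate the Cluster ID and Dead ratio from steps 4-5.
CLUSTER_ID=64     # must be unique per cluster pair and in 1-255
DEAD_RATIO=2      # seconds the slave waits before declaring master dead

if [ "$CLUSTER_ID" -ge 1 ] && [ "$CLUSTER_ID" -le 255 ]; then
  echo "cluster id $CLUSTER_ID ok"
else
  echo "cluster id $CLUSTER_ID out of range" >&2
  exit 1
fi

# After step 1, the virtual interface should be visible on the master:
#   ip addr show eth0    # look for the eth0:cl address

echo "dead ratio ${DEAD_RATIO}s"
```

Because the Cluster ID maps onto the VRRP router ID, reusing it for two different cluster pairs on the same network segment causes conflicts.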

If you need further information, please visit the article Configure the cluster service.
