Zevenet Cluster Service can be configured as an independent piece of software outside of the Zevenet CE core package. This cluster service has been developed to be easily managed and modified by sysadmins, so it can be adapted to the needs of any network architecture.
The following procedure describes how to install and configure the Zevenet Cluster in case high availability is required for your load balancer.
Configure our official APT repository as follows:
How to configure APT repository for ZEVENET Community Edition
Install Zevenet CE cluster package
Once the local package database is updated, search for the cluster package zevenet-ce-cluster as follows:
root@lb1 > apt-cache search zevenet-ce-cluster
zevenet-ce-cluster - Zevenet Load Balancer Community Edition Cluster Service

root@lb1 > apt-cache show zevenet-ce-cluster
Package: zevenet-ce-cluster
Version: 1.2
Maintainer: Zevenet SL <email@example.com>
Architecture: i386
Depends: zevenet (>=5.0), liblinux-inotify2-perl, ntp
Priority: optional
Section: admin
Filename: pool/main/z/zevenet-ce-cluster/zevenet-ce-cluster_1.0_i386.deb
Size: 43350
SHA256: e39bb9b8283904db2873287147c885637178e179be5dee67b2c7044039899f35
SHA1: 425d742cde523c93a55b25e96447a8088663a028
MD5sum: 123abcf0eab334a18054802962287dc7
Description: Zevenet Load Balancer Community Edition Cluster Service
 Cluster service for Zevenet CE, based in ucarp for vrrp implementation
 and zeninotify for configuration replication. VRRP through UDP is
 supported in this version.
Description-md5: 5b668a78c0d00cdf89ac66c47b44ba28

root@lb1 > apt-get install zevenet-ce-cluster
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  liblinux-inotify2-perl
Suggested packages:
  iwatch
The following NEW packages will be installed:
  liblinux-inotify2-perl zevenet-ce-cluster
0 upgraded, 2 newly installed, 0 to remove and 37 not upgraded.
Need to get 43.4 kB/61.4 kB of archives.
After this operation, 60.4 kB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://repo.zevenet.com/ce/v5 stretch/main i386 zevenet-ce-cluster i386 1.0 [43.4 kB]
Fetched 43.4 kB in 0s (57.3 kB/s)
Selecting previously unselected package liblinux-inotify2-perl.
(Reading database ... 57851 files and directories currently installed.)
Preparing to unpack .../liblinux-inotify2-perl_1%3a1.22-3_i386.deb ...
Unpacking liblinux-inotify2-perl (1:1.22-3) ...
Selecting previously unselected package zevenet-ce-cluster.
Preparing to unpack .../zevenet-ce-cluster_1.0_i386.deb ...
Unpacking zevenet-ce-cluster (1.0) ...
Setting up liblinux-inotify2-perl (1:1.22-3) ...
Processing triggers for systemd (232-25+deb9u1) ...
Processing triggers for man-db (18.104.22.168-2) ...
Setting up zevenet-ce-cluster (1.0) ...
Completing the Zevenet CE Cluster installation...
Notice that Zevenet CE Cluster uses VRRP, and time synchronization is mandatory for this protocol, so ensure your NTP service is properly configured and the NTP servers are reachable from the load balancer.
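As a quick sanity check before starting the cluster, the NTP state can be inspected with ntpq, which ships with the ntp package pulled in as a dependency. This is a sketch; the guard simply avoids failing on a host where ntp is not installed yet:

```shell
# Sketch: confirm NTP peers are reachable before enabling the cluster.
# ntpq ships with the ntp package (a zevenet-ce-cluster dependency).
if command -v ntpq >/dev/null 2>&1; then
  # peers prefixed with '*' are the ones currently selected for sync
  NTP_STATUS=$(ntpq -p 2>&1 || echo "ntpq query failed")
else
  NTP_STATUS="ntp not installed"
fi
echo "$NTP_STATUS"
```

Run this on both nodes; a large offset between the two clocks is a common cause of VRRP flapping.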
Configure Zevenet CE cluster package
Once the installation is complete, please configure the cluster service as follows:
Open the configuration file at /usr/local/zevenet/app/ucarp/etc/zevenet-cluster.conf
The most important parameters are described next:
#interface used for the cluster, where local_ip and remote_ip are configured
$interface="eth0";
#local IP to be monitored, i.e. 192.168.0.101
$local_ip="192.168.101.242";
#remote IP to be monitored, i.e. 192.168.0.102
$remote_ip="192.168.101.243";
#password used for VRRP protocol communication
$password="secret";
#unique value for the VRRP cluster in the network
$cluster_id="1";
#virtual IP used in the cluster; this IP will always run in the master node
$cluster_ip="192.168.101.244";
#if the NIC used for the cluster is not eth0, change the excluded conf file in the following line
$exclude="--exclude if_eth0_conf";
Notice that only virtual interfaces are replicated, so if you are running with more than one NIC or VLAN, they have to be excluded in the cluster configuration file. For example, if eth0 is used for cluster purposes and vlan100 (eth0.100) for load balancing purposes, then:
$exclude="--exclude if_eth0_conf --exclude if_eth0.100_conf";
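The $exclude value can also be derived from the interface configuration files present on the node. The sketch below mocks the configuration directory with example files; on a real load balancer you would point CONF_DIR at the Zevenet configuration directory (assumed here to be /usr/local/zevenet/config), and it assumes virtual interface files carry a ':' in their name:

```shell
# Sketch: build $exclude from the interface config files, keeping only
# virtual interfaces (assumed to contain ':') out of the exclusion list.
# A temporary directory with example files stands in for the real
# /usr/local/zevenet/config (that path is an assumption).
CONF_DIR=$(mktemp -d)
touch "$CONF_DIR/if_eth0_conf" "$CONF_DIR/if_eth0.100_conf" "$CONF_DIR/if_eth0:vip1_conf"
EXCLUDE=""
for f in "$CONF_DIR"/if_*_conf; do
  name=$(basename "$f")
  case "$name" in
    *:*) ;;                                   # virtual interface: replicated
    *)   EXCLUDE="$EXCLUDE --exclude $name" ;;
  esac
done
echo "\$exclude=\"${EXCLUDE# }\";"
```

With the example files above, the NIC and the VLAN are excluded while the virtual interface if_eth0:vip1_conf keeps being replicated.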
Notice that the Zevenet cluster is managed by the root user and replicates the configuration from the master node to the backup through rsync over SSH, so passwordless SSH between the nodes needs to be configured.
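The passwordless SSH setup can be sketched as follows; "lb2" stands for the peer node's hostname (an assumption, swap in your own), and the copy and verify steps are left commented out because they need the real peer node:

```shell
# Sketch: passwordless SSH for root between the cluster nodes.
SSH_DIR=${SSH_DIR:-$HOME/.ssh}
mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"
# Generate an RSA key pair with an empty passphrase if none exists yet.
[ -f "$SSH_DIR/id_rsa" ] || ssh-keygen -q -t rsa -N "" -f "$SSH_DIR/id_rsa"
# On a real node, install the key on the peer and verify (peer hostname
# "lb2" is an assumption):
#   ssh-copy-id -i "$SSH_DIR/id_rsa.pub" root@lb2
#   ssh root@lb2 true   # must log in without prompting for a password
echo "public key: $SSH_DIR/id_rsa.pub"
```

Repeat the same steps on the other node in the opposite direction, since replication runs from whichever node is master at the time.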
Notice that the defined $cluster_ip has to be configured and UP on one Zevenet virtual load balancer, the future master; as soon as the service is started on this node, the configuration file for $cluster_ip will be replicated to the backup server automatically.
Now enable the cluster service with the following two steps:
First, open the file /etc/init.d/zevenet-ce-cluster and change the following variable:
Second, the zevenet-ce-cluster service is disabled by default after boot; please execute the following command to enable zevenet-ce-cluster after reboot:
 root@lb1 > systemctl enable zevenet-ce-cluster
Take into account that any change in the configuration file /usr/local/zevenet/app/ucarp/etc/zevenet-cluster.conf requires a restart of the cluster service, so once the configuration parameters are set, please restart the cluster on both nodes as follows:
root@lb1 > /etc/init.d/zevenet-ce-cluster stop
root@lb1 > /etc/init.d/zevenet-ce-cluster start
Notice that as soon as the cluster service is running, the prompt on the load balancer is modified to show the cluster status of each node:
Logs and troubleshooting
- SSH without password is required between both cluster nodes
- NTP is required to be configured on both cluster nodes
- The zeninotify service will only run on the master node; please confirm zeninotify is running with the following command. You should get something like this on the master node:
[master] root@lb1> ps -ef | grep zeninotify
root     16912     1  0 03:20 ?        00:00:00 /usr/bin/perl /usr/local/zevenet/app/zeninotify/zeninotify.pl
And you should see nothing related to zeninotify on the backup node:
[backup] root@lb2> ps -ef | grep zeninotify
[backup] root@lb2>
- Logs for ucarp service are sent to syslog /var/log/syslog
- Logs for zeninotify replication service are sent to /var/log/zeninotify.log
- The cluster status is shown in the prompt and is updated after any command execution. Additionally, the cluster status is saved in the file /etc/zevenet-ce-cluster.status; if this file doesn't exist, the cluster service is stopped.
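A minimal status check based on that file can be sketched like this; on a node where the service is stopped it reports accordingly:

```shell
# Sketch: report the cluster role from the status file; the file only
# exists while the zevenet-ce-cluster service is running.
STATUS_FILE=/etc/zevenet-ce-cluster.status
if [ -f "$STATUS_FILE" ]; then
  STATUS=$(cat "$STATUS_FILE")
else
  STATUS="stopped (no $STATUS_FILE)"
fi
echo "cluster status: $STATUS"
```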
- When the cluster node promotes to MASTER, the following script is executed: /usr/local/zevenet/app/ucarp/sbin/zevenet-ce-cluster-start
- When the cluster node demotes to BACKUP, the following script is executed: /usr/local/zevenet/app/ucarp/sbin/zevenet-ce-cluster-stop
- When the cluster node needs to send advertisements, the following script is executed: /usr/local/zevenet/app/ucarp/sbin/zevenet-ce-cluster-advertisement
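Since these hooks are executed on every role change, site-specific actions can be added to them. As a hypothetical example, assuming the hook is a shell script, a line appended to zevenet-ce-cluster-start could record each promotion in syslog:

```shell
# Hypothetical snippet to append to zevenet-ce-cluster-start: record each
# promotion to MASTER in syslog so failovers can be audited later.
STAMP=$(date -u +%Y-%m-%dT%H:%M:%SZ)
logger -t zevenet-ce-cluster "node promoted to MASTER at $STAMP" || true
echo "promotion logged at $STAMP"
```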
- In case you need to change any parameter of the ucarp execution, you can modify the run_cluster() subroutine in the script /etc/init.d/zevenet-ce-cluster
- The cluster service uses a VRRP implementation, so multicast packets need to be allowed in the switches
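To check that the advertisements actually reach both nodes, traffic can be captured on the cluster NIC. The sketch below assumes the usual VRRP multicast group 224.0.0.18 and IP protocol 112 (ucarp defaults, an assumption here), needs root and tcpdump, and gives up after ten seconds if nothing arrives:

```shell
# Sketch: look for VRRP advertisements on the cluster interface.
# IFACE and the multicast group/protocol are assumptions matching
# common ucarp defaults; requires root privileges and tcpdump.
IFACE=${IFACE:-eth0}
timeout 10 tcpdump -ni "$IFACE" -c 5 'host 224.0.0.18 or ip proto 112' \
  || echo "no VRRP advertisements captured on $IFACE"
```

If no packets are seen on the backup node while the master is up, check IGMP snooping and multicast filtering on the switches between the nodes.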