Installing AppFormix for OpenStack in HA
HA Design Overview
AppFormix Platform can be deployed to multiple hosts for high availability (HA). Platform services continue to communicate using an API proxy that listens on a virtual IP address. Only one host will have the virtual IP at a time, and so only one API proxy will be the “active” API proxy at a time.
The API proxy is implemented by HAProxy. HAProxy is configured to use services in active-standby or load-balanced active-active mode, depending on the service.
At most one host will be assigned the virtual IP at any given time. This host is considered the “active” HAProxy. The virtual IP address is assigned to a host by keepalived, which uses the Virtual Router Redundancy Protocol (VRRP) to elect the active host.
Services are replicated in different modes of operation. In “active-passive” mode, HAProxy sends all requests to a single “active” instance of a service. If that instance fails, HAProxy selects a new “active” instance from the other hosts and begins sending requests to it. In “active-active” mode, HAProxy load balances requests across all hosts on which a service is operational.
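As an illustration only, the two modes correspond to the following HAProxy backend styles. This is a minimal sketch; the backend names, ports, and addresses here are hypothetical and are not AppFormix's actual generated configuration.

```
# Active-passive: all traffic goes to the first healthy server; the
# others are marked "backup" and receive traffic only after a failure.
backend example_active_passive
    server host1 203.0.113.119:9000 check
    server host2 203.0.113.120:9000 check backup
    server host3 203.0.113.121:9000 check backup

# Active-active: requests are load balanced across all healthy servers.
backend example_active_active
    balance roundrobin
    server host1 203.0.113.119:9001 check
    server host2 203.0.113.120:9001 check
    server host3 203.0.113.121:9001 check
```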
AppFormix Platform can be deployed in a 3-node, 5-node, or 7-node configuration for high availability.
Each host on which AppFormix Platform is installed has the following requirements.
CPU: 8 cores (virtual or physical)
Memory: 32 GB
Storage: 100 GB (recommended)
Docker 17.03.1-ce, installed on the Platform Host(s).
Python "docker" package 3.7.1, installed on the Platform Host(s).
Ansible 2.3.0 - 2.7.6, installed on a host that has SSH access to Platform Hosts and compute hosts to which AppFormix will be deployed.
httplib2 must be installed on the host where Ansible is executed.
One virtual IP address to be shared among all the Platform Hosts. This IP address should not be used by any host before installation. It should have reachability from all the Platform Hosts after installation.
Dashboard client (in browser) must have IP connectivity to the virtual IP.
IP addresses for each Platform Host for installation and for services running on these hosts to communicate.
keepalived_vrrp_interface for each Platform Host, which is used for assigning the virtual IP address. Details on how to configure this interface are described in the sample_inventory section.
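For reference, keepalived assigns the virtual IP through a VRRP instance bound to that interface. The following is a minimal sketch of what such a keepalived configuration looks like; the instance name, router ID, priority, and addresses are hypothetical, not the values AppFormix generates.

```
vrrp_instance appformix_vip {
    interface eth0            # the keepalived_vrrp_interface value
    state BACKUP              # hosts start as BACKUP; VRRP elects a MASTER
    virtual_router_id 51
    priority 100              # the highest reachable priority wins the election
    advert_int 1
    virtual_ipaddress {
        203.0.113.100/24      # the shared virtual IP (appformix_vip)
    }
}
```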
The installer node needs to download the following packages from https://www.juniper.net/support/downloads/?p=appformix#sw.
AppFormix Agent Supported Platforms
AppFormix Agent runs on a host to monitor resource consumption of the host itself and the virtual machines and containers executing on that host.
Red Hat Enterprise Linux 7.1
Red Hat Enterprise Linux 6.5, 6.6
CentOS 6.5, 6.6
Installing AppFormix for High Availability
To install AppFormix to multiple hosts for high availability:
- Install Ansible on the installer node. Ansible will install
docker and docker-py on the appformix_controller.
# sudo apt-get install python-pip python-dev build-essential libssl-dev libffi-dev
# sudo pip install ansible==2.7.6 markupsafe httplib2
For Ansible 2.3:
# sudo pip install ansible==2.3 markupsafe httplib2 cryptography==1.5
- Install python and python-pip on all the Platform Hosts
so that Ansible can run between the installer node and the appformix_controller.
# sudo apt-get install -y python python-pip
- Install the python-pip package on the hosts where AppFormix
Agent will be installed.
# apt-get install -y python-pip
- To enable passwordless login to all Platform Hosts by
Ansible, create an SSH public key on the node where Ansible playbooks
are run and then copy the key to all the Platform Hosts.
# ssh-keygen -t rsa                                    # Creates keys
# ssh-copy-id -i ~/.ssh/id_rsa.pub <platform_host_1>   # Copies the key to each Platform Host
# ssh-copy-id -i ~/.ssh/id_rsa.pub <platform_host_2>
# ssh-copy-id -i ~/.ssh/id_rsa.pub <platform_host_3>
- Use the sample_inventory file as a template to create
a host file. Add all the Platform Hosts and compute hosts details.
# List all compute hosts which need to be monitored by AppFormix
[compute]
203.0.113.5
203.0.113.17

# AppFormix controller hosts
[appformix_controller]
203.0.113.119 keepalived_vrrp_interface=eth0
203.0.113.120 keepalived_vrrp_interface=eth0
203.0.113.121 keepalived_vrrp_interface=eth0
Note: In the case of 5-node or 7-node deployment, list all the nodes under appformix_controller.
- At the top level of the distribution, create a directory named
group_vars and then create a file named
all inside this directory.
# mkdir group_vars # touch group_vars/all
Add the following entries to the newly created group_vars/all file.
appformix_vip: <ip-address>
appformix_docker_images:
  - /path/to/appformix-platform-images-<version>.tar.gz
  - /path/to/appformix-dependencies-images-<version>.tar.gz
  - /path/to/appformix-openstack-images-<version>.tar.gz
In AppFormix version 3.2.0, support for monitoring the OpenStack Octavia load balancer service has been added. To enable this service monitoring, provide the Octavia service's endpoint as a variable in the
group_vars/all file.
- Copy and source the
openrc file from the OpenStack controller node (
/etc/contrail/openrc) to the AppFormix Controller so that the adapter can authenticate with admin privileges to the controller services.
root@installer_node:~# cat /etc/contrail/openrc
export OS_USERNAME=<admin user>
export OS_PASSWORD=<password>
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://<openstack-auth-URL>/v2.0/
export OS_NO_CACHE=1
root@installer_node:~# source /etc/contrail/openrc
- Run Ansible with the created inventory file.
ansible-playbook -i inventory appformix_openstack_ha.yml
- If the playbooks are run as the root user, this step can
be skipped. For a non-root user (for example, “ubuntu”),
the user “ubuntu” needs to be a member of the
docker user group. The following command adds the user to the docker group.
sudo usermod -aG docker ubuntu
If step 8 was run as an offline installation and failed
because this step was not done first, then the appformix *.tar.gz files need to
be removed from the
/tmp/ folder on the
appformix_controller node before re-running the playbook. This is the workaround required as of version