Installing OpenStack Octavia LBaaS with Juju Charms in Contrail Networking

 

Contrail Networking Release 2005 supports Octavia as LBaaS. The deployment is supported on the RHOSP and Juju (Canonical) platforms.

With Octavia as LBaaS, Contrail Networking only provides network connectivity and is not involved in any load balancing functions.

For each load balancer created in OpenStack, Octavia launches a VM known as the amphora VM. The VM starts HAProxy when a listener is created for the load balancer in OpenStack. Whenever the load balancer is updated in OpenStack, the amphora VM updates the running HAProxy configuration. The amphora VM is deleted when the load balancer is deleted.

Contrail Networking provides connectivity to the amphora VM interfaces. The amphora VM has two interfaces: one for management and one for data. The management interface is used by the Octavia services for management communication. Because the Octavia services run in the underlay network and the amphora VM runs in the overlay network, an SDN gateway is needed to reach the overlay network. The data interface is used for load balancing.
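
For example, after a load balancer is created, you can inspect the amphora instance and its two interfaces from the OpenStack CLI. This is only an illustrative check; the amphora instance name filter and the instance ID below are placeholders and depend on your deployment.

openstack server list --all-projects --name amphora

openstack port list --server <amphora-instance-id>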

Follow this procedure to install OpenStack Octavia LBaaS in a Canonical deployment:

  1. Prepare the Juju setup with the OpenStack Train version and the Octavia overlay bundle.

    Refer to the Sample octavia-bundle.yaml file.

    juju deploy --overlay=./octavia-bundle.yaml ./contrail-bundle.yaml

    or

    Add the Octavia service after deploying the main bundle on an existing cluster.

    juju deploy --overlay=./octavia-bundle.yaml --map-machines=existing ./contrail-bundle.yaml
  2. Prepare an SSH key for the amphora VM and add the options to the octavia-bundle.yaml file.
    ssh-keygen -f octavia # generate the key

    base64 octavia.pub # print the public key data

    Add the following options to the Octavia options.

    amp-ssh-pub-key: # paste the public key data here

    amp-ssh-key-name: octavia
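
    For reference, a minimal sketch of where these options can sit in octavia-bundle.yaml, assuming the standard Juju bundle layout (only the relevant keys are shown; the rest of your Octavia options are unchanged):

    applications:
      octavia:
        options:
          amp-ssh-key-name: octavia
          amp-ssh-pub-key: <base64 public key data>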
  3. Generate certificates.

    Make sure all units are in the active or blocked state.
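
    The certificates themselves are created outside of Juju. The following is a condensed sketch based on the amphora certificate steps in the OpenStack charm deployment guide referenced in step 5; the subject strings, passphrase, and file names are placeholders that you should adapt to your environment, and the lb-mgmt-* option names are those documented for the octavia charm.

    mkdir -p demoCA/newcerts

    touch demoCA/index.txt demoCA/index.txt.attr

    openssl genpkey -algorithm RSA -aes256 -pass pass:foobar -out issuing_ca_key.pem

    openssl req -x509 -passin pass:foobar -new -nodes -key issuing_ca_key.pem -config /etc/ssl/openssl.cnf -subj "/C=US/ST=Somestate/O=Org/CN=www.example.com" -days 365 -out issuing_ca.pem

    openssl genpkey -algorithm RSA -aes256 -pass pass:foobar -out controller_ca_key.pem

    openssl req -x509 -passin pass:foobar -new -nodes -key controller_ca_key.pem -config /etc/ssl/openssl.cnf -subj "/C=US/ST=Somestate/O=Org/CN=www.example.com" -days 365 -out controller_ca.pem

    openssl req -newkey rsa:2048 -nodes -keyout controller_key.pem -subj "/C=US/ST=Somestate/O=Org/CN=www.example.com" -out controller.csr

    openssl ca -passin pass:foobar -config /etc/ssl/openssl.cnf -cert controller_ca.pem -keyfile controller_ca_key.pem -create_serial -batch -in controller.csr -days 365 -out controller_cert.pem

    cat controller_cert.pem controller_key.pem > controller_cert_bundle.pem

    juju config octavia lb-mgmt-issuing-cacert="$(base64 issuing_ca.pem)" lb-mgmt-issuing-ca-private-key="$(base64 issuing_ca_key.pem)" lb-mgmt-issuing-ca-key-passphrase=foobar lb-mgmt-controller-cacert="$(base64 controller_ca.pem)" lb-mgmt-controller-cert="$(base64 controller_cert_bundle.pem)"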

  4. Configure vault service.
    1. SSH into the machine where vault service is installed.
      juju ssh vault/0
    2. Export vault address and run init.
      export VAULT_ADDR='http://localhost:8200'

      /snap/bin/vault operator init -key-shares=5 -key-threshold=3

      This prints five unseal keys and an initial root token.

    3. Run the unseal command with any three of the five printed unseal keys.
      /snap/bin/vault operator unseal Key1

      /snap/bin/vault operator unseal Key2

      /snap/bin/vault operator unseal Key3
    4. Export initial root token.
      export VAULT_TOKEN="..."
    5. Create user token.
      /snap/bin/vault token create -ttl=10m
    6. Exit from the vault machine and initialize the vault charm with the user token.
      juju run-action --wait vault/leader authorize-charm token="..."
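
      As an optional check, you can confirm that the vault unit has settled before continuing; juju status is a standard way to do this:

      juju status vault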
  5. Create the amphora image.
    juju run-action --wait octavia-diskimage-retrofit/leader retrofit-image

    For more details, refer to https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/app-octavia.html#amphora-image.
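
    As an optional check, verify that the retrofitted amphora image has been uploaded to Glance. The image name varies by build, but it typically contains "amphora":

    openstack image list | grep -i amphora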

  6. Install python-openstackclient and python-octaviaclient, and create a management network for Octavia.

    You must create these objects in the services project.

    project=$(openstack project list --domain service_domain | awk '/services/{print $2}')

    openstack network create octavia --tag charm-octavia --project $project

    openstack subnet create --subnet-range 172.x.0.0/24 --network octavia --tag charm-octavia octavia

    # security group for octavia

    openstack security group create octavia --tag charm-octavia --project $project

    openstack security group rule create --ingress --ethertype IPv4 --protocol icmp octavia

    openstack security group rule create --ingress --ethertype IPv6 --protocol icmp octavia

    openstack security group rule create --ingress --ethertype IPv4 --protocol tcp --dst-port 22:22 octavia

    openstack security group rule create --ingress --ethertype IPv6 --protocol tcp --dst-port 22:22 octavia

    openstack security group rule create --ingress --ethertype IPv6 --protocol tcp --dst-port 9443:9443 octavia

    openstack security group rule create --ingress --ethertype IPv4 --protocol tcp --dst-port 9443:9443 octavia

    # security group for octavia-health

    openstack security group create octavia-health --tag charm-octavia-health --project $project

    openstack security group rule create --ingress --ethertype IPv4 --protocol icmp octavia-health

    openstack security group rule create --ingress --ethertype IPv6 --protocol icmp octavia-health

    openstack security group rule create --ingress --ethertype IPv4 --protocol udp --dst-port 5555:5555 octavia-health

    openstack security group rule create --ingress --ethertype IPv6 --protocol udp --dst-port 5555:5555 octavia-health
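
    As an optional check, confirm that the tagged Octavia resources now exist:

    openstack network show octavia

    openstack subnet show octavia

    openstack security group show octavia

    openstack security group show octavia-health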
  7. The management network created in step 6 is in the overlay network, while the Octavia services run in the underlay network. Verify network connectivity between the overlay and underlay networks through the SDN gateway.
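
    For example, a simple way to check the path is to ping an address in the Octavia management subnet from the unit where the Octavia services run; the unit name and target address below are placeholders for your environment:

    juju ssh octavia/0 'ping -c 3 <ip-address-in-octavia-management-subnet>'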
  8. Configure Octavia with the created network.
    juju run-action --wait octavia/leader configure-resources

    Make sure the Juju cluster is functional and all units are in the active state.

If you want to run amphora instances on DPDK compute nodes, you must create your own flavor with the required options and set its ID in the Octavia charm configuration through the custom-amp-flavor-id option before calling configure-resources.

Or

Set the required options on the charm-octavia flavor created by the charm:

openstack flavor set charm-octavia --property hw:mem_page_size=any
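
For example, a minimal sketch of the custom-flavor approach, assuming an illustrative flavor named amphora-dpdk (the flavor sizing here is an example only); custom-amp-flavor-id is the Octavia charm option mentioned above and must be set before configure-resources is run:

openstack flavor create --ram 1024 --disk 8 --vcpus 1 --property hw:mem_page_size=any amphora-dpdk

juju config octavia custom-amp-flavor-id=<flavor-id>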

Here is an example of creating and testing a load balancer:

Prerequisites:

  • You must have connectivity between the Octavia controller and the amphora instances.

  • You must have OpenStack services running in LXD containers.

  • You must have separate interfaces for control plane and data plane.

  1. Create a private network.
    openstack network create private

    openstack subnet create private --network private --subnet-range 10.10.10.0/24 --allocation-pool start=10.10.10.50,end=10.10.10.70 --gateway none
  2. Create a security group.
    openstack security group create allow_all

    openstack security group rule create --ingress --protocol any --remote-ip '0.0.0.0/0' allow_all
  3. Check the available flavors and images. You can create them if needed.
    openstack flavor list

    openstack image list
  4. Create two servers for the load balancer.
    openstack server create --flavor test_flavor --image cirros --security-group allow_all --network private cirros1

    openstack server create --flavor test_flavor --image cirros --security-group allow_all --network private cirros2
  5. Create an additional server to test the load balancer.
    openstack server create --flavor test_flavor --image cirros --security-group allow_all --network private cirros-test
  6. Check status and IP addresses.
    openstack server list --long
  7. Create a simple HTTP server on each cirros instance. Log in to both cirros instances and run the following commands:
    MYIP=$(ifconfig eth0|grep 'inet addr'|awk -F: '{print $2}'| awk '{print $1}')

    while true; do echo -e "HTTP/1.0 200 OK\r\n\r\nWelcome to $MYIP" | sudo nc -l -p 80 ; done &
  8. Create the load balancer.
    openstack loadbalancer create --name lb1 --vip-subnet-id private

    Make sure the provisioning_status is ACTIVE.

    openstack loadbalancer show lb1
  9. Set up the load balancer.
    openstack loadbalancer listener create --protocol HTTP --protocol-port 80 --name listener1 lb1

    openstack loadbalancer show lb1 # Wait for the provisioning_status to be ACTIVE.

    openstack loadbalancer pool create --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP --name pool1

    openstack loadbalancer healthmonitor create --delay 5 --timeout 2 --max-retries 1 --type HTTP pool1

    openstack loadbalancer member create --subnet-id private --address 10.10.10.50 --protocol-port 80 pool1

    openstack loadbalancer member create --subnet-id private --address 10.10.10.51 --protocol-port 80 pool1

    IP addresses 10.10.10.50 and 10.10.10.51 belong to the VMs running the test HTTP server created in step 7.

  10. Check the status of the load balancer.
    openstack loadbalancer show lb1 # Wait for the provisioning_status to be ACTIVE.

    openstack loadbalancer pool list

    openstack loadbalancer pool show pool1

    openstack loadbalancer member list pool1

    openstack loadbalancer listener list
  11. Log in to the load balancer client and verify that round robin works.
    ubuntu@comp-1:~$ ssh cirros@169.x.0.9

    The authenticity of host '169.x.0.9 (169.x.0.9)' can't be established.

    RSA key fingerprint is SHA256:jv0qgZkorxxxxxxxmykOSVQV3fFl0.

    Are you sure you want to continue connecting (yes/no)? yes

    Warning: Permanently added '169.x.0.9' (RSA) to the list of known hosts.

    cirros@169.x.0.9's password:

    $ curl 10.10.10.50

    Welcome to 10.10.10.52

    $ curl 10.10.10.50

    Welcome to 10.10.10.53

    $ curl 10.10.10.50

    Welcome to 10.10.10.52

    $ curl 10.10.10.50

    Welcome to 10.10.10.53

    $ curl 10.10.10.50

    Welcome to 10.10.10.52

    $ curl 10.10.10.50

    Welcome to 10.10.10.53

Sample octavia-bundle.yaml file

Release History Table

Release   Description
2005      Contrail Networking Release 2005 supports Octavia as LBaaS.