    Installing and Configuring the Cloud CPE Solution

    Before You Begin

    Before you begin, provision the virtual machines (VMs) for the Contrail Service Orchestration node or server.

    Creating the Deployment Directory

    The deployment directories contain the files that you need to install the Cloud CPE Solution. You run a script to create the central and regional deployment directories.

    To create the deployment directories:

    1. Log in as root to the VM on which you deployed the installer.
    2. Access the directory that contains the script. For example, if the name of the installer is cspVersion:
      root@host:~/# cd cspVersion
    3. Run the script to create the central deployment directory, then run the script again to create the regional deployment directory.
      root@host:~/cspVersion# ./create_deployment_env.sh central

      This action creates a directory called /deployments/central in the home directory of the installer host.

      root@host:~/cspVersion# ./create_deployment_env.sh regional

      This action creates a directory called /deployments/regional in the home directory of the installer host.
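
    To confirm that both deployment directories were created before you continue, you can list them. This is an optional check; the paths assume that the directories are created under the home directory of the installer host, as described above.

      root@host:~/# ls -ld ~/deployments/central ~/deployments/regional

    If either directory is missing, run the create_deployment_env.sh script again for that deployment.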

    Customizing the Roles Configuration File

    In the roles file, you specify network-specific settings that you configured for Contrail OpenStack. The installer uses the settings in this file to configure the Cloud CPE Solution. The roles file is located in the deployments/central and deployments/regional directories and is in YAML format.

    To customize the roles configuration file for central and regional directories:

    1. Log in as root to the host on which you deployed the installer.
    2. Access the central directory that contains the roles file. For example, if the name of the installer is cspVersion:
      root@host:~/# cd cspVersion/deployments/central
      root@host:~/cspVersion/deployments/central#
    3. Open the file roles.conf with a text editor.
    4. Search for the text string [Fill_Value] to find the settings that you need to specify. A quick way to confirm that no placeholders remain is shown after this procedure.
    5. In the keystone section, specify the following values that you configured for OpenStack Keystone:
      • For the centralized deployment:

        Note: For a centralized deployment, you can view the Keystone settings on the primary Contrail configure and control node in the /etc/contrail/keystonerc and /etc/contrail/openstackrc files.

        • ip_address—IP address of the management interface of the primary Contrail configure and control node
        • service_token—Contrail OpenStack service token value
        • admin_tenant—Contrail OpenStack administrator tenant value
        • admin_name—Contrail OpenStack administrator name
        • admin_password—Contrail OpenStack administrator password
      • For the distributed deployment:

        Note: For a distributed deployment, you can view the Keystone settings on the central infrastructure node in the /etc/keystone/keystonerc file.

        • ip_address—IP address of the central infrastructure node or server
        • service_token—Standalone Keystone service token value. To generate the service token, run the openssl rand -hex 10 command.

          For example, 4adea0595f23d5467348

        • admin_tenant—Standalone Keystone administrator tenant value

          For example, admin

        • admin_name—Standalone Keystone administrator name

          For example, admin

        • admin_password—Standalone Keystone administrator password

          For example, passw0rd

    6. In the Contrail Analytics section, specify the following value:
      • contrail_analytics_ip—For a centralized deployment, specify the IP address of the controller node (the VIP or the controller node IP address).

        For a distributed deployment, specify the IP address of the Contrail Analytics host.

        contrail_analytics_ip: "192.16.0.0/16"
    7. In the flannel configuration, specify the following value:
      • Network—Classless Interdomain Routing (CIDR) range of the overlay network for Kubernetes. This CIDR must belong to a private network, as described in RFC 1918. Be sure that you do not use this range of IP addresses elsewhere in the deployment.

        For example,

        Network: "172.16.0.0/16"
    8. In the SIM cluster section, specify the IP address of the SIM cluster.
      • sim_cluster—IP address of the VM that contains the SIM cluster

        For example:

        sim_cluster: 192.10.0.0
    9. In the high availability proxy configuration, specify the high availability proxy name.
      • common_name—Common name for the high availability proxy. This field is used to generate the Secure Sockets Layer (SSL) certificate for the high availability proxy configuration. This value must be the IP address through which you want to access the UI.

        Note: If microservices are deployed on a physical server, you can specify the common name value as the management IP address of csp-central-ms. If you do not specify this parameter, the IP address of csp-central-msvm is acquired automatically at the time of deployment.

    10. Save the roles.conf file for the central directory.
    11. Copy the roles.conf file from the cspVersion/deployments/central directory to the cspVersion/deployments/regional directory. Execute the following command:

      root@host:~/# cp /cspVersion/deployments/central/roles.conf /cspVersion/deployments/regional/roles.conf

      For a distributed deployment, because Keystone is deployed centrally, you must specify the central Keystone IP address in the regional roles.conf file.

    12. In the regional roles.conf file, specify the regional high availability proxy name.
      • common_name—Common name for the high availability proxy. This field is used to generate the Secure Sockets Layer (SSL) certificate for the high availability proxy configuration. This value must be the IP address through which you want to access the UI.

        Note: If microservices are deployed on a physical server, you can specify the common name value as the management IP address of csp-regional-ms. If you do not specify this parameter, the IP address of csp-regional-msvm is acquired automatically at the time of deployment.

    13. Save the roles.conf file for the regional directory.
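
    Before you continue, you can confirm that no [Fill_Value] placeholders remain in either roles file. The following check is a minimal sketch; it assumes that the placeholder string appears exactly as [Fill_Value] and that the files are in the locations shown in Step 11.

      root@host:~/# grep -n "Fill_Value" /cspVersion/deployments/central/roles.conf /cspVersion/deployments/regional/roles.conf

    If the command prints no output, every placeholder has been replaced. Any line that is printed still needs a value.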

    Customizing the Topology Configuration File for the Cloud CPE Solution Installation

    The Cloud CPE Solution installer uses a topology configuration file, which you must customize for your network. After you customize the configuration file, you run the installer to deploy and configure the software on the servers and VMs. Example configuration files are located in the confs/topology directory of the installer, and the configuration file is in INI format.

    To customize the configuration file for central and regional directories:

    1. Log in as root to the host on which you deployed the installer.
    2. Access the directory that contains the example configuration file. For example, if the name of the installer is cspVersion:
      root@host:~/# cd /cspVersion/confs/topology

      The following example configuration files are available at cspVersion/confs/topology/:

      • topology_example_prod_nonha_central.conf—Use this file for a central production environment.
      • topology_example_prod_nonha_regional.conf—Use this file for a regional production environment.
      • topology_example_demo_nonha_central.conf—Use this file for a central demonstration (demo) environment.
      • topology_example_demo_nonha_regional.conf—Use this file for a regional demo environment.
    3. Copy the appropriate example configuration file to the central and regional deployment directories, and name each copy topology.conf.

      For example:

      root@host:~/cspVersion/deployments/central# cp /cspVersion/confs/topology/topology_example_prod_nonha_central.conf topology.conf
      root@host:~/cspVersion/deployments/regional# cp /cspVersion/confs/topology/topology_example_prod_nonha_regional.conf topology.conf
    4. In the [TARGETS] section, specify the following values for the network on which the Cloud CPE Solution resides.
      • installer_ip—IP address of the management interface of the host on which you deployed the installer.
      • ntp_servers—Comma-separated list of Network Time Protocol (NTP) servers in your domain. For networks within firewalls, specify NTP servers specific to your network.
      • servers—Comma-separated list of names of servers:

        For example,

        • Central server: csp-central-infravm
        • Regional server: csp-regional-infravm
    5. Specify the following configuration values for servers that you specified in Step 4.
      • management_address—IP address of the Ethernet management interface

        For example, management_address = 192.168.1.4/24

      • username—Username for logging in to the machine. Always specify root.
      • password—Root password for logging in to the machine.

        For example, password = password

      • keystone—Specify the keystone role for central (csp-central-infravm or csp-central-vm) servers.

        The keystone parameter is applicable only for a distributed deployment.

      • swift—Specify the swift role for central (csp-central-infravm or csp-central-vm) servers.

        The swift parameter is applicable only for a distributed deployment.

    6. Save the file.

    The following examples show customized configuration files for Contrail Service Orchestration installations in production and demo environments.

    Sample Configuration File for a Central Contrail Service Orchestration Installation in a Production Environment

    [TARGETS]
    
    installer_ip =
    
    # The ntp server to sync all machines to. If you are within a firewall, provide ntp_servers specific to your network
    
    ntp_servers = ntp.juniper.net
    
    # The desired timezone. By default will use installer timezone for all nodes.
    # Specify different timezone if nodes need timezone different from installer timezone. https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
    
    # timezone = America/Los_Angeles
    
    
    # List of servers present in the topology. Each of them should be configured further in its own section
    # Only the servers which are included in this list will be provisioned
    # with the corresponding roles.
    # Each server must have hostname (fqdn) set correctly as specified in this config file. 'hostname -f' must return the correct fqdn.
    
    
    servers = controller1-host, csp-central-infravm, csp-central-ms
    
    # Contrail controller host. If there are multiple controllers, repeat the controller section below and add them to the servers list above
    # The heat resource definitions used by NSO will be uploaded to the given contrail controllers.
    
    [controller1-host]
    management_address = 172.100.1.1/19
    hostname = controller1.example.net
    username = root
    password = passw0rd
    roles = contrail_openstack
    
    
    [csp-central-infravm]
    management_address = 192.168.1.4/24
    hostname = central-infravm.example.net
    username = root
    password = passw0rd
    roles = elasticsearch, cassandra, zookeeper, mariadb, rabbitmq, elk_kibana, elk_logstash, redis, dms
    
    
    [csp-central-ms]
    management_address = 192.168.1.2/24
    hostname = cso-central-host.example.net
    username = root
    password = passw0rd
    roles = haproxy_confd, etcd, kubemaster, kubeminion, sim_cluster, sim_client

    Sample Configuration File for a Regional Contrail Service Orchestration Installation in a Production Environment

    [TARGETS]
    
    installer_ip =
    
    # The ntp server to sync all machines to. If you are within a firewall, provide ntp_servers specific to your network
    
    ntp_servers = ntp.juniper.net
    
    
    # The desired timezone. By default will use installer timezone for all nodes.
    # Specify different timezone if nodes need timezone different from installer timezone. https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
    
    # timezone = America/Los_Angeles
    
    
    
    # List of servers present in the topology. Each of them should be configured further in its own section
    # Only the servers which are included in this list will be provisioned
    # with the corresponding roles.
    # Each server must have hostname (fqdn) set correctly as specified in this config file. 'hostname -f' must return the correct fqdn.
    
    
    servers = csp-regional-infravm, csp-regional-ms, csp-contrail-analytics-vm
    
    [csp-regional-infravm]
    management_address = 192.168.1.5/24
    hostname = regionalinfravm.example.net
    username = root
    password = passw0rd
    roles = elasticsearch, cassandra, zookeeper, rabbitmq, elk_kibana, elk_logstash, redis
    
    
    [csp-regional-ms]
    management_address = 192.168.1.3/24
    hostname = cso-regional-host.example.net
    username = root
    password = passw0rd
    roles = haproxy_confd, etcd, kubemaster, kubeminion, sim_client
    
    [csp-contrail-analytics-vm]
    management_address = 192.168.1.9/24
    hostname = canvm.example.net
    username = root
    password = passw0rd
    roles = contrail_analytics

    Sample Configuration File for a Central Contrail Service Orchestration Installation in a Demo Environment

    [TARGETS]
    
    installer_ip =
    
    # The ntp server to sync all machines to. If you are within a firewall, provide ntp_servers specific to your network
    
    ntp_servers = ntp.juniper.net
    
    # The desired timezone. By default will use installer timezone for all nodes.
    # Specify different timezone if nodes need timezone different from installer timezone. https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
    
    # timezone = America/Los_Angeles
    
    
    # List of servers present in the topology. Each of them should be configured further in its own section
    # Only the servers which are included in this list will be provisioned
    # with the corresponding roles.
    # Each server must have hostname (fqdn) set correctly as specified in this config file. 'hostname -f' must return the correct fqdn.
    
    
    servers = controller1-host, csp-central-infravm, csp-central-msvm
    
    # Contrail controller host. If there are multiple controllers, repeat the controller section below and add them to the servers list above
    # The heat resource definitions used by NSO will be uploaded to the given contrail controllers.
    
    [controller1-host]
    management_address = 172.100.1.1/19
    hostname = controller1.example.net
    username = root
    password = passw0rd
    roles = contrail_openstack
    
    
    [csp-central-infravm]
    management_address = 192.168.1.4/24
    hostname = centralinfravm.example.net
    username = root
    password = passw0rd
    roles = elasticsearch, cassandra, zookeeper, mariadb, rabbitmq, elk_kibana, elk_logstash, redis, dms
    
    
    [csp-central-msvm]
    management_address = 192.168.1.5/24
    hostname = centralmsvm.example.net
    username = root
    password = passw0rd
    roles = haproxy_confd, etcd, kubemaster, kubeminion, sim_cluster, sim_client

    Sample Configuration File for a Regional Contrail Service Orchestration Installation in a Demo Environment

    [TARGETS]
    
    installer_ip =
    
    # The ntp server to sync all machines to. If you are within a firewall, provide ntp_servers specific to your network
    
    ntp_servers = ntp.juniper.net
    
    
    # The desired timezone. By default will use installer timezone for all nodes.
    # Specify different timezone if nodes need timezone different from installer timezone. https://en.wikipedia.org/wiki/List_of_tz_database_time_zones
    
    # timezone = America/Los_Angeles
    
    
    
    # List of servers present in the topology. Each of them should be configured further in its own section
    # Only the servers which are included in this list will be provisioned
    # with the corresponding roles.
    # Each server must have hostname (fqdn) set correctly as specified in this config file. 'hostname -f' must return the correct fqdn.
    
    
    servers = csp-regional-infravm, csp-regional-msvm, csp-contrail-analytics-vm
    
    [csp-regional-infravm]
    management_address = 192.168.1.6/24
    hostname = regionalinfravm.example.net
    username = root
    password = passw0rd
    roles = elasticsearch, cassandra, zookeeper, rabbitmq, elk_kibana, elk_logstash, redis
    
    
    [csp-regional-msvm]
    management_address = 192.168.1.7/24
    hostname = regionalmsvm.example.net
    username = root
    password = passw0rd
    roles = haproxy_confd, etcd, kubemaster, kubeminion, sim_client
    
    [csp-contrail-analytics-vm]
    management_address = 192.168.1.11/24
    hostname = canvm.example.net
    username = root
    password = passw0rd
    roles = contrail_analytics
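
    As the comments in the sample files note, the hostname -f command on each server must return the FQDN that you specify in topology.conf. The following optional check is a sketch only; the hostname shown is taken from the regional demo sample above.

      root@host:~/# hostname -f
      regionalinfravm.example.net

    If the output does not match the hostname value in topology.conf, correct the server's hostname configuration before you run the installer.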

    Deploying Infrastructure Services

    Before you begin, customize the Roles configuration file and the Topology configuration file.

    To deploy infrastructure services:

    1. Log in as root to the host on which you deployed the installer.
    2. Deploy infrastructure services on the central and regional directories.

      • For a centralized deployment, you can deploy infrastructure components in the central and regional directories simultaneously.
        root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_infra_services.sh"
        root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_infra_services.sh"
      • For a distributed deployment, you must first deploy infrastructure components on the central directory and then on the regional directory.
        root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_infra_services.sh"
        root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_infra_services.sh"

    Deploying Microservices

    To deploy microservices in central and regional directories:

    1. Log in as root to the host on which you deployed the installer.
    2. Access the directory that contains the example configuration files. For example, if the name of the installer directory is cspVersion:
      root@host:~/# cd cspVersion/confs/
                              

      The following microservices example configuration files are available in the cspVersion/confs/ directory:

      • micro_services_example_central.conf—Use this file for configuring central microservices.
      • micro_services_example_regional.conf—Use this file for configuring regional microservices.
    3. Copy each example configuration file to the relevant deployment directory, and rename the file to micro_services.conf.
      root@host:~/cspVersion/deployments/central# cp /cspVersion/confs/micro_services_example_central.conf micro_services.conf
      root@host:~/cspVersion/deployments/regional# cp /cspVersion/confs/micro_services_example_regional.conf micro_services.conf
    4. Validate the Temp-URL-Key parameter.

      Perform the following steps for both the central and regional deployments:

      1. Log in to the respective central or regional infrastructure VM.

        The infrastructure VMs for the central and regional deployments are csp-central-infravm and csp-regional-infravm, respectively.

      2. Run the following commands to check whether the temporary URL key values are set.
        root@host:~/# source /root/keystonerc
        root@host:~/# swift -v stat

        In the output, verify that the temporary URL key values are set.

      3. If the temporary URL key values are not set, run the following commands to set them:
        root@host:~/# source /root/keystonerc
        root@host:~/# swift post -m "Temp-URL-Key:mykey"
        root@host:~/# swift post -m "Temp-URL-Key-2:mykey2"
    5. Run the following commands to deploy the microservices for central and regional directories:
      • For a new deployment, first run the script for the central microservices and, when it completes, run the script for the regional microservices.
      • If you are replacing an existing deployment, you can run the central and regional scripts simultaneously.
      root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_micro_services.sh"
      root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_micro_services.sh"
    6. (Optional) Restart all pods.
      root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_micro_services.sh --restart_containers"
      root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_micro_services.sh --restart_containers"
    7. (Optional) Reset the entire cluster and clear the database.
      root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_micro_services.sh --reset_cluster"
      root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_micro_services.sh --reset_cluster"
    8. (Optional) Restart containers and clear the database.
      root@host:~/# run "DEPLOYMENT_ENV=central ./deploy_micro_services.sh --reset_databases"
      root@host:~/# run "DEPLOYMENT_ENV=regional ./deploy_micro_services.sh --reset_databases"
    9. Check the status of central and regional microservices.
      root@host:~/# kubectl get pods | grep -v Running
      root@host:~/# kubectl get pods
       NAME                                      READY     STATUS                     RESTARTS   AGE
       csp.ams-3909406435-4yb0l                  1/1       CrashLoopBackOff            0          8m
       csp.nso-core-3445362165-s55x8             0/1       Running                     0          8m
      

      If the status of a microservice is CrashLoopBackOff or Terminating, you must delete and restart the pod.

      For example, the status of csp.ams-3909406435-4yb0l is CrashLoopBackOff. After you delete and restart the csp.nso-core pod, the csp.ams-3909406435-4yb0l microservice is up and running.

      root@host:~/# kubectl delete pods -l microservice=csp.nso-core
      root@host:~/# kubectl get pods
      NAME                                      READY        STATUS                     RESTARTS   AGE
       csp.ams-4890899323-3dfd02                   1/1        Running                     0          1m
       csp.nso-core-09009278633-fr234f             0/1        Running                     0          1m
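
      To keep monitoring the pods until they all reach the Running state, you can poll the same check periodically. The following command is a minimal sketch that assumes the standard watch utility is available on the host.

      root@host:~/# watch -n 10 'kubectl get pods | grep -v Running'

      When only the column header line remains in the output, all pods are running. Press Ctrl+C to exit watch.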

    Loading Data

    Before you begin, ensure that microservices are up and running.

    You must load data to import plug-ins and data design tools.

    To load data:

    1. Access the directory that contains the load_services_data.sh script, and run the script:
      root@host:~/# ./load_services_data.sh

    Note: You must not execute load_services_data.sh more than once after a new deployment.

     
     

    Modified: 2016-10-12