
Contrail Insights Installation for Kubernetes


Architecture and Terminology

Kubernetes cluster nodes: Primary and worker nodes of the Kubernetes cluster monitored by Contrail Insights. These nodes run the Contrail Insights Agent.
Contrail Insights Platform node: Node on which the Contrail Insights Platform components are installed. It must be able to reach the Kubernetes cluster nodes.

Figure 1 shows the different components of Contrail Insights and how they interact with the Kubernetes cluster.

Figure 1: Contrail Insights and Kubernetes Workflow

Requirements

The following are the requirements for installing Contrail Insights for Kubernetes.

  • Kubernetes versions 1.8 and later.

  • See Contrail Insights General Requirements for hardware and software requirements.

  • API access to the Kubernetes API server. Contrail Insights reads information about the cluster from the API server. The token provided during configuration must provide sufficient permission for read-only API calls. Contrail Insights Platform must also be able to open a connection to the host and port on which the API server runs.

  • Note

    Upgrade notice: Starting with Contrail Insights 3.2.6, the requirement for a license file is removed. If you are installing a version earlier than 3.2.6, a license is required prior to installation.

    You can obtain a license key from APPFORMIX-KEY-REQUEST@juniper.net. Provide the following information in your request:

Workflow in Four Steps

The installation consists of the following steps:

  1. Initial setup.

  2. Configuring Kubernetes.

  3. Installing Contrail Insights.

  4. Setting up the Contrail Insights Scheduler Extender.

Initial Setup

Perform the following steps for the initial setup:

  1. Install the following required files on the Contrail Insights Platform node.
    Note

    For RHEL, the following iptables rule is needed to access port 9000:

  2. Edit the /etc/hosts file on the Contrail Insights Platform node and add the IP addresses and hostnames of the Kubernetes cluster nodes.
  3. Set up passwordless SSH between the Contrail Insights Platform node and the Kubernetes cluster nodes. Run the following commands to generate and copy the SSH public keys to all the nodes:
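Taken together, the initial setup steps above can be sketched as shell commands. The IP addresses, hostnames, and key path below are placeholder assumptions; substitute the values for your environment:

```shell
# RHEL only: open TCP port 9000 so the Contrail Insights Platform is reachable
# (assumption: inserting the rule at the top of the INPUT chain is acceptable)
iptables -I INPUT -p tcp --dport 9000 -j ACCEPT

# Add the Kubernetes cluster nodes to /etc/hosts (example addresses)
cat >> /etc/hosts <<'EOF'
10.0.0.11  k8s-primary
10.0.0.12  k8s-worker1
10.0.0.13  k8s-worker2
EOF

# Generate an SSH key pair (no passphrase) and copy the public key to
# every Kubernetes cluster node to enable passwordless SSH
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for node in k8s-primary k8s-worker1 k8s-worker2; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "$node"
done
```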

Configure Kubernetes

Contrail Insights reads information about resources in your Kubernetes clusters. The software requires the cluster-admin role or another role that provides read-only access to all objects in the cluster. We recommend that you create a new Service Account for Contrail Insights and assign it the cluster-admin role.

If you do not create a new Service Account, then you must provide the token from an existing Service Account that has the required access during the configuration of Contrail Insights.

To create a new Service Account with the required access for Contrail Insights, perform the following steps in the Kubernetes cluster:

  1. Create a YAML file with the following:
  2. Create the appformix Service Account using the file you created in Step 1:
  3. Confirm that the Service Account has been created. Make a note of its namespace as you’ll need this later.
  4. Add the cluster-admin role to the appformix Service Account, substituting <namespace> for the namespace noted in Step 3:
  5. Run the following command to confirm that the appformix Service Account has the required access:

    The output of the command should be yes.

  6. Contrail Insights must be configured to communicate with the Kubernetes cluster. Get the following details from the Kubernetes cluster to use during the Contrail Insights installation.
    kubernetes_cluster_url: The URL of the Kubernetes API server. To get this value, run the following command on the Kubernetes cluster:
    kubernetes_auth_token: The authentication token of the appformix Service Account. To get this value, run the following commands on the Kubernetes cluster:
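The Service Account steps above can be sketched with kubectl as follows. The kube-system namespace and the binding name appformix-admin are assumptions; adjust them to your cluster (the secret-based token lookup in the last step applies to Kubernetes versions that auto-create Service Account token secrets):

```shell
# Step 1: Service Account definition
cat > appformix-service-account.yaml <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: appformix
  namespace: kube-system
EOF

# Step 2: create the appformix Service Account
kubectl apply -f appformix-service-account.yaml

# Step 3: confirm it exists and note the namespace
kubectl get serviceaccount appformix -n kube-system

# Step 4: bind the cluster-admin role to the Service Account
kubectl create clusterrolebinding appformix-admin \
    --clusterrole=cluster-admin \
    --serviceaccount=kube-system:appformix

# Step 5: verify access; the output should be "yes"
kubectl auth can-i list pods --as=system:serviceaccount:kube-system:appformix

# Step 6: kubernetes_cluster_url -- the API server URL
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'

# Step 6: kubernetes_auth_token -- decode the token from the account's secret
secret=$(kubectl -n kube-system get serviceaccount appformix \
    -o jsonpath='{.secrets[0].name}')
kubectl -n kube-system get secret "$secret" \
    -o jsonpath='{.data.token}' | base64 --decode
```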

Install Contrail Insights

To install Contrail Insights:

  1. Download the Contrail Insights installation packages from software downloads to the Contrail Insights Platform node. Get the following files:

    If you are installing a version earlier than 3.2.6, copy the Contrail Insights license file to the Contrail Insights Platform node.

  2. Unzip contrail-insights-<version>.tar.gz. This package contains all the Ansible playbooks required to install Contrail Insights.
    Note

    The remaining steps should be executed from within the contrail-insights-<version>/ directory. Although the product name changed from "AppFormix" to "Contrail Insights," the UI and internal command paths continue to show AppFormix and will reflect the new name at a later date.

  3. Using sample_inventory as a template, create an inventory file for the installation. List the Kubernetes cluster nodes in the compute section and the Contrail Insights Platform node in the appformix_controller section.
  4. Create a directory called group_vars. Create a file named all inside this directory with configuration variables required by Contrail Insights.

    If you are installing a version earlier than 3.2.6, include the path to the Contrail Insights license file in group_vars/all:

    Note

    Deprecation notice: The appformix_mongo_cache_size_gb parameter, introduced in Contrail Insights 2.19.5, is deprecated and no longer supported as of Contrail Insights 3.2.0. Starting with version 3.2.0, MongoDB is configured to use at most 40 percent of the available memory on the Contrail Insights Platform nodes.

  5. To enable network device monitoring in the cluster, include the following in the group_vars/all file:
  6. Run the Ansible playbook.

    The playbook should run to completion without any errors.

  7. Log into the Contrail Insights Dashboard at:
  8. Log in using the tokenId from the following file on the Contrail Insights Platform node:
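As a sketch, the inventory file, group_vars/all, and playbook run from steps 3 through 6 might look like the following. The hostnames, URL, token, and the playbook name are placeholder assumptions; check sample_inventory and your release's documentation for the exact names:

```shell
# Inventory: Platform node under appformix_controller, cluster nodes under compute
cat > inventory <<'EOF'
[appformix_controller]
platform-node

[compute]
k8s-primary
k8s-worker1
k8s-worker2
EOF

# Configuration variables collected in the "Configure Kubernetes" section
mkdir -p group_vars
cat > group_vars/all <<'EOF'
kubernetes_cluster_url: https://10.0.0.11:6443
kubernetes_auth_token: "<token of the appformix Service Account>"
# Versions earlier than 3.2.6 only:
# appformix_license: /path/to/contrail-insights-license.sig
EOF

# Run the installation playbook from the contrail-insights-<version>/ directory
# (assumption: playbook name; the internal paths still use the AppFormix name)
ansible-playbook -i inventory appformix_standalone.yml
```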

Set up the Contrail Insights Scheduler Extender

Contrail Insights comes with a Scheduler Extender module that can be added to the Kubernetes scheduler. With this module in place, the Kubernetes scheduler will use user-defined SLA policies in addition to its default policies to determine where to schedule a pod in the cluster.

To set up the Scheduler Extender:

  1. Create a JSON file describing the Contrail Insights Scheduler Extender. Place this file inside /etc/kubernetes on the Kubernetes primary node.
  2. Add the extender to the kube-scheduler on the primary node by adding the --policy-config-file option to the spec.containers.command block:
  3. Update the kube-scheduler container by restarting the kubelet service on the primary node.

    The kube-scheduler is now running with the Contrail Insights Scheduler Extender.

    By default, Kubernetes does not allow user pods to be scheduled on the primary node. To observe the Contrail Insights Scheduler Extender in action on a three-node Kubernetes cluster, enable scheduling on the Kubernetes primary node with the following command:
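Assuming a kubeadm-style cluster where kube-scheduler runs as a static pod, the extender setup above might look like this. The policy file name, the extender urlPrefix path, and the primary node name are placeholder assumptions; the exact extender URL is release-specific:

```shell
# Step 1: extender policy file on the Kubernetes primary node
cat > /etc/kubernetes/scheduler-policy.json <<'EOF'
{
  "kind": "Policy",
  "apiVersion": "v1",
  "extenders": [
    {
      "urlPrefix": "http://<platform-node-ip>:9000/<extender-path>",
      "filterVerb": "filter",
      "apiVersion": "v1beta1"
    }
  ]
}
EOF

# Step 2: add the option to spec.containers.command in
# /etc/kubernetes/manifests/kube-scheduler.yaml:
#   - --policy-config-file=/etc/kubernetes/scheduler-policy.json

# Step 3: restart kubelet so the static kube-scheduler pod is recreated
systemctl restart kubelet

# Optional: allow user pods on the primary node (pre-1.24 taint label shown)
kubectl taint nodes <primary-node> node-role.kubernetes.io/master-
```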

Using the Contrail Insights SLA Profile for Scheduling

Contrail Insights ships with a default Scheduling SLA that includes alarms for missed heartbeat, high CPU load, and high memory usage.

To change the profiles in the Scheduling SLA, do the following:

  1. Select Settings from the list in the top right of the Dashboard, then select SLA Settings > Scheduling.
    Figure 2: Contrail Insights Settings in Dashboard
  2. Click Delete Profile to delete the existing profile.
    Figure 3: Delete Scheduling Profile
  3. Click Add New Rule and define a new alarm.
    Figure 4: Add New Rule in Scheduling Profile
  4. Select the newly created alarm from the list of available alarms and click Create Profile. You can add several alarms with custom weights to the SLA profile.
    Figure 5: Create Profile in Scheduling SLA
  5. To see the Scheduler Extender in action, generate some load on one of the Kubernetes cluster nodes so that the Scheduling SLA is violated. Check the status of the SLA from the Alarms page.
    Figure 6: Violated Scheduling SLA in Alarms page

    Then create some pods on the Kubernetes cluster and check which nodes they are scheduled on. No new pods are scheduled on the node that is violating the Scheduling SLA.
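As a sketch, the load generation and scheduling check might look like this. The node name, deployment name, and image are assumptions; use any sustained CPU load that trips the alarms in your Scheduling SLA profile:

```shell
# Generate sustained CPU load on one worker node so its Scheduling SLA trips
ssh k8s-worker1 'nohup sh -c "yes > /dev/null" >/dev/null 2>&1 &'

# Create a few test pods and watch where the scheduler places them
kubectl create deployment sla-test --image=nginx
kubectl scale deployment sla-test --replicas=4
kubectl get pods -l app=sla-test -o wide   # the loaded node should receive none
```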