CN2 Pipeline Solution Test Architecture and Design


Overview

Solution Test Automation Framework (STAF) is a common platform developed for automating and maintaining solution use cases that mimic real-world production scenarios.

  • STAF can granularly simulate and control user personas, actions, and timing at scale, thereby exposing the software to real-world scenarios with long-running traffic.

  • The STAF architecture can be extended to allow the customer to plug in GitOps artifacts and create custom test workflows.

  • STAF is implemented in Python using the Pytest test framework.

Use Case

STAF emulates Day 0, Day 1, and Day-to-Day operations in a customer environment. Use case tests are performed as a set of test workflows per user persona. Each user persona has its own operational scope.

  • Operator—Performs global operations, such as cluster setup and maintenance, CN2 deployment, and so on.

  • Architect—Performs tenant-related operations, such as onboarding, teardown, and so on.

  • Site Reliability Engineer (SRE)—Performs operations in the scope of a single tenant only.

Currently, STAF supports IT cloud webservice and telco use cases.

Workflows

Workflows for each tenant are executed sequentially. Workflows for multiple tenants can be executed in parallel, with the exception of Operator tests.

Day 0 operations (CN2 deployment) are currently independent of test execution. The remaining workflows are executed as Solution Sanity Tests. In Pytest, each workflow is represented by a test suite.

Figure 1: Typical Use Case Scenario

For test descriptions, see CN2 Pipeline Test Case Descriptions.

Profile

A workflow is executed for a use case instance described in a profile YAML file. The profile YAML describes parameters for namespaces, application layers, network policies, service type, and so on.

Figure 2: Profile Workflow

The profile file is mounted into the test container from outside, giving you flexibility in your choice of scale parameters. For CN2 Pipeline Release 23.1, you can update only the total number of pods.

The complete set of profiles can be accessed from the downloaded CN2 Pipeline tar file, in the charts/workflow-objects/templates folder.

The following sections show example profiles.

Isolated LoadBalancer Profile

The IsolatedLoadBalancerProfile.yml configures a three-tier webservice profile.

  • Frontend pods are launched using a deployment with a replica count of two (2). These frontend pods are accessed from outside of the cluster through the LoadBalancer service.

  • Middleware pods are launched using a deployment with a replica count of two (2), and an allowed address pair is configured on both pods. These pods are accessible through the ClusterIP service from the frontend pods.

  • Backend pods are deployed with a replica count of two (2). Backend pods are accessible from the middleware pods through the ClusterIP service.

  • Policies are created to allow only specific ports for each tier.

IsolatedLoadBalancerProfile.yml
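
A minimal sketch of what this profile might contain, assuming a hypothetical schema. Every key and value below is an illustrative assumption, not the actual STAF profile format.

# Hypothetical sketch of IsolatedLoadBalancerProfile.yml; all keys are illustrative.
profile:
  name: isolated-loadbalancer
  tiers:
    - name: frontend
      replicas: 2
      service:
        type: LoadBalancer         # reachable from outside the cluster
        port: 80
    - name: middleware
      replicas: 2
      allowed_address_pair: true   # allowed address pair configured on both pods
      service:
        type: ClusterIP            # reachable from the frontend tier
        port: 8080
    - name: backend
      replicas: 2
      service:
        type: ClusterIP            # reachable from the middleware tier
        port: 9090
  network_policies:
    allow_service_ports_only: true # allow only the listed port for each tier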

Isolated NodePort Profile

The IsolatedNodePortProfile.yml configures a three-tier webservice profile.

  • Frontend pods are launched using a deployment with a replica count of two (2). These frontend pods are accessed from outside of the cluster through the HAProxy NodePort ingress service.

  • Middleware pods are launched using a deployment with a replica count of two (2), and an allowed address pair is configured on both pods. These pods are accessible through the ClusterIP service from the frontend pods.

  • Backend pods are deployed with a replica count of two (2). Backend pods are accessible from the middleware pods through the ClusterIP service.

  • Policies are created to allow only specific ports for each tier. The isolated namespace feature is enabled in this profile (see the sketch after this list).
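
Namespace isolation is typically enabled through an annotation on the namespace itself rather than in the tier definitions. A minimal sketch follows; the annotation key and the namespace name are assumptions for illustration, not confirmed values from this profile.

# Sketch: enabling namespace isolation in CN2.
# The annotation key and namespace name are illustrative assumptions.
apiVersion: v1
kind: Namespace
metadata:
  name: webservice-tenant1
  annotations:
    core.juniper.net/isolated-namespace: "true"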

Multi-Namespace Contour Ingress LoadBalancer Profile

The MultiNamespaceContourIngressLB.yml configures a three-tier webservice profile.

  • Frontend pods are launched using a deployment with a replica count of two (2). These frontend pods are accessed from outside of the cluster through the Contour ingress LoadBalancer service.

  • Middleware pods are launched using a deployment with a replica count of two (2), and an allowed address pair is configured on both pods. These pods are accessible through the ClusterIP service from the frontend pods.

  • Backend pods are deployed with a replica count of two (2). Backend pods are accessible from the middleware pods through the ClusterIP service.

  • Policies are created to allow only specific ports for each tier. The isolated namespace feature is enabled in this profile.

Multi-Namespace Isolated LoadBalancer Profile

The MultiNamespaceIsolatedLB.yml profile configures a three-tier webservice profile.

  • Frontend pods are launched using a deployment with a replica count of two (2). These frontend pods are accessed from outside of the cluster through a LoadBalancer service.

  • Middleware pods are launched using a deployment with a replica count of two (2), and an allowed address pair is configured on both pods. These middleware pods are accessible through the ClusterIP service from the frontend pods.

  • Backend pods are deployed with a replica count of two (2). Backend pods are accessible from the middleware pods through the ClusterIP service.

  • Policies are created to allow only specific ports for each tier. The isolated namespace feature is enabled in this profile, in addition to separate namespaces for the frontend, middleware, and backend deployments.

Non-Isolated Nginx Ingress LoadBalancer Profile

The NonIsolatedNginxIngressLB.yml profile configures a three-tier webservice profile.

  • Frontend pods are launched using a deployment with a replica count of two (2). These frontend pods are accessed from outside of the cluster through an NGINX ingress LoadBalancer service.

  • Middleware pods are launched using a deployment with a replica count of two (2), and an allowed address pair is configured on both pods. These middleware pods are accessible through the ClusterIP service from the frontend pods.

  • Backend pods are deployed with a replica count of two (2). Backend pods are accessible from the middleware pods through the ClusterIP service.

  • Policies are created to allow only specific ports for each tier (see the NetworkPolicy sketch after this list).
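
A standard Kubernetes NetworkPolicy of the following shape could express the per-tier port restriction used in these profiles. The names, labels, and port numbers are illustrative assumptions, not values taken from the profiles.

# Sketch: allow middleware ingress only from frontend pods, and only on the
# service port. All names, labels, and ports are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: middleware-allow-frontend
spec:
  podSelector:
    matchLabels:
      tier: middleware
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: frontend
      ports:
        - protocol: TCP
          port: 8080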

Test Environment Configuration

To configure the test environment, you deploy a YAML file that contains parameters describing the test execution environment. This topic shows example YAML files for both Kubernetes and OpenShift test environment configurations.

Kubernetes Environment

The following are example YAML files used to deploy and configure a Kubernetes test environment.

CN2 in Kubernetes Environment — No vMX and No VM

The following is an example YAML for configuring a Kubernetes test environment that does not have a Juniper Networks® MX Series 3D Universal Edge Router (vMX) and does not have an external virtual machine (VM) setup in the test environment.
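
A minimal sketch of such a configuration, assuming a hypothetical STAF environment schema; every key below is an illustrative assumption. The defining feature of this variant is simply that no vMX gateway section and no external VM section are present.

# Hypothetical test environment sketch (no vMX, no VM); all keys are illustrative.
orchestrator: kubernetes
cluster:
  kubeconfig_secret: cn2-kubeconfig   # secret holding the target cluster kubeconfig
  control_nodes: 3
  worker_nodes: 2
# No gateway (vMX) section and no external_vm section in this variant.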

CN2 in Kubernetes Environment — Standard Test Setup

The following YAML is a standard test setup for configuring a Kubernetes test environment.
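
A minimal sketch of a standard setup under the same hypothetical schema; the gateway and external VM entries, and all addresses, are illustrative assumptions.

# Hypothetical standard test environment sketch; all keys and values are illustrative.
orchestrator: kubernetes
cluster:
  kubeconfig_secret: cn2-kubeconfig
  control_nodes: 3
  worker_nodes: 2
gateway:
  type: vmx                    # vMX router used as the external gateway
  management_address: 10.0.0.1
external_vm:
  address: 10.0.0.50           # off-cluster endpoint for external traffic tests
  username: testuser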

OpenShift Environment

The following are example YAML files used to deploy and configure a Red Hat OpenShift test environment.

CN2 in OpenShift Environment — No vMX and No VM Setup

The following is an example YAML for configuring an OpenShift test environment that does not have a Juniper Networks® MX Series 3D Universal Edge Router (vMX) and does not have an external virtual machine (VM) setup in the test environment.
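
A minimal sketch under the same hypothetical schema; for OpenShift, the cluster is reached through the OpenShift Container Platform API server rather than a master node IP address. All keys and values are illustrative assumptions.

# Hypothetical OpenShift test environment sketch (no vMX, no VM); all keys are illustrative.
orchestrator: openshift
cluster:
  kubeconfig_secret: cn2-kubeconfig
  api_server: https://api.ocp.example.com:6443
# No gateway (vMX) section and no external_vm section in this variant.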

CN2 in OpenShift Environment — Standard Test Setup

The following YAML is a standard test setup for configuring an OpenShift test environment.
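
A minimal sketch of the standard OpenShift setup under the same hypothetical schema; the gateway and external VM values are illustrative assumptions.

# Hypothetical standard OpenShift test environment sketch; all keys and values are illustrative.
orchestrator: openshift
cluster:
  kubeconfig_secret: cn2-kubeconfig
  api_server: https://api.ocp.example.com:6443
gateway:
  type: vmx
  management_address: 10.0.0.1
external_vm:
  address: 10.0.0.50
  username: testuser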

Kubeconfig File

The kubeconfig file data is used for authentication. The kubeconfig file is stored as a secret on the Argo host Kubernetes cluster.

Required Data:

  • Server: The secret key should point to either the server IP address or the hostname (see the secret sketch after this list).

    • For Kubernetes setups, point the server field to the master node IP address.

    • For OpenShift setups, point the server field to the OpenShift Container Platform API server.

  • Client certificate: The Kubernetes client certificate.
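
As an illustration of how the kubeconfig can be stored as a secret, the following sketch uses the standard Kubernetes Secret format; the secret name, namespace, and placeholder values are assumptions, not part of the product documentation.

# Sketch: kubeconfig stored as a secret on the Argo host cluster.
# The secret name, namespace, and placeholders are illustrative.
apiVersion: v1
kind: Secret
metadata:
  name: cn2-kubeconfig
  namespace: argo
type: Opaque
stringData:
  kubeconfig: |
    apiVersion: v1
    kind: Config
    clusters:
      - name: target
        cluster:
          # Kubernetes setup: master node IP address.
          # OpenShift setup: the OpenShift Container Platform API server,
          # for example https://api.<cluster-domain>:6443
          server: https://<master-node-ip>:6443
          certificate-authority-data: <base64-ca-certificate>
    users:
      - name: admin
        user:
          client-certificate-data: <base64-client-certificate>
          client-key-data: <base64-client-key>
    contexts:
      - name: target
        context:
          cluster: target
          user: admin
    current-context: target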

Logging and Reporting

Two types of log files are created during each test run:

  • Pytest session log file—One per session

  • Test suite log file—One per test suite

The default log file size is 50 MB. Log file rotation is supported.