Validation Factory

Validation Factory provides a framework to test and validate Juniper Cloud-Native Router deployments. It simplifies the evaluation of Cloud-Native Router solutions for customer adoption.

Overview

Validation Factory provides a library of well-defined test profiles that validate basic layer 3 features, including common sanity and performance tests for Cloud-Native Router deployments.

The aim of Validation Factory is to reduce the number of tests you need to manually create and execute when evaluating or qualifying a Cloud-Native Router deployment for a production environment.

Test execution is automated using a Kubernetes custom resource (CR) and the test result is published in a user-friendly format.

Note:

Validation Factory is supported for Wind River deployments only.

Table 1 shows the supported features.

Table 1: Validation Factory Supported Features

Supported topology
  • Cloud-Native Router deployed in two single-node clusters
  • Cloud-Native Router deployed in five single-node clusters
  We recommend that you validate with five single-node clusters.

Supported routing protocols
  BGP, OSPF, IS-IS

Supported tests
  MPLS-over-UDP, SR-MPLS, SRv6

Table 2 shows the Validation Factory components.

Table 2: Components

Test Profile Library
  A Python-based, well-defined collection of test profiles for basic layer 3 Cloud-Native Router functionality. Each profile specifies:
    • Test Description: A clear explanation of the functionality or behavior being tested.
    • Test Parameters: Configurable parameters to tailor the test profile to specific scenarios.
    • Pass and Fail Criteria: Defined metrics or conditions that determine success or failure of the test case.
  The following test cases are executed:
    • End-to-end IPv4 and IPv6 traffic between pods, over both kernel and DPDK interfaces.
    • Restart of the routing process on cRPD.
    • Restart of the pod running test traffic.
    • Respawn of the cRPD pod.
    • Respawn of the vRouter agent pod.
    • Respawn of the vRouter DPDK pod.

Kubernetes Operator
  The framework is implemented as a Kubernetes custom resource (CR). The operator performs the following tasks:
    • Manages the test custom resource definitions (CRDs).
    • Parses the CR upon creation.
    • Orchestrates test execution by bringing up containerized test pods.
    • Monitors test pod execution and collects results.
    • Publishes test results in a user-friendly format.

Test Pods
  Test pods execute specific test profiles. The pods contain the necessary tools and libraries for network performance testing. The operator dynamically provisions the pods based on the requested test profile.

Results
  The results are stored on a filesystem volume on the Kubernetes cluster.

Test Topology Manifest

The test topology manifest describes the part of your network that you want to test. Table 3 shows the main configuration parameters.

Table 3: Main Test Topology Parameters

apiVersion
  Set to testtopology.validationfactory.juniper.net/v1.

kind
  Set to TestTopology.

metadata
  name
    The name you want to give this test topology manifest.
  namespace
    Set to jcnr.

spec
  global
    platform
      Set to windriver.
    auth
      secret
        The name of the secret that contains the kubeconfigs of all cluster nodes.
    crpd
      username
        Username to log in to cRPD via SSH.
      password
        Password to log in to cRPD via SSH.
  cluster
    An array of Cloud-Native Router clusters to be tested. We support the following:
      • two clusters, with each cluster consisting of a single (worker) node
      • five clusters, with each cluster consisting of a single (worker) node
    name
      Name of the cluster.
    kubeconfigpath
      Path to the kubeconfig file on a node in the specified cluster.
    nodes
      An array of Cloud-Native Router nodes in the cluster.
      name
        Name of the node.
      ip
        IP address of the node.
      jumphost
        (Optional) IP address of the jumphost used to access the node.
  connections
    An array of connections between the specified nodes, as per the test topology. This array describes how your nodes are connected. For example, if you have five nodes connected in a full mesh, this array contains ten connections.
    name
      Name of the connection.
    node1
      name
        Name of the node at one end of the connection.
      interface
        Name of the fabric interface on that node.
    node2
      name
        Name of the node at the other end of the connection.
      interface
        Name of the fabric interface on that node.
    protocols
      The underlay protocols running on this connection (link). Set to the same value for all connections. Valid values depend on the type of test you're running:
        • If the tunnel type is mpls-over-udp, then set to either bgp, ospf, or isis.
        • If the segment routing path type is sr-mpls, then set to both isis and mpls. For example:
            protocols:
               - isis
               - mpls
        • If the segment routing path type is srv6, then set to isis.
  tunnels¹
    An array of tunnels for MPLS-over-UDP. Omit this section if you're not testing MPLS-over-UDP.
    name
      Name of the tunnel.
    type
      Tunnel type. Set to mpls-over-udp.
    node1
      name
        Name of the node at one end of the tunnel.
    node2
      name
        Name of the node at the other end of the tunnel.
  segmentroutings¹
    An array of segment routing paths for SR-MPLS and SRv6. Omit this section if you're not testing SR-MPLS or SRv6.
    name
      Name of the segment routing path.
    type
      Path type. Set to sr-mpls or srv6.
    endpoints
      The pair of endpoints for this path. For example:
        endpoints:
           - jcnr-node5
           - jcnr-node6
    transit
      An array of transit hops for SRv6 paths. For example:
        transit:
           - jcnr-node7
           - jcnr-node8

¹ Each test topology manifest is limited to only one type of test: mpls-over-udp, sr-mpls, or srv6.

  • If you're testing MPLS-over-UDP, then include the tunnels section but omit the segmentroutings section.

  • If you're testing SR-MPLS or SRv6, then include the segmentroutings section but omit the tunnels section.

  • Within the segmentroutings section, the type must be the same for all paths. You cannot create a test that mixes SR-MPLS and SRv6 paths.

Here's a sample two-node test topology manifest:

Here's a sample five-node test topology manifest:
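The sample manifests above were not reproduced here. As a rough sketch only, a two-node MPLS-over-UDP test topology assembled from the parameters in Table 3 might look like the following. All names, IP addresses, interfaces, and credentials are placeholders, and the exact field nesting may differ from the schema shipped with your release; treat the packaged samples as authoritative.

```yaml
apiVersion: testtopology.validationfactory.juniper.net/v1
kind: TestTopology
metadata:
  name: twonode-mplsoverudp        # placeholder manifest name
  namespace: jcnr
spec:
  global:
    platform: windriver
    auth:
      secret: valfac-secret        # placeholder; must match the secret you create
    crpd:
      username: root               # placeholder cRPD SSH credentials
      password: example-password
  cluster:                         # two single-node clusters
    - name: cluster1               # must match <cluster1_name>-kubeconfig in the secret
      kubeconfigpath: /root/.kube/config
      nodes:
        - name: jcnr-node1
          ip: 10.0.0.101           # placeholder node IP
    - name: cluster2
      kubeconfigpath: /root/.kube/config
      nodes:
        - name: jcnr-node2
          ip: 10.0.0.102
  connections:                     # one link between the two nodes
    - name: conn1
      node1:
        name: jcnr-node1
        interface: ens2f0          # placeholder fabric interface
      node2:
        name: jcnr-node2
        interface: ens2f0
      protocols:
        - bgp                      # bgp, ospf, or isis for mpls-over-udp
  tunnels:                         # present because this tests MPLS-over-UDP
    - name: tunnel1
      type: mpls-over-udp
      node1:
        name: jcnr-node1
      node2:
        name: jcnr-node2
```

A five-node manifest follows the same shape, with five cluster entries, ten connections for a full mesh, and either tunnels or segmentroutings depending on the test type.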

Execute the Test Profiles

You can run the Validation Factory tests on any host that has access to the clusters you want to test. Typically, however, you would run the tests on the installation host in one of those clusters. The installation host is where you installed Helm and where you ran the Cloud-Native Router installation for that cluster.

All clusters must have Cloud-Native Router installed, and all nodes in all clusters must allow the same login credentials (username and password or SSH key).

  1. Download the Validation Factory software package to the installation host in one of the clusters that you want to test.

    You can download the Cloud-Native Router Validation Factory software package from the Juniper Networks software download site. See Cloud-Native Router Software Download Packages.

  2. Gunzip and untar the software package.
  3. Load the provided images on all nodes in the cluster. The images are located in the downloaded package.
  4. If desired, configure the port where you want the test results to be accessible.
    Look for the following line in validation-factory/values.yaml and change the port number to your desired value:
  5. Install the Helm chart.
  6. Create a secret with the login credentials and kubeconfigs of your clusters and apply it.
    valfac-secrets.yaml:
      name
        The name you want to give this secret.
      key
        The base64-encoded SSH key that allows username to log in to every node. If you specify the key, then don't specify the password.
      username
        The base64-encoded username to log in to every node.
      password
        The base64-encoded password for username to log in to every node. If you specify the password, then don't specify the key.
      <cluster1_name>-kubeconfig
        The base64-encoded kubeconfig file of the first cluster. The order in which you list your clusters is not important, but <cluster1_name> must match the name of one of your clusters.
      <cluster2_name>-kubeconfig
        The base64-encoded kubeconfig file of the second cluster. <cluster2_name> must match the name of one of the remaining clusters.
      <clusterN_name>-kubeconfig
        The base64-encoded kubeconfig file of the Nth cluster. <clusterN_name> must match the name of one of the remaining clusters.
    Note:

    You'll need to base64-encode most of the required information in the secrets manifest.

    • To base64-encode a file: base64 -w0 <file>

    • To base64-encode a string: echo -n <string> | base64 -w0

      Copy the output to the respective locations in the secrets manifest.

    Apply the secret:
  7. Configure the test topology manifest. See Test Topology Manifest.
  8. Apply the manifest to begin test execution.
    For example:
  9. View the test results.

    View the test results at http://<cluster_node_IP>:<node_port>/<test_topology_name>/<test_topology_name>.html, where <node_port> is the value you set in step 4.

    For example, if you executed the test profile named fivenode-mplsoverudp on a cluster with node IP address 10.0.0.100 and node port 30000, you'll be able to see the results at http://10.0.0.100:30000/fivenode-mplsoverudp/fivenode-mplsoverudp.html.
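As a sketch of the secret described in step 6, using the key names listed there: all names and values below are placeholders (the encoded strings are base64 of the example credentials "root" and "example-password"), and the kubeconfig values must be replaced with the base64 output for your own clusters.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: valfac-secret              # placeholder secret name
  namespace: jcnr
type: Opaque
data:
  username: cm9vdA==                         # base64 of "root" (placeholder)
  password: ZXhhbXBsZS1wYXNzd29yZA==         # base64 of "example-password"; omit if you use key
  cluster1-kubeconfig: <base64-encoded kubeconfig of cluster1>
  cluster2-kubeconfig: <base64-encoded kubeconfig of cluster2>
```

The cluster key names (cluster1-kubeconfig, cluster2-kubeconfig) must match the cluster names in your test topology manifest.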