Validation Factory
Validation Factory provides a framework to test and validate Juniper Cloud-Native Router deployments. It simplifies the evaluation of Cloud-Native Router solutions for customer adoption.
Overview
Validation Factory provides a library of well-defined test profiles that validate basic layer 3 features, including common sanity and performance tests for Cloud-Native Router deployments.
The aim of Validation Factory is to reduce the number of tests you need to manually create and execute when evaluating or qualifying a Cloud-Native Router deployment for a production environment.
Test execution is automated using a Kubernetes custom resource (CR), and the test results are published in a user-friendly format.
Validation Factory is supported for Wind River deployments only.
Table 1 shows the supported features.
| Feature | Description |
|---|---|
| Supported topology | We recommend that you validate with five single-node clusters. |
| Supported routing protocols | BGP, OSPF, IS-IS |
| Supported tests | MPLS-over-UDP, SR-MPLS, SRv6 |
Table 2 shows the Validation Factory components.
| Component | Description |
|---|---|
| Test Profile Library | A Python-based, well-defined collection of test profiles for basic layer 3 Cloud-Native Router functionality. Each profile specifies the test cases to be executed. |
| Kubernetes Operator | The framework is implemented as a Kubernetes operator driven by a custom resource (CR). The operator automates test execution and publishes the results. |
| Test Pods | Test pods execute specific test profiles. The pods contain the necessary tools and libraries for network performance testing. The operator dynamically provisions the pods based on the requested test profile. |
| Results | The results are stored on a filesystem volume on the Kubernetes cluster. |
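Because the results live on a cluster volume, you can browse them from any pod that mounts that volume. Here's a minimal sketch of a throwaway inspection pod; the claim name `validation-factory-results` and the mount path are hypothetical placeholders, so substitute whatever volume your deployment actually provisions:

```yaml
# Hypothetical helper pod for browsing test results.
# The PVC name (validation-factory-results) and the mount path are
# assumptions -- substitute the volume your deployment provisions.
apiVersion: v1
kind: Pod
metadata:
  name: results-browser
  namespace: jcnr
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sleep", "3600"]   # keep the pod alive for interactive inspection
      volumeMounts:
        - name: results
          mountPath: /results
  volumes:
    - name: results
      persistentVolumeClaim:
        claimName: validation-factory-results
```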
Test Topology Manifest
The test topology manifest describes the part of your network that you want to test. Table 3 shows the main configuration parameters.
| Parameter | Description |
|---|---|
| `apiVersion` | Set to `testtopology.validationfactory.juniper.net/v1`. |
| `kind` | Set to `TestTopology`. |
| `metadata.name` | The name you want to give this test topology manifest. |
| `metadata.namespace` | Set to `jcnr`. |
| `spec.global.platform` | Set to `windriver`. |
| `spec.global.auth.secret` | The name of the secret that contains the kubeconfigs of all cluster nodes. |
| `spec.global.crpd.username` | Username to log in to cRPD via SSH. |
| `spec.global.crpd.password` | Password to log in to cRPD via SSH. |
| `spec.cluster` | An array of Cloud-Native Router clusters to be tested. We recommend that you validate with five single-node clusters (see Table 1). |
| `spec.cluster[].name` | Name of the cluster. |
| `spec.cluster[].kubeconfigpath` | Path to the kubeconfig file on a node in the specified cluster. |
| `spec.cluster[].nodes` | An array of Cloud-Native Router nodes in the cluster. |
| `spec.cluster[].nodes[].name` | Name of the node. |
| `spec.cluster[].nodes[].ip` | IP address of the node. |
| `spec.cluster[].nodes[].jumphost` | (Optional) IP address of the jumphost to access the node. |
| `spec.connections` | An array of connections between the specified nodes as per the test topology. You're describing how your nodes are connected. For example, if you have five nodes connected in a full mesh, then this array contains ten connections. |
| `spec.connections[].name` | Name of the connection. |
| `spec.connections[].node1.name` | Name of the node at one end of the connection. |
| `spec.connections[].node1.interface` | Name of the fabric interface on that node. |
| `spec.connections[].node2.name` | Name of the node at the other end of the connection. |
| `spec.connections[].node2.interface` | Name of the fabric interface on that node. |
| `spec.connections[].protocols` | The underlay protocols running on this connection (link). Set to the same value for all connections. Valid values depend on the type of test you're running (see the supported routing protocols in Table 1); the sample manifests below use `isis`. |
| `spec.tunnels`¹ | An array of tunnels for MPLS-over-UDP. Omit this section if you're not testing MPLS-over-UDP. |
| `spec.tunnels[].name` | Name of the tunnel. |
| `spec.tunnels[].type` | Tunnel type. Set to `mpls-over-udp`. |
| `spec.tunnels[].node1.name` | Name of the node at one end of the tunnel. |
| `spec.tunnels[].node2.name` | Name of the node at the other end of the tunnel. |
| `spec.segmentroutings`¹ | An array of segment routing paths for SR-MPLS and SRv6. Omit this section if you're not testing SR-MPLS or SRv6. |
| `spec.segmentroutings[].name` | Name of the segment routing path. |
| `spec.segmentroutings[].type` | Path type. Set to the type of segment routing path you're testing (SR-MPLS or SRv6). |
| `spec.segmentroutings[].endpoints` | The pair of endpoints for this path. For example: `endpoints: [jcnr-node5, jcnr-node6]`. |
| `spec.segmentroutings[].transit` | An array of transit hops for SRv6 paths. For example: `transit: [jcnr-node7, jcnr-node8]`. |

¹ Each test topology manifest is limited to only one type of test: MPLS-over-UDP, SR-MPLS, or SRv6.
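The `auth.secret` parameter refers to a standard Kubernetes Secret in the `jcnr` namespace. Here's a sketch of such a secret, assuming one kubeconfig entry per cluster; the key names (`cluster1.kubeconfig`, `cluster2.kubeconfig`) are illustrative, not mandated:

```yaml
# Illustrative Secret backing spec.global.auth.secret.
# Key names are assumptions -- store one kubeconfig per cluster node
# under whatever keys your deployment expects.
apiVersion: v1
kind: Secret
metadata:
  name: sshauth
  namespace: jcnr
type: Opaque
stringData:
  cluster1.kubeconfig: |
    # contents of /etc/kubernetes/admin.conf from a cluster1 node
  cluster2.kubeconfig: |
    # contents of /etc/kubernetes/admin.conf from a cluster2 node
```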
Here's a sample two-node test topology manifest:
```yaml
apiVersion: testtopology.validationfactory.juniper.net/v1
kind: TestTopology
metadata:
  name: twonode-mplsoverudp
  namespace: jcnr
spec:
  global:
    platform: windriver
    auth:
      secret: sshauth
    crpd:
      username: root
      password: <password>
  cluster:
    - name: cluster1
      kubeconfigpath: /etc/kubernetes/admin.conf
      nodes:
        - name: jcnr-node5
          ip: 10.108.33.135
    - name: cluster2
      kubeconfigpath: /etc/kubernetes/admin.conf
      nodes:
        - name: jcnr-node6
          ip: 10.108.33.136
  connections:
    - name: toDC1
      node1:
        name: jcnr-node5
        interface: ens1f2
      node2:
        name: jcnr-node6
        interface: ens1f2
      protocols:
        - isis
  tunnels:
    - name: tun1
      type: mpls-over-udp
      node1:
        name: jcnr-node5
      node2:
        name: jcnr-node6
```
Here's a sample five-node test topology manifest:
```yaml
apiVersion: testtopology.validationfactory.juniper.net/v1
kind: TestTopology
metadata:
  name: fivenode-mplsoverudp
  namespace: jcnr
spec:
  global:
    platform: windriver
    auth:
      secret: sshauth
    crpd:
      username: root
      password: <password>
  cluster:
    - name: PE1
      kubeconfigpath: /etc/kubernetes/admin.conf
      nodes:
        - name: jcnr-node14
          ip: 10.204.8.14
    - name: PE2
      kubeconfigpath: /etc/kubernetes/admin.conf
      nodes:
        - name: jcnr-node16
          ip: 10.204.8.16
    - name: P1
      kubeconfigpath: /etc/kubernetes/admin.conf
      nodes:
        - name: jcnr-node13
          ip: 10.204.8.13
    - name: P2
      kubeconfigpath: /etc/kubernetes/admin.conf
      nodes:
        - name: jcnr-node12
          ip: 10.204.8.12
    - name: P3
      kubeconfigpath: /etc/kubernetes/admin.conf
      nodes:
        - name: jcnr-node15
          ip: 10.204.8.15
  connections:
    - name: PE1andP1
      node1:
        name: jcnr-node14
        interface: ens1f0np0
      node2:
        name: jcnr-node13
        interface: ens1f0
      protocols:
        - isis
    - name: P1toPE2
      node1:
        name: jcnr-node13
        interface: ens1f1
      node2:
        name: jcnr-node16
        interface: ens2f0
      protocols:
        - isis
    - name: PE1andP2
      node1:
        name: jcnr-node14
        interface: ens1f1np1
      node2:
        name: jcnr-node12
        interface: ens1f1
      protocols:
        - isis
    - name: P2andP3
      node1:
        name: jcnr-node12
        interface: ens2f0
      node2:
        name: jcnr-node15
        interface: ens2f0
      protocols:
        - isis
    - name: P3toPE2
      node1:
        name: jcnr-node15
        interface: ens2f1
      node2:
        name: jcnr-node16
        interface: ens2f1
      protocols:
        - isis
    - name: P1toP2
      node1:
        name: jcnr-node13
        interface: ens1f2
      node2:
        name: jcnr-node12
        interface: ens1f0
      protocols:
        - isis
    - name: P1toP3
      node1:
        name: jcnr-node13
        interface: ens1f3
      node2:
        name: jcnr-node15
        interface: ens1f0
      protocols:
        - isis
  tunnels:
    - name: tun1
      type: mpls-over-udp
      node1:
        name: jcnr-node14
      node2:
        name: jcnr-node16
```
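Both samples above exercise MPLS-over-UDP. For an SR-MPLS or SRv6 test, you'd replace the `tunnels` section with a `segmentroutings` section. Here's a sketch built from the parameter descriptions in Table 3; the path name `sr1` and the `type` value are illustrative, since the samples don't show literal values for this section:

```yaml
# Illustrative segmentroutings section (replaces tunnels in the manifest).
# The name and type values are assumptions based on Table 3.
segmentroutings:
  - name: sr1
    type: srv6            # set to the segment routing type you're testing
    endpoints:            # the pair of endpoints for this path
      - jcnr-node5
      - jcnr-node6
    transit:              # transit hops; SRv6 paths only
      - jcnr-node7
      - jcnr-node8
```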
Execute the Test Profiles
You can run the Validation Factory tests on any host that has access to the clusters you want to test. Typically, however, you would run the tests on the installation host in one of those clusters. The installation host is where you installed Helm and where you ran the Cloud-Native Router installation for that cluster.
All clusters must have Cloud-Native Router installed, and all nodes in all clusters must allow the same login credentials (username and password or SSH key).