
Contrail Service Chaining with cSRX

 

Service chaining is the concept of forwarding traffic through multiple network entities in a specific order, with each entity performing a specific function, such as firewall, IPS, NAT, or load balancing. The legacy way of doing service chaining is to use standalone hardware appliances, but this makes service chaining inflexible and expensive, and lengthens setup times. In dynamic service chaining, network functions are deployed as VMs or containers and can be chained automatically in a logical way. For example, Figure 1 uses Contrail for service chaining between two pods in two different networks, using a cSRX container Layer 4 – Layer 7 firewall to secure the traffic between them.

Figure 1: Service Chaining
Network topology diagram showing Left Ubuntu System in subnet 10.10.10.0/24 with IP .1, CSRX1 connecting two subnets, and Right Ubuntu System in subnet 10.20.20.0/24 with IP .1.
Note

Left and right networks are used here just for simplicity’s sake, to follow the flow from left to right, but you can of course use your own names. Make sure to configure the network before you attach a pod to it, or else the pod will not be created.

Bringing Up Client and CSRX Pods

Let’s create two virtual networks using this YAML file:
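Such a file might look like the following sketch, assuming the standard NetworkAttachmentDefinition object used by Contrail’s Kubernetes integration; the network names (vn-left-pod-network and vn-right-pod-network) are illustrative:

  apiVersion: "k8s.cni.cncf.io/v1"
  kind: NetworkAttachmentDefinition
  metadata:
    name: vn-left-pod-network
    annotations:
      "opencontrail.org/cidr": "10.10.10.0/24"
  spec:
    config: '{ "cniVersion": "0.3.1", "type": "contrail-k8s-cni" }'
  ---
  apiVersion: "k8s.cni.cncf.io/v1"
  kind: NetworkAttachmentDefinition
  metadata:
    name: vn-right-pod-network
    annotations:
      "opencontrail.org/cidr": "10.20.20.0/24"
  spec:
    config: '{ "cniVersion": "0.3.1", "type": "contrail-k8s-cni" }'

Apply it with kubectl apply -f networks.yaml (file name illustrative).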

Verify using Kubectl:
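For example, something along these lines (the network names are the illustrative ones from the file above):

  kubectl get network-attachment-definitions
  kubectl describe network-attachment-definition vn-left-pod-network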

It’s good practice to confirm that these two networks are now in Contrail before proceeding. From the Contrail UI, select Configure > Networking > Networks > default-domain > k8s-default, as shown in Figure 2, which focuses on the left network.

Note

If you use the default namespace in the YAML file for a network, it will create it in the domain default-domain and project k8s-default.

Figure 2: Confirming the Creation of Two Networks
Screenshot of a cloud networking interface showing Kubernetes network configurations and details for k8s-vn-left-pod-network with CIDR 10.10.10.0/24.

Create Client Pods

Now let’s create two Ubuntu Pods, one in each network using the following annotation object:
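A sketch of the left pod, assuming the standard k8s.v1.cni.cncf.io/networks annotation and the network name defined above; the right pod is identical except for its name and the right network:

  apiVersion: v1
  kind: Pod
  metadata:
    name: left-ubuntu
    annotations:
      k8s.v1.cni.cncf.io/networks: vn-left-pod-network
  spec:
    containers:
    - name: left-ubuntu
      image: ubuntu:20.04
      command: ["sleep", "infinity"]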

Create cSRX Pod

Now create a Juniper cSRX container that has one interface on the left network and one interface on the right network, using this YAML file:
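A sketch of such a manifest; the image name/tag and the environment variable are illustrative, so check the cSRX documentation for the exact values for your release:

  apiVersion: v1
  kind: Pod
  metadata:
    name: csrx1
    annotations:
      k8s.v1.cni.cncf.io/networks: '[{ "name": "vn-left-pod-network" }, { "name": "vn-right-pod-network" }]'
  spec:
    containers:
    - name: csrx1
      image: csrx:19.2R1.8          # illustrative image/tag
      stdin: true
      tty: false
      env:
      - name: CSRX_FORWARD_MODE     # routing mode so the cSRX forwards between the two networks
        value: "routing"
      securityContext:
        privileged: true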

Confirm that the interface placement is in the correct network:
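For example, something like this (pod names as assumed above); the Contrail annotations on each pod should list every interface with its network, IP, and MAC:

  kubectl describe pod csrx1 | grep -A8 -i annotations
  kubectl describe pod left-ubuntu | grep -A8 -i annotations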

Note

Each container has one interface belonging to the cluster-wide default network regardless of whether the annotations object is used; the annotations object above simply creates one extra interface and places it in the specified network.

Verify PodIP

To verify the podIP, log in to the left pod, the right pod, and the cSRX to confirm the IP/MAC addresses:
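A sketch of the commands, using the hypothetical pod names above:

  kubectl get pods -o wide
  kubectl exec -it left-ubuntu -- ip addr show
  kubectl exec -it right-ubuntu -- ip addr show
  kubectl exec -it csrx1 -- cli       # drop into the Junos CLI on the cSRX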

Note

Unlike the other pods, the cSRX doesn’t acquire its IP addresses with DHCP, and it starts with the factory-default configuration, so it needs to be configured.

Note

By default, cSRX eth0 is visible only from the shell and is used for management. When attaching networks, the first attached network is mapped to eth1, which is ge-0/0/1, and the second attached network is mapped to eth2, which is ge-0/0/0.

Configure cSRX IP

Configure this basic setup on the cSRX. Use the MAC/IP address mapping from the kubectl describe pod command output to assign the correct IP address to each interface, and configure the default security policy to allow everything for now:
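A minimal sketch, assuming the addressing from Figure 1 (10.10.10.2 on ge-0/0/1 toward the left network, 10.20.20.2 on ge-0/0/0 toward the right network) and trust/untrust zones; adapt the addresses to the MAC/IP mapping reported by kubectl describe pod:

  set interfaces ge-0/0/1 unit 0 family inet address 10.10.10.2/24
  set interfaces ge-0/0/0 unit 0 family inet address 10.20.20.2/24
  set security zones security-zone trust interfaces ge-0/0/1.0
  set security zones security-zone untrust interfaces ge-0/0/0.0
  set security zones security-zone trust host-inbound-traffic system-services all
  set security zones security-zone untrust host-inbound-traffic system-services all
  set security policies default-policy permit-all
  commit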

Verify the IP address assigned on the cSRX:
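From the Junos CLI, for example:

  show interfaces terse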

A ping test from the left pod fails, as there is no route:
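For example, from the left pod toward the right pod’s address:

  kubectl exec -it left-ubuntu -- ping -c 3 10.20.20.1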

Add a static route to the left and right pods and then try to ping again:
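A sketch, assuming the pods are allowed to modify their routing tables (this may require the NET_ADMIN capability):

  # left pod: reach the right network via the cSRX's left interface
  kubectl exec -it left-ubuntu -- ip route add 10.20.20.0/24 via 10.10.10.2
  # right pod: reach the left network via the cSRX's right interface
  kubectl exec -it right-ubuntu -- ip route add 10.10.10.0/24 via 10.20.20.2
  # retry the ping
  kubectl exec -it left-ubuntu -- ping -c 3 10.20.20.1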

The ping still fails, as we haven’t created the service chain yet, which will also take care of the routing. Let’s see what happened to our packets:
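On the cSRX, for example:

  show security flow session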

There’s no session on the cSRX. To troubleshoot the ping issue, log in to the compute node cent22 that hosts this container to dump the traffic using TShark and to check the routing. To get the interfaces linking the containers:
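A sketch, assuming the vRouter agent runs in the vrouter_vrouter-agent_1 container on cent22:

  docker exec -it vrouter_vrouter-agent_1 vif --list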

Note that vif0/3 and vif0/4 are bound to the right pod and are linked to tapeth0-89a4e2 and tapeth1-89a4e2, respectively. The same goes for the left pod with vif0/5 and vif0/6, while vif0/7, vif0/8, and vif0/9 are bound to the cSRX1. From this output you can also see the number of packets/bytes that hit each interface, as well as the VRF. VRF 3 is for the default cluster network, while VRF 6 is for the left network and VRF 5 is for the right network. In Figure 3 you can see the interface mapping from all perspectives (container, Linux, vRouter agent).

Figure 3: Interface Mapping
Network topology diagram showing connectivity between Left-Ubuntu-CS, CSRX1 router, and Right-Ubuntu-CS. Left-Ubuntu-CS in subnet 10.10.10.0/24 connects via Eth1. CSRX1 connects to both Left VN and Right VN with IP ending .2. Right-Ubuntu-CS in subnet 10.20.20.0/24 connects via Eth0.

Let’s try to ping from the left pod to the right pod again, and use TShark on the tap interface for the right pod for further inspection:
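For example, on cent22, using the right pod’s tap interface from the vif output above:

  tshark -i tapeth0-89a4e2 -f "icmp"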

It looks like the ping isn’t reaching the right pod at all. Let’s check the cSRX’s left network tap interface:

We can see the packet, but there is nothing from the cSRX security perspective that would drop this packet.

Check the routing table of the left network VRF by logging in to the vrouter_vrouter-agent_1 container on the compute node:
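A sketch of the commands (VRF 6 is the left network, VRF 5 the right network, per the vif output above):

  docker exec -it vrouter_vrouter-agent_1 rt --dump 6
  docker exec -it vrouter_vrouter-agent_1 rt --dump 5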

Note that 6 is the VRF routing table of the left network; the same goes for the right network VRF routing table, but the route is missing:

So even though all the pods are hosted on the same compute node, they can’t reach each other. And if these pods are hosted on different compute nodes, you have an even bigger problem to solve. Service chaining isn’t just about adjusting the routes on the containers, but also about exchanging routes between the vRouter agents on the compute nodes regardless of the location of the pod (as well as adjusting that automatically if the pod moves to another compute node). Before labbing service chaining, let’s address an important concern for network administrators who are not fans of this kind of CLI troubleshooting… you can do the same troubleshooting using the Contrail Controller GUI.

From the Contrail Controller UI, select Monitor > Infrastructure > Virtual Routers and then select the node that hosts the pod, in our case cent22.local, as shown in the next screen capture, Figure 4.

Figure 4: Contrail Controller GUI in Action
Network management interface showing virtual router cent22.local with selected Interfaces tab. Displays interface details like Name, Label, Status, Type, Network, IP Address, and Instance.

Figure 4 shows the Interfaces tab, which is equivalent to running the vif --list command on the vrouter_vrouter-agent_1 container, but it shows even more information. Notice the mapping between the instance ID and the tap interface naming, where the first six characters of the instance ID are always reflected in the tap interface name.

We are GUI cowboys. Let’s check the routing tables of each VRF by moving to the Routes tab and selecting the VRF you want to see, as in Figure 5.

Figure 5: Checking the Routing Tables of each VRF
Network monitoring interface for virtual routers, showing routing info for cent22.local with options for VRF selection, route types, and search.

Select the left network. The name is longer because it includes the domain (and project). You can confirm there is no 10.20.20.0/24 prefix from the right network. You can also check the MAC addresses learned in the left network by selecting L2, the GUI equivalent of the rt --dump 6 --family bridge command.

Service Chaining

Now let’s use the cSRX for service chaining by means of the Contrail Command GUI. Service chaining consists of four steps that need to be completed in order:

  1. Create a service template;
  2. Create a service instance based on the service template just completed;
  3. Create a network policy and select the service instance you created before;
  4. Apply this network policy onto the network.
Note

Since the Contrail Command GUI is the best solution to provide a single point of management for all environments, we will use it to build service chaining. You can still use the regular Contrail Controller GUI to build service chaining, too.

First let’s log in to the Contrail Command GUI (in our setup https://10.85.188.16:9091/) as shown in Figure 6, and then select Services > Catalog > Create as shown in Figure 7.

Figure 6: Log in to Contrail Command
Contrail Command interface for network management showing menu options, dashboard overview with 1 analytics node, 1 config node, 5 instances, 4 virtual networks, and a graph of analytics messages over time.
Figure 7: Create a New Service
Contrail Command interface for creating a VNF Service Template with fields for Name, Version, and Virtualization Type.

Insert a name for the service template, here myweb-cSRX-CS, then choose v2 for the version and Virtual Machine for the virtualization type. Choose In-Network for the service mode and Firewall for the service type, as shown in Figure 8.

Figure 8: Choosing Service Types
Contrail Command interface for creating a VNF service template with fields for name, version, virtualization type, service mode, and service type.

Next select Management, Left, and Right for the interface types, and then click Create, as shown in Figure 9.

Figure 9: Create Service
User interface for configuring network interfaces with dropdowns set to management, left, and right. Features expand and delete icons. Create and Cancel buttons at the bottom.

Now, select Deployment and click the Create button to create the service instance, as shown in Figure 10.

Figure 10: Deploy Service Instance
User interface for configuring a VNF Service Instance in Juniper's Contrail Command platform with tabs for Service Instances, Tags, and Permissions; fields for Name, Service Template, and Interface Type; and buttons to Create or Cancel.

Name this service instance, then select from the drop-down menu the name of the template you created before. Choose the proper network for each interface type from the perspective of the cSRX, the instance (a container in this case) that will do the service chaining. Click Port Tuples to expand it, as shown in Figure 11. Then bind one interface of the cSRX to each of the three interface types and click Create.

Figure 11: Expanding the Port Tuples
User interface for creating a VNF service instance with fields for name, service template, interface types, port tuples, service health checks, routing policies, route aggregates, allowed address pairs, and static routes.
Note

The name of the VM interface isn’t shown in the drop-down menu; instead, the instance ID is. You can identify it from the tap interface name, as mentioned before. In other words, all you need to know is the first six characters of the instance ID for any interface belonging to that container. All the interfaces in a given instance (VM or container) share the same first characters.

Before proceeding, make sure the statuses of the three interfaces are up and that they show the correct IP addresses of the cSRX instance, as shown in Figure 12.

Figure 12: All Interfaces Up and Running
Contrail Command UI showing VNF Service Instance myweb-CSRX-SI with status Spawning and UUID dcfb8b02-cc3d-4356-978e-0c8116b89947. Networks: Management k8s-default-pod-network, Left k8s-vn-left-pod-network, Right k8s-vn-right-pod-network. Three active interfaces with IPs 10.20.20.2, 10.47.255.248, 10.10.10.2. User in default-domain and k8s-default project as Admin.

To create the network policy, go to Overlay > Network Policies > Create, as in Figure 13.

Figure 13: Create Network Policy
Contrail Command UI for managing Juniper Networks' virtualized environments, highlighting Network Policies in the sidebar with options for monitoring, creating, and deleting policies.

Name your network policy, then in the first rule add the left network as the source network and the right network as the destination, with the action set to pass, as shown in Figure 14.

Figure 14: Source and Destination
User interface of Contrail Command's Create Network Policy section showing policy named left-right networks. Main panel includes policy rules with action set to pass, protocol to ANY, source type to Network, and source dropdown open. Sidebar lists navigation options like Virtual Networks and Network Policies.

Select the advanced options and attach the service instance you created before, then click the Create button, as shown in Figure 15.

Figure 15: Attaching the Service Instance
User interface for creating a network policy in Juniper's Contrail Command with policy named left-right networks. Rules set to pass action for any protocol with bidirectional traffic between k8s-vn-left-pod-network and k8s-vn-right-pod-network. Advanced options enabled. No mirroring options selected. Create and Cancel buttons available.

To attach this network policy to the network, click Virtual Networks in the left-most column, select the left network, and click Edit, as shown in Figure 16.

Figure 16: Attach the Policy to the Network
Contrail Command UI showing Virtual Networks section with sidebar menu and main panel listing networks by name, interfaces, instances, subnets, and VPGs. Action buttons for creating, editing, and deleting networks are present.

In Network Policies, select the network policy you just created from the drop-down list, and then click Save, as shown in Figure 17. Do the same for the right network.

Figure 17: Save the Network Policy
Screenshot of a web-based network management interface for editing a virtual network, featuring navigation menu, main panel titled Edit Virtual Network with Network tab selected, and Network Configuration details including name k8s-vn-right-pod-network, subnet settings, and action buttons Save and Cancel.

Verify Service Chaining

Now let’s verify the effect of this service chaining on routing. From the Contrail Controller module control node (http://10.85.188.16:8143 in our setup), select Monitor > Infrastructure > Virtual Routers, then select the node that hosts the pod, in our case cent22.local, then select the Routes tab and select the left VRF.

Figure 18: Verify Service Chaining
Screenshot of a network interface showing routing for virtual router cent22.local. Displays routing table with next hop types, details, prefixes like 10.10.10.0/24. Navigation menu on left includes Infrastructure and Networking.

You can see that the right network host routes have been leaked into the left network (10.20.20.1/32 and 10.20.20.2/32 in this case).

Now let’s ping the right pod from the left pod to see the session created on the cSRX:
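For example:

  # from the left pod
  kubectl exec -it left-ubuntu -- ping -c 3 10.20.20.1
  # then on the cSRX CLI
  show security flow session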

Security Policy

Create a security policy on the cSRX to allow only HTTP and HTTPS:
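A sketch of one way to do this, using the predefined junos-http and junos-https applications and the trust/untrust zones configured earlier (the policy name is illustrative):

  delete security policies default-policy
  set security policies from-zone trust to-zone untrust policy allow-web match source-address any
  set security policies from-zone trust to-zone untrust policy allow-web match destination-address any
  set security policies from-zone trust to-zone untrust policy allow-web match application junos-http
  set security policies from-zone trust to-zone untrust policy allow-web match application junos-https
  set security policies from-zone trust to-zone untrust policy allow-web then permit
  set security policies default-policy deny-all
  commit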

The ping fails because the policy on the cSRX drops it:

And in the cSRX we can see the session creation: