
Contrail Service Chaining with cSRX

 

Service chaining is the concept of forwarding traffic through multiple network entities in a specific order, where each entity performs a particular function, such as firewall, IPS, NAT, or load balancing. The legacy way of doing service chaining is to use standalone hardware appliances, but this makes service chaining inflexible and expensive, and lengthens setup times. In dynamic service chaining, network functions are deployed as VMs or containers and can be chained automatically in a logical way. For example, Figure 1 uses Contrail for service chaining between two pods in two different networks, using a cSRX container (a Layer 4 – Layer 7 firewall) to secure the traffic between them.

Figure 1: Service Chaining
Note

Left and right networks are used here just for simplicity’s sake, to follow the flow from left to right, but you can, of course, use your own names. Make sure to configure the network before you attach a pod to it; otherwise the pod will not be created.

Bringing Up Client and cSRX Pods

Let’s create two virtual networks using this YAML file:
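A minimal sketch of such a manifest, assuming the Contrail Kubernetes CNI and the hypothetical network names left-vn and right-vn (the left subnet 10.10.10.0/24 is an assumption; 10.20.20.0/24 is the right-network prefix seen later in the routing tables):

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: left-vn                              # assumed name for the left network
  annotations:
    "opencontrail.org/cidr": "10.10.10.0/24" # assumed left-network subnet
spec:
  config: '{ "cniVersion": "0.3.1", "type": "contrail-k8s-cni" }'
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: right-vn                             # assumed name for the right network
  annotations:
    "opencontrail.org/cidr": "10.20.20.0/24" # right-network subnet used later in this chapter
spec:
  config: '{ "cniVersion": "0.3.1", "type": "contrail-k8s-cni" }'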

Verify using Kubectl:
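A check along these lines lists the NetworkAttachmentDefinition objects that were just created (object names follow the sketch above):

kubectl get network-attachment-definitions
kubectl get network-attachment-definitions left-vn -o yaml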

It’s good practice to confirm that these two networks are now in Contrail before proceeding. From the Contrail UI, select Configure > Networking > Networks > default-domain > k8s-default, as shown in Figure 2, which focuses on the left network.

Note

If you use the default namespace in the YAML file for a network, the network is created in the default-domain domain and the k8s-default project.

Figure 2: Confirming the Creation of Two Networks

Create Client Pods

Now let’s create two Ubuntu pods, one in each network, using the following annotations object:
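A minimal sketch for the left pod, assuming the pod name ubuntu-left and the network name left-vn from the earlier sketch (the right pod would be identical, with right-vn):

apiVersion: v1
kind: Pod
metadata:
  name: ubuntu-left                      # assumed pod name
  annotations:
    k8s.v1.cni.cncf.io/networks: left-vn # attach one extra interface in the left network
spec:
  containers:
  - name: ubuntu
    image: ubuntu
    command: ["sleep", "infinity"]       # keep the pod running for testing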

Create cSRX Pod

Now create a Juniper cSRX container that has one interface on the left network and one interface on the right network, using this YAML file:
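A sketch along these lines, assuming a locally available cSRX image (the image tag, pod name, and environment variable are assumptions based on Juniper’s cSRX container guidelines; the network names follow the earlier sketch):

apiVersion: v1
kind: Pod
metadata:
  name: csrx1                            # assumed pod name (referred to as cSRX1 below)
  annotations:
    k8s.v1.cni.cncf.io/networks: '[{ "name": "left-vn" }, { "name": "right-vn" }]'
spec:
  containers:
  - name: csrx1
    image: csrx:19.2R1.8                 # assumed image name/tag
    securityContext:
      privileged: true                   # cSRX runs in privileged mode
    env:
    - name: CSRX_FORWARD_MODE            # assumed; routing mode rather than wire mode
      value: "routing"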

Confirm that each interface is placed in the correct network:
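For example, the pod’s annotations and network status can be inspected with (pod name per the sketch above):

kubectl describe pod csrx1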

Note

Each container has one interface belonging to the cluster-wide-default network regardless of whether the annotations object is used; the annotations object above only creates, and places one extra interface in, the specified network.

Verify PodIP

To verify the podIP, log in to the left pod, the right pod, and the cSRX to confirm the IP/MAC addresses:
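Something along these lines, assuming the pod names from the earlier sketches and that the Ubuntu image has iproute2 installed (on the cSRX, the cli command drops you from the container shell into the Junos CLI):

kubectl exec -it ubuntu-left -- ip address
kubectl exec -it ubuntu-right -- ip address
kubectl exec -it csrx1 -- cli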

Note

Unlike the other pods, the cSRX doesn’t acquire its IP addresses with DHCP, and it starts with the factory-default configuration, hence it needs to be configured.

Note

By default, cSRX eth0 is visible only from the shell and is used for management. When attaching networks, the first attached network is mapped to eth1, which is ge-0/0/1, and the second attached network is mapped to eth2, which is ge-0/0/0.

Configure cSRX IP

Configure this basic setup on the cSRX. To assign the correct IP addresses, use the MAC/IP address mapping from the kubectl describe pod command output, and configure the default security policy to allow everything for now:
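A sketch of that baseline, with placeholder addresses (replace 10.10.10.3 and 10.20.20.3 with the addresses Contrail allocated to the cSRX per the kubectl describe pod output; the zone names left and right are assumptions):

set interfaces ge-0/0/1 unit 0 family inet address 10.10.10.3/24
set interfaces ge-0/0/0 unit 0 family inet address 10.20.20.3/24
set security zones security-zone left interfaces ge-0/0/1.0 host-inbound-traffic system-services ping
set security zones security-zone right interfaces ge-0/0/0.0 host-inbound-traffic system-services ping
set security policies default-policy permit-all
commit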

Verify the IP address assigned on the cSRX:
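On the cSRX CLI, for example:

show interfaces terse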

A ping test on the left pod would fail as there is no route:

Add a static route to the left and right pods and then try to ping again:
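For example, assuming the pod names and cSRX addresses used above, and that the pods are allowed to modify their routing tables (10.20.20.2 stands in for the right pod’s address):

kubectl exec -it ubuntu-left -- ip route add 10.20.20.0/24 via 10.10.10.3
kubectl exec -it ubuntu-right -- ip route add 10.10.10.0/24 via 10.20.20.3
kubectl exec -it ubuntu-left -- ping -c 3 10.20.20.2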

The ping still fails, as we haven’t created the service chaining yet, which will also take care of the routing. Let’s see what happened to our packets:
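On the cSRX, the session table can be checked with:

show security flow session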

There’s no session on the cSRX. To troubleshoot the ping issue, log in to the compute node cent22 that hosts this container to dump the traffic using TShark and check the routing. To get the interface linking the containers:
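On the compute node, the vRouter interfaces can be listed from the agent container, for example:

docker exec -it vrouter_vrouter-agent_1 vif --list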

Note that vif0/3 and vif0/4 are bound with the right pod and are linked to tapeth0-89a4e2 and tapeth1-89a4e2, respectively. The same goes for the left pod with vif0/5 and vif0/6, while vif0/7, vif0/8, and vif0/9 are bound with the cSRX1. From this you can also see the number of packets/bytes that hit each interface, as well as the VRF. VRF 3 is for the default-cluster-network, while VRF 6 is for the left network and VRF 5 is for the right network. In Figure 3 you can see the interface mapping from all the perspectives (container, Linux, vRouter agent).

Figure 3: Interface Mapping

Let’s try to ping from the left pod to the right pod again, and use TShark on the tap interface for the right pod for further inspection:
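For example, capturing ICMP on the right pod’s right-network tap interface identified in the vif listing above:

tshark -i tapeth1-89a4e2 -f "icmp"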

Looks like the ping isn’t reaching the right pod at all; let’s check the cSRX’s left network tap interface:

We can see the packet, but from the cSRX security perspective there is nothing that would drop this packet.

Check the routing table of the left network VRF by logging in to the vrouter_vrouter-agent_1 container on the compute node:
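For example, using the rt utility inside the agent container (VRF 6 per the vif listing above):

docker exec -it vrouter_vrouter-agent_1 rt --dump 6 --family inet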

Note that 6 is the VRF routing table of the left network; the same goes for the right network’s VRF routing table, but there is a missing route:

So, even if all the pods are hosted on the same compute node, they can’t reach each other. And if these pods are hosted on different compute nodes, then you have a bigger problem to solve. Service chaining isn’t just about adjusting the routes on the containers, but also about exchanging routes between the vRouter agents on the compute nodes regardless of the location of the pod (as well as adjusting that automatically if the pod moves to another compute node). Before labbing service chaining, let’s address an important concern for network administrators who are not fans of this kind of CLI troubleshooting… you can do the same troubleshooting using the Contrail Controller GUI.

From the Contrail Controller UI, select Monitor > Infrastructure > Virtual Routers and then select the node that hosts the pod, in our case cent22.local, as shown in the next screen capture, Figure 4.

Figure 4: Contrail Controller GUI in Action

Figure 4 shows the Interface tab, which is equivalent to running the vif -l command on the vrouter_vrouter-agent_1 container, but it shows even more information. Notice the mapping between the instance ID and the tap interface naming, where the first six characters of the instance ID are always reflected in the tap interface name.

We are GUI cowboys. Let’s check the routing tables of each VRF by moving to the Routes tab and selecting the VRF you want to see, as in Figure 5.

Figure 5: Checking the Routing Tables of each VRF

Select the left network. The name is longer because it includes the domain (and project). You can confirm there is no 10.20.20.0/24 prefix from the right network. You can also check the MAC addresses learned in the left network by selecting L2, which is the GUI equivalent of the rt --dump 6 --family bridge command.

Service Chaining

Now let’s utilize the cSRX for service chaining using the Contrail Command GUI. Service chaining consists of four steps that need to be completed in order:

  1. Create a service template;
  2. Create a service instance based on the service template just completed;
  3. Create a network policy and select the service instance you created before;
  4. Apply this network policy onto the network.
Note

Since the Contrail Command GUI is the best solution to provide a single point of management for all environments, we will use it to build the service chaining. You can still use the normal Contrail Controller GUI to build service chaining, too.

First let’s log in to the Contrail Command GUI (in our setup https://10.85.188.16:9091/) as shown in Figure 6, and then select Service > Catalog > Create as shown in Figure 7.

Figure 6: Log in to Contrail Command
Figure 7: Create a New Service

Insert a name for the service template, here myweb-cSRX-CS, then choose v2 and Virtual Machine. Choose In-Network as the service mode and Firewall as the service type, as shown in Figure 8.

Figure 8: Choosing Service Types

Next select the Management, Left, and Right interfaces, and then click Create.

Figure 9: Create Service

Now, select Deployment and click on the Create button to create the service instance, as shown in Figure 10.

Figure 10: Deploy Service Instance

Name this service instance, then select from the drop-down menu the name of the template you created before, and choose the proper network from the perspective of the cSRX, which is the instance (a container in this case) that will do the service chaining. Click on Port Tuples to expand it, as shown in Figure 11. Then, for each of the three interfaces, bind one interface of the cSRX and click Create.

Figure 11: Expanding the Port Tuples
Note

The name of the VM interface isn’t shown in the drop-down menu; instead, it’s the instance ID. You can identify it from the tap interface name, as we mentioned before. In other words, all you have to know is the first six characters of the instance ID for any interface belonging to that container. All the interfaces in a given instance (VM or container) share the same first characters.

Before proceeding, make sure the statuses of the three interfaces are up and that they are showing the correct IP addresses of the cSRX instance, as shown in Figure 12.

Figure 12: All Interfaces Up and Running

To create the network policy, go to Overlay > Network Policies > Create, as in Figure 13.

Figure 13: Create Network Policy

Name your network policy, then in the first rule add the left network as the source network and the right network as the destination, with the action pass.

Figure 14: Source and Destination

Select the advanced option and attach the service instance you created before, then click the Create button.

Figure 15: Attaching the Service Instance

To attach this network policy to the network, click on Virtual Network in the left-most column, select the left network, and edit it.

Figure 16: Attach the Policy to the Network

In Network Policies, select the network policy you just created from the drop-down menu, and then click Save. Do the same for the right network.

Figure 17: Save the Network Policy

Verify Service Chaining

Now let’s verify the effect of this service chaining on routing. From the Contrail Controller module control node (http://10.85.188.16:8143 in our setup), select Monitor > Infrastructure > Virtual Routers, then select the node that hosts the pod, in our case cent22.local, then select the Routes tab and select the left VRF.

Figure 18: Verify Service Chaining

You can see that the right network host routes have been leaked to the left network (10.20.20.1/32, 10.20.20.2/32 in this case).

Now let’s ping the right pod from the left pod to see the session created on the cSRX:
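For example, assuming the pod names and addresses from the earlier sketches:

kubectl exec -it ubuntu-left -- ping -c 3 10.20.20.2

And on the cSRX:

show security flow session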

Security Policy

Create a security policy on the cSRX to allow only HTTP and HTTPS:
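A sketch of such a policy, assuming the zone names left and right from the earlier baseline and the predefined junos-http/junos-https applications:

delete security policies default-policy permit-all
set security policies default-policy deny-all
set security policies from-zone left to-zone right policy allow-web match source-address any
set security policies from-zone left to-zone right policy allow-web match destination-address any
set security policies from-zone left to-zone right policy allow-web match application [ junos-http junos-https ]
set security policies from-zone left to-zone right policy allow-web then permit
commit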

The ping fails because the policy on the cSRX drops it:

And in the cSRX we can see the session creation: