Configure cSRX IP

 

Configure this basic setup on the cSRX. Use the MAC/IP address mapping from the kubectl describe pod command output to assign the correct IP addresses, and configure the default security policy to allow everything for now:
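
A minimal configuration sketch along these lines (the interface names assume the cSRX maps its two data interfaces to ge-0/0/0 and ge-0/0/1; the addresses are placeholders to be replaced with the ones reported by kubectl describe pod):

    set interfaces ge-0/0/0 unit 0 family inet address <left-network-ip>/24
    set interfaces ge-0/0/1 unit 0 family inet address <right-network-ip>/24
    set security policies default-policy permit-all
    commit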

Verify the IP address assigned on the cSRX:
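
For example, from the Junos operational mode on the cSRX:

    show interfaces terse
    show configuration security policies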

A ping test from the left pod fails because there is no route:
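
For example (the pod name left and the target address are placeholders for your setup):

    kubectl exec -it left -- ping -c 3 <right-pod-ip>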

Add a static route to the left and right pods and then try to ping again:
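
A sketch of the route changes, assuming the pod images ship iproute2 and the pods are allowed to modify their routing tables (pod names, prefixes, and next hops are placeholders):

    # left pod: reach the right network via the cSRX left interface
    kubectl exec -it left -- ip route add <right-prefix> via <csrx-left-ip>
    # right pod: reach the left network via the cSRX right interface
    kubectl exec -it right -- ip route add <left-prefix> via <csrx-right-ip>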

The ping still fails because we haven't created the service chaining yet, and service chaining will also take care of the routing. Let's see what happened to our packets:
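
A quick check from the cSRX operational mode to see whether the traffic created any flow session:

    show security flow session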

There’s no session on the cSRX. To troubleshoot the ping issue, log in to the compute node cent22 that hosts this container to dump the traffic using TShark and check the routing. To get the interface linking the containers:
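
From the compute node, the vif utility inside the vRouter agent container lists the virtual interfaces and their tap bindings (the one-shot docker exec form assumes a docker-compose based Contrail deployment):

    docker exec -it vrouter_vrouter-agent_1 vif --list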

Note that vif0/3 and vif0/4 are bound to the right pod and linked to tapeth0-89a4e2 and tapeth1-89a4e2, respectively. The same goes for the left pod with vif0/5 and vif0/6, while vif0/7, vif0/8, and vif0/9 are bound to the cSRX1. From this output you can also see the number of packets/bytes that hit each interface, as well as the VRF: VRF 3 is for the default-cluster-network, VRF 6 is for the left network, and VRF 5 is for the right network. In Figure 1 you can see the interface mapping from all the perspectives (container, Linux, vRouter agent).

Figure 1: Interface Mapping

Let’s try to ping from the left pod to the right pod again, and use TShark on the tap interface for the right pod for further inspection:
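
For example, capturing ICMP on the tap interface that the vif output tied to the right pod (assuming tapeth1-89a4e2 is the right pod's interface in the right network):

    tshark -i tapeth1-89a4e2 -f "icmp"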

It looks like the ping isn't reaching the right pod at all. Let's check the cSRX's left network tap interface:
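
Same idea, with the tap name below as a placeholder for whichever tap the vif output bound to the cSRX's left-network interface:

    tshark -i <csrx-left-tap> -f "icmp"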

We can see the packet, but from the cSRX security perspective there is nothing that would drop it.

Check the routing table of the left network VRF by logging in to the vrouter_vrouter-agent_1 container on the compute node:
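
For example, dumping the inet routes of VRF 6 (the left network, as noted next) with the rt utility, again using a one-shot docker exec instead of an interactive login:

    docker exec -it vrouter_vrouter-agent_1 rt --dump 6 --family inet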

Note that 6 is the VRF ID of the left network's routing table. The same check works for the right network's VRF routing table, but there is a missing route:
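
The same dump for the right network VRF (VRF 5 according to the vif output):

    docker exec -it vrouter_vrouter-agent_1 rt --dump 5 --family inet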

So, even if all the pods are hosted on the same compute node, they can't reach each other, and if the pods are hosted on different compute nodes you have an even bigger problem to solve. Service chaining isn't just about adjusting the routes on the containers; it's also about exchanging routes between the vRouter agents on the compute nodes, regardless of where a pod lives (and adjusting them automatically if the pod moves to another compute node). Before labbing service chaining, let's address an important concern for network administrators who aren't fans of this kind of CLI troubleshooting: you can do the same troubleshooting using the Contrail Controller GUI.

From the Contrail Controller UI, select Monitor > Infrastructure > Virtual Routers and then select the node that hosts the pod, in our case cent22.local, as shown in the next screen capture, Figure 2.

Figure 2: Contrail Controller GUI in Action

Figure 2 shows the Interface tab, which is equivalent to running the vif --list command on the vrouter_vrouter-agent_1 container, but it shows even more information. Notice the mapping between the instance ID and the tap interface name: the first six characters of the instance ID are always reflected in the tap interface name.

We are GUI cowboys. Let’s check the routing tables of each VRF by moving to the Routes tab and selecting the VRF you want to see, as in Figure 3.

Figure 3: Checking the Routing Tables of each VRF

Select the left network; the name is longer because it includes the domain (and project). You can confirm there is no 10.20.20.0/24 prefix from the right network. You can also check the MAC addresses learned in the left network by selecting L2, the GUI equivalent of the rt --dump 6 --family bridge command.
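
For reference, the equivalent CLI check would be run inside the vRouter agent container, for example:

    docker exec -it vrouter_vrouter-agent_1 rt --dump 6 --family bridge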