Tracing the vRouter Packet Path

Contrail Networking vRouter is the component that takes packets from virtual machines (VMs) and forwards them to their destinations. Tracing is a useful tool for debugging the packet path.

In this topic, we trace the vRouter packet path in the following use cases: unicast intra-VN traffic, unicast inter-VN traffic, and broadcast, unknown unicast, and multicast (BUM) traffic.

Unicast Packet Path - Intra-VN

This procedure steps through debugging the unicast packet path for intra-virtual network (intra-VN) traffic from VM1 to VM2 (on the same compute node) and VM3 (on a different compute node). Intra-VN traffic stays within the same virtual network. In this example, the VMs listed are in the same subnet, 10.1.1.0/24. A consolidated sketch of the commands follows each procedure in this section.

  • VM1 - IP address 10.1.1.5/32 (Compute 1)

  • VM2 - IP address 10.1.1.6/32 (Compute 1)

  • VM3 - IP address 10.1.1.7/32 (Compute 2)

Intra-Compute Use Case

  1. Discover the vif interfaces corresponding to the virtual machine interfaces (VMIs) of the VMs by using the command:

    You can also discover the vif interfaces by entering the introspect URL.

    Example:

    Note:

    Replace the IP address with the actual compute IP address in the introspect HTTP URL.

  2. Run the vif --get <index> command to verify that the virtual routing and forwarding (VRF) and Policy flags are set in the vRouter interface (VIF).

    Example output verifying flags for each vif:

  3. Run the following command to display all of the entries from the bridge table:

    Example:

    Highlighted in the example is the destination MAC address of the destination VM in the bridge table and the next-hop identifier associated with it.

  4. Run nh --get <nh id> to display the next-hop details.

    Example:

    In the example, Oif:6 is the OIF index in the next hop, which is the outgoing interface for the packet. The Encap Data corresponds to the L2 encapsulation that is added to the IP packet before the packet is forwarded to the outgoing interface.

  5. Run vif --get <oifindex> to get the outgoing VIF details.

    Example:

    The received packet (RX) and transmitted packet (TX) counters for the corresponding VIF interfaces are incremented while traffic is flowing.

  6. Run the flow -l command to list the flows created. If the Policy flag is enabled on the VIFs, a flow is created as shown in the example.

    Example: Ping 10.1.1.6 from 10.1.1.5.

    The statistics in the forward and reverse flow are incremented when traffic is flowing through. If statistics are not getting incremented for a particular flow, that can indicate a potential problem in that direction. The flow action should be F or N for the packets to be forwarded or NATed out. A flow action of D indicates that packets will be dropped.

  7. Run the vrouter_agent_debug script to collect all of the relevant logs.
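
As a consolidated sketch, the intra-compute trace uses the following command sequence. It assumes the vRouter utilities are run on Compute 1 (for example, from the vrouter-agent container); the interface index 3, VRF 2, and next-hop ID 28 are placeholder values, and the bridge-table dump assumes the standard rt utility syntax:

    # 1. List all vif interfaces; note the index and VRF of the source VMI
    vif --list

    # 2. Verify the VRF and Policy flags on the source vif (index 3 is a placeholder)
    vif --get 3

    # 3. Dump the bridge table of that VRF and find the destination VM's MAC
    #    address and the next-hop ID associated with it (VRF 2 is a placeholder)
    rt --dump 2 --family bridge

    # 4. Inspect the next hop; Oif gives the outgoing vif index (28 is a placeholder)
    nh --get 28

    # 5. Check the outgoing vif and watch its RX/TX counters (Oif:6 from the example)
    vif --get 6

    # 6. List the flows and check the forward/reverse flow actions and statistics
    flow -l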

Inter-Compute Use Case

Inter-compute traffic is between VMs on different compute nodes. In the inter-compute case, the next-hop lookup points to the tunnel that takes the packet to the other compute node. The bridge entry also indicates the label/VNID added to the packet during encapsulation.

For Compute 1:

  1. Discover the vif interfaces corresponding to the virtual machine interfaces (VMIs) of the VMs by using the command:

    You can also discover the vif interfaces by entering the introspect URL:

    Example:

    Note:

    Replace the IP address with the actual compute IP address in the introspect HTTP URL.

  2. Run the vif --get <index> command to verify that the virtual routing and forwarding (VRF) and Policy flags are set in the vRouter interface (VIF).

    Example output verifying flags for each vif:

  3. Run the following command to display all of the entries from the bridge table:

    Example:

    In the example, 2:99:ef:64:96:e1 belongs to IP address 10.1.1.7 and label 27 is used to encapsulate the packet.

  4. Run nh --get <nh id> to get the next hop details.

    Example:

    In the example, the next-hop output indicates the next-hop type as Tunnel, encapsulation used as MPLSoGRE, the outgoing interface as Oif:0, and the corresponding source and destination IP addresses of the tunnel.
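
As a sketch, the Compute 1 side of the inter-compute trace looks like the following. The VRF ID 2 and next-hop ID 21 are placeholders, and the bridge-table dump assumes the standard rt utility syntax; the MAC address and label repeat the values from the example above:

    # Dump the bridge table and find the destination VM's MAC (2:99:ef:64:96:e1),
    # the next-hop ID associated with it, and the label (27) used for encapsulation
    rt --dump 2 --family bridge

    # Inspect the next hop; for inter-compute traffic it is a tunnel next hop
    # (MPLSoGRE in the example) with Oif:0 and the tunnel source/destination IPs
    nh --get 21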

For Compute 2:

  1. Run the mpls --get <label> command to see the next hop mapped to the particular incoming MPLS label.

    Example:

  2. Run nh --get <nh_id> to get the next hop details.

    Example:

  3. Run vif --get <oifindex> to get the outgoing VIF details.

    Example:

    Note:

    If you are using VXLAN encapsulation, do the following on Compute 2:

    1. For Step 1, instead of running the mpls --get command, run the vxlan --get <vxlanid> command to get the mapping from VXLAN ID to the next hop.

    2. With VXLAN, the next hop points to a VRF-translated next hop. Use the bridge lookup in the corresponding VRF, as shown in Step 3, to get the final outgoing next hop, which points to the VIF interface.
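
On Compute 2, the lookup chain can be sketched as follows. The label 27 repeats the value from the Compute 1 example; the VXLAN ID, next-hop ID, and vif index are placeholders:

    # MPLS-based encapsulation (for example, MPLSoGRE): map the incoming label to a next hop
    mpls --get 27

    # VXLAN encapsulation: map the incoming VXLAN ID to a next hop instead;
    # the next hop points to a VRF-translated next hop, so do the bridge lookup
    # in that VRF to find the final outgoing next hop
    vxlan --get 4

    # Inspect the next hop, then the outgoing vif that it points to
    nh --get 30
    vif --get 5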

Unicast Packet Path - Inter-VN

The following procedure steps through debugging the packet path from VM1 to VM2 (on the same compute node) and VM3 (on a different compute node). Inter-VN traffic is routed between different virtual networks. In this example, VM1 is in subnet 10.1.1.0/24, while VM2 and VM3 are in subnet 20.1.1.0/24. A consolidated sketch of the commands follows each procedure in this section.

  • VM1 - IP address 10.1.1.5/32 (Compute 1)

  • VM2 - IP address 20.1.1.6/32 (Compute 1)

  • VM3 - IP address 20.1.1.5/32 (Compute 2)

Note:

Replace the IP address with the actual compute IP address in all of the introspect URLs.

Intra-Compute Use Case

  1. Discover the vif interfaces corresponding to the virtual machine interfaces (VMIs) of the VMs by using the command:

    You can also discover the vif interfaces by entering the introspect URL:

    Example:

    Note:

    Replace the IP address with the actual compute IP address in the introspect HTTP URLs.

  2. Run the vif --get <index> command to verify that the virtual routing and forwarding (VRF) and Policy flags are set in the vRouter interface (VIF).

    Example output verifying flags for each vif:

  3. Run the following command to display all of the entries from the bridge table:

    Example:

    For inter-virtual network (inter-VN) traffic, the packets are Layer 3 routed instead of Layer 2 switched. The vRouter proxy ARPs for the destination network, providing its virtual MAC address 0:0:5e:0:1:0 in response to the ARP request from the source. This can be seen from the rt --dump of the source VN inet table. As a result, the packet is received by the vRouter, which does the route lookup to send the packet to the correct destination.

  4. Run nh --get <nh id> to display the next-hop details.

    Example:

  5. Run rt --dump 2 --family inet | grep <ip address> to display inet family routes for the specified IP address.

    Example:

  6. Run nh --get <nh id> to get the next hop details.

    Example:

  7. Run vif --get <oifindex> to get the outgoing VIF details.

    Example:
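
A sketch of the inter-VN, intra-compute trace on Compute 1 follows (bridge-table syntax assumed). The vif indexes, VRF ID, and next-hop IDs are placeholders; the destination IP 20.1.1.6 is VM2 from the example:

    # Verify the VRF and Policy flags on the source vif
    vif --get 3

    # The bridge lookup in the source VRF resolves the vRouter's virtual MAC
    # (0:0:5e:0:1:0), because inter-VN traffic is routed rather than switched
    rt --dump 2 --family bridge

    # Route lookup for the destination VM's IP in the source VN inet table
    rt --dump 2 --family inet | grep 20.1.1.6

    # Inspect the next hop returned by the route lookup, then the outgoing vif
    nh --get 32
    vif --get 4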

Inter-Compute Use Case

Inter-compute traffic is between VMs on different compute nodes. In the inter-compute case, the next hop that is looked up to send the packet out points to a tunnel next hop. Depending on the encapsulation priority, the appropriate encapsulation is added and the packet is tunneled out.

For Compute 1:

  1. Run rt --dump 2 --family inet | grep <ip address> to display inet family routes for a specified IP address.

    Example:

  2. Run nh --get <nh id> to display the next-hop details; the next hop points to a tunnel next hop.

    Example:
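
A sketch of the Compute 1 commands for the inter-VN, inter-compute case; the VRF ID and next-hop ID are placeholders, and 20.1.1.5 is VM3 from the example:

    # Route lookup for the destination VM's IP in the source VRF
    rt --dump 2 --family inet | grep 20.1.1.5

    # The next hop is a tunnel next hop toward Compute 2, with the encapsulation
    # chosen according to the configured encapsulation priority
    nh --get 25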

For Compute 2:

  1. Run the mpls --get <label> command to see the next hop mapped to the particular incoming MPLS label.

    Example:

  2. Run nh --get <nh id> to view the next hop details.

    Example:

    In the example, Oif:4 is the OIF index in the next hop, which is the outgoing interface for the packet. The Encap Data corresponds to the L2 encapsulation that is added to the IP packet before the packet is forwarded to the outgoing interface.

  3. Run vif --get <oifindex> to get the outgoing VIF details.

    Example:

    For details about EVPN type 5 routing in Contrail Networking, see Support for EVPN Route Type 5.
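
A sketch of the Compute 2 commands for this case; the incoming label and next-hop ID are placeholders, and the vif index 4 corresponds to Oif:4 in the example:

    # Map the incoming MPLS label to a next hop
    mpls --get 27

    # Inspect the next hop; Oif gives the outgoing vif index
    nh --get 44

    # Check the outgoing vif and its RX/TX counters
    vif --get 4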

Broadcast, Unknown Unicast, and Multicast Packet Path

The following procedure steps through debugging the packet path for broadcast, unknown unicast, and multicast (BUM) traffic in Contrail Networking. In this example, the virtual machines (VMs) listed are in the same subnet, 70.70.70.0/24. A consolidated sketch of the vRouter commands follows the procedure.

The ToR Service Node (TSN) hosts the active contrail-tor-agent and is responsible for:

  1. Acting as the receiver of all BUM traffic coming from the ToR switch.

  2. Acting as the DNS/DHCP responder for the bare-metal servers (BMS) connected to the ToR switch.

Contrail Networking releases earlier than 5.x used an Open vSwitch Database (OVSDB)-managed VXLAN environment.

Topology example for an OVSDB-managed VXLAN:

  • Top-of-Rack Switch 1 (ToR SW1) - 10.204.74.229 (lo0.0 = 1.1.1.229)

  • Top-of-Rack Switch 2 (ToR SW2) - 10.204.74.230 (lo0.0 = 1.1.1.230)

  • ToR Services Node 1 (TSN1) = 10.219.94.7

  • ToR Services Node 2 (TSN2) = 10.219.94.8

  • Controller1 = 10.219.94.4

  • Controller2 = 10.219.94.5

  • Controller3 = 10.219.94.6

  • Compute1 = 10.219.94.9

  • Compute2 = 10.219.94.18

  • Virtual Network (VN) = 70.70.70.0/24

  • Virtual Machine 1 (VM1) = 70.70.70.3 residing on Compute2

  • Virtual Machine 2 (VM2) = 70.70.70.5 residing on Compute1

  • Bare Metal Server 1 (BMS1) = 70.70.70.100

  • Bare Metal Server 2 (BMS2) = 70.70.70.101

  1. Run the set protocols ovsdb interfaces <interface> command on the ToR switch to configure the physical interfaces that you want the OVSDB protocol to manage.

    Example:

    The ToR interfaces to which the BMS are connected are marked as OVSDB interfaces.

  2. View how packets coming into these interfaces are forwarded by displaying the OVSDB MAC table for the ToR switch.

    Example:

    The broadcast route entry (ff:ff:ff:ff:ff:ff) indicates the next hop for a BUM packet coming into the ToR switch's OVSDB interface. In this case, VTEP address 10.219.94.7 is the next hop, which is TSN1. This changes based on which TSN has the active contrail-tor-agent for the ToR switch in question. With this, the BUM packet is forwarded to the TSN node in a VXLAN tunnel (the local VTEP source address is 1.1.1.229 and the remote VTEP (RVTEP) address is 10.219.94.7).

    The VXLAN encapsulated packet is sent with a VXLAN Network Identifier (VNI) that is predetermined by Contrail Networking when logical interfaces are created. For example, when ge-0/0/46 was configured as a logical port in Contrail Networking, the following configuration was committed on the ToR.

    Example:

    As the VXLAN encapsulated packet arrives on the TSN node, let’s examine how the vRouter handles this packet.

  3. Run vxlan --dump to dump the VXLAN table. The VXLAN table maps a network ID to a next hop.

    Example:

    In the example, next hop 13 is programmed for VNI 4.

  4. Run nh --get <nh id> to display the next-hop details and determine the associated virtual routing and forwarding (VRF) instance.

    Example:

  5. Run the following command to display all of the entries from the bridge table:

    Example:

    In the example bridge table, since we are tracing the BUM packet path, we need to examine the ff:ff:ff:ff:ff:ff route and the next hop programmed for it. In the example, it is 24. Note that a series of composite next hops is programmed.

  6. Run nh --get <nh id> to display the next-hop details.

    Example:

    The multicast tree in the example shows two destination IPs (DIPs). The DIP that the packet came from is ignored. Therefore, the packet is forwarded to DIP 10.219.94.18 only.

  7. Run vxlan --get <vnid> to examine what DIP 10.219.94.18 does with the incoming VXLAN encapsulated packet.

    Example:

  8. Run nh --get <nh id> to display the next-hop details.

    Example:

  9. Run the following command to display all of the entries from the bridge table:

    Example:

    In the example bridge table, since we are tracing the BUM packet path, we need to examine the ff:ff:ff:ff:ff:ff route and the next hop programmed for it. In the example, it is 50.

  10. Run nh --get <nh id> to display the next-hop details.

    Example:

    In the example, you only have to inspect DIP 10.219.94.9. The remaining endpoints are either local or the source where the BUM traffic came from. Now, let us examine what DIP 10.219.94.9 does with the incoming VXLAN encapsulated packet.

  11. Run vxlan --get <vnid> to examine what DIP 10.219.94.9 does with the incoming VXLAN encapsulated packet.

    Example:

  12. Run nh --get <nh id> to display the next-hop details.

    Example:

  13. Display the bridge table for the VRF by using the following command:

    Example:

  14. Run nh --get <nh id> to display the next-hop details.

    Example:

    From the above output, the only DIP that you have to further examine is 10.219.94.8. The remaining DIPs are either local or the source where the BUM traffic came from. Now, let’s examine what DIP 10.219.94.8 does with the incoming VXLAN encapsulated packet.

  15. Run vxlan --get <vnid> to examine what DIP 10.219.94.8 does with the incoming VXLAN encapsulated packet.

    Example:

  16. Run nh --get <nh id> to display the next-hop details.

    Example:

  17. Display the bridge table for the VRF by using the following command:

    Example:

  18. Run nh --get <nh id> to display the next-hop details.

    Example:

    Now, you have just one DIP, 1.1.1.230, which is ToR SW2 in the topology. This DIP should also be present in the multicast tree because that ToR switch also has an endpoint (BMS2) in the same VN (VNI 4) as the one we are tracing.
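
The vRouter-side commands repeated at each hop of this trace can be sketched as follows, run on whichever TSN or compute node is being examined (the ToR-side OVSDB MAC table in Steps 1 and 2 is inspected on the switch itself). VNI 4 and next-hop IDs 13 and 24 repeat values from the TSN1 example; the VRF ID 1 is a placeholder, and the bridge-table dump assumes the standard rt utility syntax:

    # Dump the VXLAN table, or map the incoming VNI directly to a next hop
    vxlan --dump
    vxlan --get 4

    # Inspect that next hop to find the VRF it points to
    nh --get 13

    # In that VRF, look up the broadcast route (ff:ff:ff:ff:ff:ff) in the
    # bridge table and note the composite next hop programmed for it
    rt --dump 1 --family bridge

    # Inspect the composite next hop; the tunnel DIPs that are neither local
    # nor the source of the packet are the next replication targets
    nh --get 24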

This completes all levels of forwarding and tracing: the BUM packet entering one ToR switch is replicated to all of the other intended receivers in the topology.

These multicast trees are programmed by the controllers that the TSN is connected to. If you want to inspect the controller's memory and see what eventually gets programmed on all of the TSN and compute nodes, enter the following introspect URL using your controller IP address: