Configuring Settings on Host OS

This chapter describes how to tune settings on the host OS to enable advanced features or to increase the scale of cRPD functionality.

Configuring ARP Scaling

The maximum number of ARP entries is controlled by the Linux host kernel. If you have a large number of neighbors, you might need to adjust the ARP entry limits on the Linux host. Use the sysctl command on the Linux host to adjust the ARP or NDP entry limits.

For example, to adjust the maximum ARP entries for IPv4:

root@host:~# sysctl -w net.ipv4.neigh.default.gc_thresh1=4096

root@host:~# sysctl -w net.ipv4.neigh.default.gc_thresh2=8192

root@host:~# sysctl -w net.ipv4.neigh.default.gc_thresh3=8192

For example, to adjust the maximum NDP entries for IPv6:

root@host:~# sysctl -w net.ipv6.neigh.default.gc_thresh1=4096

root@host:~# sysctl -w net.ipv6.neigh.default.gc_thresh2=8192

root@host:~# sysctl -w net.ipv6.neigh.default.gc_thresh3=8192
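Settings changed with sysctl -w do not persist across reboots. One way to make them persistent is a drop-in file under /etc/sysctl.d (a sketch assuming a standard sysctl.d layout; the file name 90-crpd-neigh.conf is arbitrary):

```shell
# Persist the ARP/NDP garbage-collection thresholds across reboots.
# The file name is a hypothetical example.
cat <<'EOF' > /etc/sysctl.d/90-crpd-neigh.conf
net.ipv4.neigh.default.gc_thresh1 = 4096
net.ipv4.neigh.default.gc_thresh2 = 8192
net.ipv4.neigh.default.gc_thresh3 = 8192
net.ipv6.neigh.default.gc_thresh1 = 4096
net.ipv6.neigh.default.gc_thresh2 = 8192
net.ipv6.neigh.default.gc_thresh3 = 8192
EOF

# Apply the file immediately without rebooting.
sysctl -p /etc/sysctl.d/90-crpd-neigh.conf
```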

Configuring OSPFv2/v3

To allow a larger number of OSPFv2/v3 adjacencies with cRPD, increase the IGMP membership limit:

root@host:~# sysctl -w net.ipv4.igmp_max_memberships=1000

Configuring MPLS

To configure MPLS:

  1. Load the MPLS modules in the container using modprobe or insmod:

    root@crpd-ubuntu3:~# modprobe mpls_iptunnel

    root@crpd-ubuntu3:~# modprobe mpls_router

    root@crpd-ubuntu3:~# modprobe ip_tunnel

  2. Verify that the MPLS modules are loaded in the host OS.
  3. After loading mpls_router on the host, run the following command to activate MPLS:

    root@host:~# sysctl -w net.mpls.platform_labels=1048575
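In addition to setting the platform label range, MPLS input must be enabled on each interface that should accept labeled packets. A sketch of both checks (eth1 is a hypothetical interface name):

```shell
# Confirm that the MPLS modules are present in the host kernel.
lsmod | grep mpls

# Enable MPLS packet input on a specific interface (eth1 is an example).
sysctl -w net.mpls.conf.eth1.input=1

# Confirm the platform label range configured earlier.
sysctl -n net.mpls.platform_labels
```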

Adding MPLS Routes

To add MPLS routes to the host using the iproute2 utility:

  1. Run the following command to add MPLS routes to the host OS.

    root@host:~# ip -f mpls route add 100 as 200/300 via inet 172.20.0.2 dev br-a3a2fe3ae8e3

  2. Run the following command to view the MPLS routes.

    root@host:~# ip -f mpls route show

Adding Routes with an MPLS Label

To add routes that encapsulate packets with an MPLS label, using the iproute2 utility:

  1. Run the following command to add an MPLS-encapsulated route to the host OS.

    root@host:~# ip route add 172.20.0.0/30 encap mpls 200 via inet 172.20.0.2 dev br-a3a2fe3ae8e3

  2. Run the following command to view the routes.

    root@host:~# ip route show

Creating a VRF device

To instantiate a VRF device and associate it with a table:

  1. Run the following command to create a VRF device.

    root@host:~# ip link add dev test1 type vrf table 11

  2. Run the following command to view the created VRFs.

    root@host:~# ip [-d] link show type vrf

  3. Run the following command to view the list of VRFs in the host OS.

    root@host:~# ip vrf show
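A newly created VRF device is administratively down by default and does not forward traffic until it is brought up. A minimal sketch, continuing the test1 example above:

```shell
# Bring the VRF device up so it can forward traffic.
ip link set dev test1 up

# Confirm the device state and its associated routing table (table 11 above).
ip -d link show dev test1
```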

Assigning a Network Interface to a VRF

Network interfaces are assigned to a VRF by assigning the netdevice to a VRF device. The connected and local routes are automatically moved to the table associated with the VRF device.

To assign a network interface to a VRF:

Run the following command to assign an interface; for example, to assign eth1 to the VRF test:

root@host:~# ip link set dev <interface-name> master <vrf-name>

root@host:~# ip link set dev eth1 vrf test

The vrf keyword is an alias for master when the master device is a VRF.

Viewing the Devices assigned to VRF

To view the devices assigned to a VRF, run the following command:

root@host:~# ip link show vrf <name>

root@host:~# ip link show vrf red

Viewing Neighbor Entries for a VRF

To list the neighbor entries associated with devices enslaved to a VRF device:

Run the ip neigh show command with the vrf option:

root@host:~# ip -6 neigh show vrf <NAME>

root@host:~# ip neigh show vrf red

root@host:~# ip -6 neigh show vrf red

Viewing Addresses for a VRF

To show addresses for interfaces associated with a VRF:

Run the ip addr show command with the vrf option:

root@host:~# ip addr show vrf <NAME>

root@host:~# ip addr show vrf red

Viewing Routes for a VRF

To view routes for a VRF:

  1. Run the following command to view the IPv6 route table associated with the VRF device:

    root@host:~# ip -6 route show vrf NAME

    root@host:~# ip -6 route show table ID

  2. Run the following command to do a route lookup for a VRF device:

    root@host:~# ip -6 route get vrf <NAME> <ADDRESS>

    root@host:~# ip route get 192.0.2.1 vrf red

    root@host:~# ip -6 route get oif <NAME> <ADDRESS>

    root@host:~# ip -6 route get 2001:db8::32 vrf red

  3. Run the following command to view the IPv4 routes in a VRF device:

    root@host:~# ip route list table <table-id>

Removing Network Interface from a VRF

Network interfaces are removed from a VRF by breaking their enslavement to the VRF device.

Run the following command to remove the network interface:

root@host:~# ip link set dev NAME nomaster

After removing the network interface, connected routes are moved to the default table and local entries are moved to the local table.

Hash Field Selection for ECMP Load Balancing on Linux

You can select the ECMP hash policy (fib_multipath_hash_policy) for both forwarded and locally generated traffic (IPv4/IPv6).

IPv4 Traffic

  1. By default, the Linux kernel uses the Layer 3 hash policy to load balance IPv4 traffic. Layer 3 hashing uses the following information:
    • Source IP address
    • Destination IP address

    root@host:~# sysctl -n net.ipv4.fib_multipath_hash_policy
    0

  2. Run the following command to load balance IPv4 traffic using the Layer 4 hash policy. Layer 4 hashing load balances traffic based on the following information:
    • Source IP address
    • Destination IP address
    • Source port number
    • Destination port number
    • Protocol

    root@host:~# sysctl -w net.ipv4.fib_multipath_hash_policy=1

    root@host:~# sysctl -n net.ipv4.fib_multipath_hash_policy
    1

  3. Run the following command to use Layer 3 hashing on the inner packet header (IPv4/IPv6 over IPv4 GRE):

    root@host:~# sysctl -w net.ipv4.fib_multipath_hash_policy=2

    root@host:~# sysctl -n net.ipv4.fib_multipath_hash_policy
    2

    For packets without an inner header, the policy falls back to Layer 3 hashing of the outer packet, as in the default IPv4 behavior.

IPv6 Traffic

  4. By default, the Linux kernel uses the Layer 3 hash policy to load balance IPv6 traffic. The Layer 3 hash policy load balances traffic based on the following information:
    • Source IP address
    • Destination IP address
    • Flow label
    • Next header (Protocol)

    root@host:~# sysctl -n net.ipv6.fib_multipath_hash_policy
    0

  5. You can use the Layer 4 hash policy to load balance the IPv6 traffic. The Layer 4 hash policy load balances traffic based on the following information:
    • Source IP address
    • Destination IP address
    • Source port number
    • Destination port number
    • Next header (Protocol)

    root@host:~# sysctl -w net.ipv6.fib_multipath_hash_policy=1

    root@host:~# sysctl -n net.ipv6.fib_multipath_hash_policy
    1

  6. Run the following command to use Layer 3 hashing on the inner packet header (IPv4/IPv6 over IPv6 GRE):

    root@host:~# sysctl -w net.ipv6.fib_multipath_hash_policy=2

    root@host:~# sysctl -n net.ipv6.fib_multipath_hash_policy
    2

MPLS

  7. The Linux kernel selects the next hop of a multipath route using the following parameters:
    • Label stack, up to the limit of MAX_MP_SELECT_LABELS (4)
    • Source IP address
    • Destination IP address
    • Protocol of the inner IPv4/IPv6 header

Neighbor Detection

  8. Run the following command to make the kernel consider the liveness (failed, incomplete, or unresolved) of neighbor entries when selecting a next hop, so that packets are not forwarded to unreachable next hops:

    root@host:~# sysctl -w net.ipv4.fib_multipath_use_neigh=1

    By default, this setting is disabled (0), and next hops are selected without regard to neighbor state:

    root@host:~# sysctl -n net.ipv4.fib_multipath_use_neigh
    0

wECMP using BGP on Linux

Unequal-cost load balancing is a way to distribute traffic unequally among the different paths that make up a multipath next hop when those paths have different bandwidth capabilities. BGP achieves this by tagging each route or path with the bandwidth of its link, using the link bandwidth extended community. RPD uses the bandwidth information of each path to program the multipath next hops with appropriate weights. A weighted next hop allows the Linux kernel to load balance traffic asymmetrically.

BGP forms a multipath next hop and uses the bandwidth values of the individual paths to determine the proportion of traffic that each of the next hops forming the ECMP next hop should receive. The bandwidth values specified in the link bandwidth community need not be the absolute bandwidth of the interface; they only need to reflect the bandwidth of one path relative to another. For details, see Understanding How to Define BGP Communities and Extended Communities and How BGP Communities and Extended Communities Are Evaluated in Routing Policy Match Conditions.

Consider a network in which R1 receives equal-cost paths from R2 and R3 to a destination R4. If you want to send 90% of the load-balanced traffic over the path R1-R2 and the remaining 10% over the path R1-R3 using wECMP, tag the routes received from the two BGP peers with the link bandwidth community by configuring policy options.

  1. Configure the policy statement.

    root@host> show configuration policy-options

  2. Verify that RPD uses the bandwidth values to unequally balance traffic across the multipath next hops.

    root@host> show route 100.100.100.100 detail

  3. The Linux kernel supports unequal load balancing by assigning a weight to each next hop.

    root@host:/# ip route show 100.100.100.100

    The weights are programmed into Linux as fractions of the integer 255 (the maximum value of an unsigned char). Each next hop in the ECMP next hop is given a weight proportional to its share of the bandwidth.
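The weight calculation can be sketched with shell arithmetic, using the 90/10 split from the example above (the bandwidth values are illustrative, not interface speeds):

```shell
#!/bin/sh
# Relative bandwidths of the two paths (from the 90%/10% example).
BW1=90
BW2=10
TOTAL=$((BW1 + BW2))

# Scale each path's share onto the kernel's 0-255 weight range.
# Integer division truncates, matching how the weights are programmed.
W1=$((255 * BW1 / TOTAL))
W2=$((255 * BW2 / TOTAL))

echo "R1-R2 weight: $W1"   # 229
echo "R1-R3 weight: $W2"   # 25
```

A next hop with weight 229 then receives roughly 229/255, or about 90%, of the hashed flows.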