How to Configure BGP Labeled Unicast Egress Peer Traffic Engineering on Ingress Using cRPD

 

This example shows how to configure egress peer traffic engineering using BGP labeled unicast. Egress peer traffic engineering allows a central controller to instruct an ingress router in a domain to direct traffic toward a specific egress router and a specific external interface to reach a destination outside the network.

In this example, the ingress functionality runs on a Linux device that hosts the Juniper Networks containerized routing protocol process (cRPD).

Requirements

This example uses the following hardware and software components. See Figure 1 for reference.

  • Linux VM (H0) that simulates the data center server and hosts the cRPD Docker container

  • Linux server running:

    • Linux OS Ubuntu 18.04

    • Linux Kernel 4.15

    • Docker Engine 18.09.1

  • The other nodes in the topology are vMX routers running Junos OS Release 19.2R2.2:

    • R0 is a vMX that simulates the ASBR PR1.

    • R1 is a separate vMX that simulates the rest of the routers (ToR, RR, P, Peer, and U1) as logical systems.

Overview

Topology

Figure 1 shows the topology that we are using in this example.

Figure 1: Topology of BGP Labeled Unicast Egress Peer Traffic Engineering on Ingress Using cRPD

Configuration

This section shows how to enable egress traffic engineering on the ASBR (R0) and demonstrates the ingress functionality at the cRPD. The configurations of the other routers are generic and are omitted to focus on the details relevant to this example. This example is based on the following example, which you can use as a reference to configure the other routers in the network:

Example: Configuring Egress Peer Traffic Engineering Using BGP Labeled Unicast

Configure R0 (ASBR) to Facilitate Egress Traffic Engineering in the Network

Step-by-Step Procedure

  1. Configure a GRE tunnel on vMX R0 toward crpd01.
  2. Enable egress traffic engineering toward the external peers.
  3. Create policies that export the ARP routes created by egress traffic engineering, and apply them to the IBGP core group in the labeled unicast family.
  4. Re-advertise Internet routes from the external peers with the nexthop unchanged. A configuration sketch covering these steps follows this list.
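
The following set commands are a minimal sketch of these four steps on R0, not a complete configuration. It assumes that R0's loopback address is 10.19.19.19 (the GRE remote address configured on H0 later in this example); the route reflector address, group names, and policy name (shown here as <RR-loopback>, peers, core-lu, and export-epe-arp) are placeholders that you replace to match your topology, and the external peer neighbor and peer-as statements are omitted. The from protocol arp match condition exports the ARP routes that the egress-te statement creates, as in the referenced BGP labeled unicast example. Because no next-hop self policy is applied toward the core, the Internet routes learned from the external peers are re-advertised into IBGP with their nexthops unchanged (step 4).

    host@r0# set chassis fpc 0 pic 0 tunnel-services bandwidth 1g
    host@r0# set interfaces gr-0/0/0 unit 0 tunnel source 10.19.19.19
    host@r0# set interfaces gr-0/0/0 unit 0 tunnel destination 10.20.20.20
    host@r0# set interfaces gr-0/0/0 unit 0 family inet
    host@r0# set interfaces gr-0/0/0 unit 0 family mpls
    host@r0# set protocols mpls interface gr-0/0/0.0
    host@r0# set protocols bgp group peers type external
    host@r0# set protocols bgp group peers family inet unicast
    host@r0# set protocols bgp group peers egress-te
    host@r0# set policy-options policy-statement export-epe-arp term 1 from protocol arp
    host@r0# set policy-options policy-statement export-epe-arp term 1 then accept
    host@r0# set protocols bgp group core-lu type internal
    host@r0# set protocols bgp group core-lu local-address 10.19.19.19
    host@r0# set protocols bgp group core-lu family inet labeled-unicast rib inet.3
    host@r0# set protocols bgp group core-lu export export-epe-arp
    host@r0# set protocols bgp group core-lu neighbor <RR-loopback>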

Configure the cRPD (Ingress Node) to Control EPE Decisions in the Network

Step-by-Step Procedure

  1. Bring up the cRPD on the Linux VM in the host namespace.
    host@h0:~# docker images
    host@h0:~# docker volume create crpd01-config
    host@h0:~# docker volume create crpd01-varlog
    host@h0:~# docker run --rm --detach --name crpd01 -h crpd01 --privileged --net=host -v crpd01-config:/config -v crpd01-varlog:/var/log -it crpd:19.2R2.2
    host@h0:~# docker ps
  2. Configure an interface IP address for the H0-R1 link from the Linux shell.
    host@h0:~# ip addr add 10.20.21.1/30 dev ens3f1
    host@h0:~# ifconfig ens3f1 up
  3. Create loopback interfaces and IP addresses.
    host@h0:~# ip link add name lo1 type dummy
    host@h0:~# ip addr add 10.20.20.20/32 dev lo1
    host@h0:~# ifconfig lo1 up
    host@h0:~# ip link add name lo2 type dummy
    host@h0:~# ip addr add 172.16.88.1/32 dev lo2
    host@h0:~# ifconfig lo2 up
    host@h0:~# ip addr add 2001:db8::172:16:88:1/128 dev lo2
    host@h0:~# ifconfig lo2 up
  4. Create the GRE tunnel.
    host@h0:~# ip tunnel add gre1 mode gre remote 10.19.19.19 local 10.20.20.20 ttl 255
    host@h0:~# ip link set gre1 up
  5. Enter the cRPD Junos CLI and configure the routing protocols. This configuration brings up the OSPF and BGP sessions at crpd01, and the routes installed by the cRPD bring up the GRE tunnel in Linux. A configuration sketch follows.
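    The following is a minimal sketch of this step, not the complete crpd01 configuration. It assumes the core IGP is OSPF on ens3f1 with the lo1 loopback advertised as a passive interface, that the IBGP session to the route reflector (shown as the placeholder <RR-loopback>) carries the labeled unicast family for the egress-engineered routes plus the IPv4 and IPv6 unicast families for the Internet routes, and that the group name core is illustrative. Route resolution over the GRE tunnel and any required Linux kernel MPLS modules (for example, mpls_router and mpls_iptunnel) are not shown here.
    host@h0:~# docker exec -it crpd01 cli
    host@crpd01> configure
    host@crpd01# set routing-options router-id 10.20.20.20
    host@crpd01# set protocols ospf area 0.0.0.0 interface ens3f1
    host@crpd01# set protocols ospf area 0.0.0.0 interface lo1 passive
    host@crpd01# set protocols bgp group core type internal
    host@crpd01# set protocols bgp group core local-address 10.20.20.20
    host@crpd01# set protocols bgp group core family inet unicast
    host@crpd01# set protocols bgp group core family inet6 unicast
    host@crpd01# set protocols bgp group core family inet labeled-unicast rib inet.3
    host@crpd01# set protocols bgp group core neighbor <RR-loopback>
    host@crpd01# commit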

Verification

Step-by-Step Procedure

  1. On crpd01, verify that the routing protocol sessions are Up.
    host@crpd01> show ospf neighbor
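    The BGP sessions configured in the previous procedure can be checked the same way with the standard show bgp summary command; the session to the route reflector should be in the Established state.
    host@crpd01> show bgp summary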
  2. On crpd01, verify that IPv4 routes for U1 are installed. You should see the BGP routes with all available nexthops: 192.168.0.1, 192.168.1.1, 192.168.2.1, and 192.168.3.1.
    host@crpd01> show route 172.16.77.1/32
  3. On crpd01, verify that IPv6 routes for U1 are installed.
    host@crpd01> show route 2001:db8::172:16:77:1
  4. On crpd01, verify nexthop resolution for IPv4.
    host@crpd01> show route 172.16.77.1 extensive
  5. On crpd01, verify nexthop resolution for IPv6.
    host@crpd01> show route 2001:db8::172:16:77:1 extensive
  6. On H0, verify that the IPv4 routes are installed in the Linux FIB with MPLS encapsulation.
    host@h0:~# ip route | grep 172.16.77.1
  7. On H0, verify that the IPv6 routes are likewise installed in the Linux FIB with MPLS encapsulation.
    host@h0:~# ip -6 route | grep 172:16:77:1
  8. Run ping from H0 to U1 for IPv4 and IPv6.
    host@h0:~# ping 172.16.77.1 -I 172.16.88.1 -f
    host@h0:~# ping 2001:db8::172:16:77:1 -I 2001:db8::172:16:88:1 -f
  9. Keep the ping running and monitor interface statistics at R3. Verify that traffic is exiting toward Peer1.
    host@10.53.33.247> monitor interface ge-0/0/0.0
    host@10.53.33.247> monitor interface ge-0/0/0.2
  10. Add the following configuration to install a static route at crpd01 for the U1/32 destination (172.16.77.1/32), with the nexthop of Peer2 and the resolve option. This configuration simulates a route installed through a controller API to move the traffic to Peer2.
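    A minimal sketch of such a route, assuming 192.168.1.1 is Peer2's address (substitute the actual Peer2 nexthop from your topology):
    host@crpd01> configure
    host@crpd01# set routing-options static route 172.16.77.1/32 next-hop 192.168.1.1 resolve
    host@crpd01# commit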
  11. On H0, observe that the routes in the Linux FIB change to encapsulate toward the new nexthop.
    host@h0:~# ip route | grep 172.16.77.1
  12. Run ping from H0 to U1. Traffic is now steered toward Peer2, as directed by the controller-installed static route.
    host@10.53.33.247> monitor interface ge-0/0/0.0
    host@10.53.33.247> monitor interface ge-0/0/0.2