Example: Configuring the Infrastructure as a Service Solution

This example describes how to build, configure, and verify an IaaS network containing a BGP-based IP fabric underlay, supported by a Contrail overlay, a Layer 3 gateway, OVSDB, Contrail high availability, and Contrail vRouter components.

Requirements

Table 1 lists the hardware and software components used in this example.

Table 1: Solution Hardware and Software Requirements

Device           Hardware                                     Software
Spine Devices    MX960                                        Junos OS Release 14.2R4.9
Leaf Devices     QFX5100-48S, QFX5100-48T, and QFX5100-24Q    Junos OS Release 14.1X53-D30.3
Servers          IBM Flex and IBMx3750                        VMware ESXi 5.1 and VMware vCenter 5.1
                 Super Micro                                  ubuntu-14.04.2-server-amd64.iso
Contrail Cloud   --                                           Version 2.22.1-5

Overview and Topology

The topology used in this example consists of a series of QFX5100 switches and two MX960 routers, as shown in Figure 1.

Figure 1: Infrastructure as a Service Solution Topology

In this example, the spine layer consists of two MX960 routers. The leaf layer consists of six leaf devices built from a combination of QFX5100-48S, QFX5100-48T, and QFX5100-24Q switches. Leaf 0 and Leaf 1 are Virtual Chassis containing two switches apiece; the remaining leaf devices (Leaf 2 through Leaf 5) are standalone switches.

Three control nodes (CNs) and three TOR services nodes (TSNs) are attached to Leaf 5. The three control nodes (Host 1-CN1, Host 2-CN2, and Host 3-CN3) run on a Super Micro server, as does Host 6-TSN3. Host 4-TSN1 and Host 5-TSN2 run on an IBM Flex Blade server. Hosts 7 and 8 are virtual machine compute nodes hosted by a pair of Super Micro servers that are attached to Leaf 0 and Leaf 1, respectively. Ten hosts connect through Leaf 0, and another ten hosts connect through Leaf 4. You can manage all devices using the 10.94.0.0 network.

Table 2 lists the IP addressing used in this example, Table 3 displays the aggregated Ethernet interfaces used between spine and leaf devices, Table 4 shows the IP addresses for the Contrail nodes, and Table 5 lists the loopback addresses and autonomous system numbers for the spine and leaf devices.

Table 2: IPv4 Addressing

Network                              IPv4 Network Prefix
Spine to leaf point-to-point links   192.168.0.0/24
Contrail nodes                       172.16.15.0/24
Loopback IPs (for all devices)       10.20.20.0/24
vRouter + Hypervisor 1               10.35.35.0/24
vRouter + Hypervisor 2               10.36.36.0/24

Table 3: Aggregated Ethernet Interfaces between Spine and Leaf Devices

Leaf Device   Spine 1   Spine 2
Leaf 0        ae10      ae0
Leaf 1        ae11      ae1
Leaf 2        ae12      ae2
Leaf 3        ae13      ae3
Leaf 4        ae14      ae4
Leaf 5        ae15      ae5

Table 4: Contrail Node IP Addresses

Contrail Node                     Management Address                  IP Address
Host1-CN1 (control node)          10.94.191.150                       172.16.15.3
Host2-CN2 (control node)          10.94.191.151                       172.16.15.4
Host3-CN3 (control node)          10.94.191.152                       172.16.15.5
Virtual IP Address                10.94.191.153                       172.16.15.100 (internal virtual IP address)
Host4-TSN1 (TOR services node)    10.94.63.102 (VM on 10.94.47.150)   172.16.15.6
Host5-TSN2 (TOR services node)    10.94.63.103 (VM on 10.94.47.150)   172.16.15.7
Host6-TSN3 (TOR services node)    10.94.47.119                        172.16.15.8
Host 7 (compute node)             10.94.191.156                       10.35.35.2
Host 8 (compute node)             10.94.191.157                       10.36.36.2

Table 5: Loopback Addresses and ASNs for Spine Devices, Leaf Devices, and Contrail Controllers

Device                                                   Loopback Address   ASN
Leaf 0                                                   10.20.20.3         65200
Leaf 1                                                   10.20.20.4         65201
Leaf 2                                                   10.20.20.5         65202
Leaf 3                                                   10.20.20.6         65203
Leaf 4                                                   10.20.20.7         65204
Leaf 5                                                   10.20.20.8         65205
Spine 1                                                  10.20.20.10        64512
Spine 2                                                  10.20.20.2         64512
Contrail Controllers (Host1-CN1, Host2-CN2, Host3-CN3)   --                 64512

Configuring the Underlay for the IaaS Solution

This section explains how to build out the leaf and spine layers of an IP fabric as the underlay for the IaaS solution. It includes procedures for configuring the leaf devices and the spine devices.

Configuring Leaf Devices for the Underlay

CLI Quick Configuration

To quickly configure the leaf devices, enter the following representative configuration statements on each device:

Note

The configuration shown here applies to Virtual Chassis Leaf 1.

[edit]
set chassis aggregated-devices ethernet device-count 18
set interfaces xe-0/0/23:2 ether-options 802.3ad ae1
set interfaces xe-0/0/23:3 ether-options 802.3ad ae1
set interfaces ae1 mtu 9192
set interfaces ae1 aggregated-ether-options lacp active
set interfaces ae1 unit 0 family inet address 192.168.0.15/31
set interfaces xe-1/0/46 ether-options 802.3ad ae11
set interfaces xe-1/0/47 ether-options 802.3ad ae11
set interfaces ae11 mtu 9192
set interfaces ae11 aggregated-ether-options lacp active
set interfaces ae11 unit 0 family inet address 192.168.0.27/31
set interfaces lo0 unit 0 family inet address 10.20.20.4/32
set interfaces ge-1/0/0 unit 0 family inet address 10.36.36.1/24
set routing-options autonomous-system 65201
set routing-options forwarding-table export PFE-LB
set protocols ospf area 0.0.0.0 interface lo0.0 passive
set protocols ospf area 0.0.0.0 interface ae1.0
set protocols ospf area 0.0.0.0 interface ae11.0
set protocols bgp log-updown
set protocols bgp import bgp-clos-in
set protocols bgp export bgp-clos-out
set protocols bgp group CLOS type external
set protocols bgp group CLOS preference 8
set protocols bgp group CLOS mtu-discovery
set protocols bgp group CLOS bfd-liveness-detection minimum-interval 350
set protocols bgp group CLOS bfd-liveness-detection multiplier 3
set protocols bgp group CLOS bfd-liveness-detection session-mode single-hop
set protocols bgp group CLOS multipath multiple-as
set protocols bgp group CLOS neighbor 192.168.0.14 peer-as 64512
set protocols bgp group CLOS neighbor 192.168.0.26 peer-as 64512
set policy-options policy-statement PFE-LB then load-balance per-packet
set policy-options policy-statement bgp-clos-in term loopbacks from route-filter 10.20.20.0/24 orlonger
set policy-options policy-statement bgp-clos-in term loopbacks then accept
set policy-options policy-statement bgp-clos-out term loopback from protocol direct
set policy-options policy-statement bgp-clos-out term loopback from route-filter 10.20.20.0/24 orlonger
set policy-options policy-statement bgp-clos-out term loopback then next-hop self
set policy-options policy-statement bgp-clos-out term loopback then accept
set policy-options policy-statement bgp-clos-out term comp-2 from protocol direct
set policy-options policy-statement bgp-clos-out term comp-2 from route-filter 10.36.36.0/24 orlonger
set policy-options policy-statement bgp-clos-out term comp-2 then next-hop self
set policy-options policy-statement bgp-clos-out term comp-2 then accept
set groups global chassis redundancy graceful-switchover
set groups global routing-options nonstop-routing
set groups global protocols layer2-control nonstop-bridging
set apply-groups global

Step-by-Step Procedure

To configure the leaf devices:

  1. Configure aggregated Ethernet interfaces to reach the spine devices and add xe- interface member links in each bundle for redundancy:
    [edit]
    user@leaf-1# set chassis aggregated-devices ethernet device-count 18
    user@leaf-1# set interfaces xe-0/0/23:2 ether-options 802.3ad ae1
    user@leaf-1# set interfaces xe-0/0/23:3 ether-options 802.3ad ae1
    user@leaf-1# set interfaces ae1 mtu 9192
    user@leaf-1# set interfaces ae1 aggregated-ether-options lacp active
    user@leaf-1# set interfaces ae1 unit 0 family inet address 192.168.0.15/31 ## To Spine 2
    user@leaf-1# set interfaces xe-1/0/46 ether-options 802.3ad ae11
    user@leaf-1# set interfaces xe-1/0/47 ether-options 802.3ad ae11
    user@leaf-1# set interfaces ae11 mtu 9192
    user@leaf-1# set interfaces ae11 aggregated-ether-options lacp active
    user@leaf-1# set interfaces ae11 unit 0 family inet address 192.168.0.27/31 ## To Spine 1
  2. Configure a loopback address and a physical interface to reach the compute node, vRouter, and hypervisor:
    [edit]
    user@leaf-1# set interfaces lo0 unit 0 family inet address 10.20.20.4/32
    user@leaf-1# set interfaces ge-1/0/0 unit 0 family inet address 10.36.36.1/24 ## Compute node (vRouter + hypervisor)
  3. Configure OSPF to enable IGP connectivity to the spine devices:
    [edit]
    user@leaf-1# set protocols ospf area 0.0.0.0 interface lo0.0 passive
    user@leaf-1# set protocols ospf area 0.0.0.0 interface ae1.0
    user@leaf-1# set protocols ospf area 0.0.0.0 interface ae11.0
  4. Configure EBGP sessions with each spine device to create the Clos-based IP fabric:
    [edit]
    user@leaf-1# set routing-options autonomous-system 65201
    user@leaf-1# set protocols bgp log-updown
    user@leaf-1# set protocols bgp import bgp-clos-in
    user@leaf-1# set protocols bgp export bgp-clos-out
    user@leaf-1# set protocols bgp group CLOS type external
    user@leaf-1# set protocols bgp group CLOS preference 8 ## Lower the default preference to prefer BGP routes over OSPF routes
    user@leaf-1# set protocols bgp group CLOS mtu-discovery
    user@leaf-1# set protocols bgp group CLOS bfd-liveness-detection minimum-interval 350
    user@leaf-1# set protocols bgp group CLOS bfd-liveness-detection multiplier 3
    user@leaf-1# set protocols bgp group CLOS bfd-liveness-detection session-mode single-hop
    user@leaf-1# set protocols bgp group CLOS multipath multiple-as
    user@leaf-1# set protocols bgp group CLOS neighbor 192.168.0.14 peer-as 64512 ## Connect to Spine 2
    user@leaf-1# set protocols bgp group CLOS neighbor 192.168.0.26 peer-as 64512 ## Connect to Spine 1
  5. Configure routing policy elements to enable equal-cost multipath (ECMP) load balancing and reachability to the loopback interfaces of the spine devices:
    [edit]
    user@leaf-1# set routing-options forwarding-table export PFE-LB
    user@leaf-1# set policy-options policy-statement PFE-LB then load-balance per-packet ## Configure ECMP
    user@leaf-1# set policy-options policy-statement bgp-clos-in term loopbacks from route-filter 10.20.20.0/24 orlonger
    user@leaf-1# set policy-options policy-statement bgp-clos-in term loopbacks then accept
    user@leaf-1# set policy-options policy-statement bgp-clos-out term loopback from protocol direct
    user@leaf-1# set policy-options policy-statement bgp-clos-out term loopback from route-filter 10.20.20.0/24 orlonger
    user@leaf-1# set policy-options policy-statement bgp-clos-out term loopback then next-hop self
    user@leaf-1# set policy-options policy-statement bgp-clos-out term loopback then accept
    user@leaf-1# set policy-options policy-statement bgp-clos-out term comp-2 from protocol direct
    user@leaf-1# set policy-options policy-statement bgp-clos-out term comp-2 from route-filter 10.36.36.0/24 orlonger ## Local compute node server (vRouter + hypervisor)
    user@leaf-1# set policy-options policy-statement bgp-clos-out term comp-2 then next-hop self
    user@leaf-1# set policy-options policy-statement bgp-clos-out term comp-2 then accept
  6. Configure graceful Routing Engine switchover, nonstop routing, and nonstop bridging globally:
    [edit]
    user@leaf-1# set groups global chassis redundancy graceful-switchover
    user@leaf-1# set groups global routing-options nonstop-routing
    user@leaf-1# set groups global protocols layer2-control nonstop-bridging
    user@leaf-1# set apply-groups global

Configuring Spine Devices for the Underlay

CLI Quick Configuration

To quickly configure the spine devices, enter the following representative configuration statements on each device:

Note

The configuration shown here applies to Spine 2.

[edit]
set chassis aggregated-devices ethernet device-count 30
set interfaces xe-0/0/0 gigether-options 802.3ad ae0
set interfaces xe-1/0/0 gigether-options 802.3ad ae0
set interfaces ae0 mtu 9192
set interfaces ae0 aggregated-ether-options lacp active
set interfaces ae0 unit 0 family inet address 192.168.0.12/31
set interfaces xe-0/0/2 gigether-options 802.3ad ae1
set interfaces xe-1/0/2 gigether-options 802.3ad ae1
set interfaces ae1 mtu 9192
set interfaces ae1 aggregated-ether-options lacp active
set interfaces ae1 unit 0 family inet address 192.168.0.14/31
set interfaces xe-0/0/4 gigether-options 802.3ad ae2
set interfaces xe-1/0/4 gigether-options 802.3ad ae2
set interfaces ae2 mtu 9192
set interfaces ae2 aggregated-ether-options lacp active
set interfaces ae2 unit 0 family inet address 192.168.0.16/31
set interfaces xe-0/0/6 gigether-options 802.3ad ae3
set interfaces xe-1/0/6 gigether-options 802.3ad ae3
set interfaces ae3 mtu 9192
set interfaces ae3 aggregated-ether-options lacp active
set interfaces ae3 unit 0 family inet address 192.168.0.18/31
set interfaces xe-0/1/0 gigether-options 802.3ad ae4
set interfaces xe-1/1/0 gigether-options 802.3ad ae4
set interfaces ae4 mtu 9192
set interfaces ae4 aggregated-ether-options lacp active
set interfaces ae4 unit 0 family inet address 192.168.0.20/31
set interfaces xe-0/1/2 gigether-options 802.3ad ae5
set interfaces xe-1/1/2 gigether-options 802.3ad ae5
set interfaces ae5 mtu 9192
set interfaces ae5 aggregated-ether-options lacp active
set interfaces ae5 unit 0 family inet address 192.168.0.22/31
set interfaces lo0 unit 0 family inet address 10.20.20.2/32
set routing-options autonomous-system 64512
set routing-options forwarding-table export PFE-LB
set protocols ospf area 0.0.0.0 interface lo0.0 passive
set protocols ospf area 0.0.0.0 interface ae0.0
set protocols ospf area 0.0.0.0 interface ae1.0
set protocols ospf area 0.0.0.0 interface ae2.0
set protocols ospf area 0.0.0.0 interface ae3.0
set protocols ospf area 0.0.0.0 interface ae4.0
set protocols ospf area 0.0.0.0 interface ae5.0
set protocols bgp log-updown
set protocols bgp import bgp-clos-in
set protocols bgp export bgp-clos-out
set protocols bgp graceful-restart
set protocols bgp group CLOS type external
set protocols bgp group CLOS multihop
set protocols bgp group CLOS preference 8
set protocols bgp group CLOS mtu-discovery
set protocols bgp group CLOS bfd-liveness-detection minimum-interval 350
set protocols bgp group CLOS bfd-liveness-detection multiplier 3
set protocols bgp group CLOS bfd-liveness-detection session-mode single-hop
set protocols bgp group CLOS multipath multiple-as
set protocols bgp group CLOS neighbor 192.168.0.13 peer-as 65200
set protocols bgp group CLOS neighbor 192.168.0.15 peer-as 65201
set protocols bgp group CLOS neighbor 192.168.0.17 peer-as 65202
set protocols bgp group CLOS neighbor 192.168.0.19 peer-as 65203
set protocols bgp group CLOS neighbor 192.168.0.21 peer-as 65204
set protocols bgp group CLOS neighbor 192.168.0.23 peer-as 65205
set protocols bgp group contrail type internal
set protocols bgp group contrail multihop
set protocols bgp group contrail local-address 10.20.20.2
set protocols bgp group contrail family inet-vpn any
set protocols bgp group contrail family evpn signaling
set protocols bgp group contrail family route-target
set protocols bgp group contrail neighbor 10.20.20.10 ## To Spine 1
set protocols bgp group contrail neighbor 172.16.15.3
set protocols bgp group contrail neighbor 172.16.15.4
set protocols bgp group contrail neighbor 172.16.15.5
set policy-options policy-statement PFE-LB then load-balance per-packet
set policy-options policy-statement bgp-clos-in term loopbacks from route-filter 10.20.20.0/24 orlonger
set policy-options policy-statement bgp-clos-in term loopbacks then accept
set policy-options policy-statement bgp-clos-out term loopback from protocol direct
set policy-options policy-statement bgp-clos-out term loopback from route-filter 10.20.20.0/24 orlonger
set policy-options policy-statement bgp-clos-out term loopback then next-hop self
set policy-options policy-statement bgp-clos-out term loopback then accept
set policy-options policy-statement bgp-clos-out term direct_route from protocol direct
set policy-options policy-statement bgp-clos-out term direct_route from route-filter 192.168.0.0/24 orlonger
set policy-options policy-statement bgp-clos-out term direct_route then next-hop self
set policy-options policy-statement bgp-clos-out term direct_route then accept
set chassis redundancy graceful-switchover
set protocols bgp graceful-restart
set routing-options nonstop-routing

Step-by-Step Procedure

To configure the spine devices:

  1. Configure aggregated Ethernet interfaces to connect to the leaf devices and add xe- interface member links in each bundle for redundancy:
    [edit]
    user@spine-2# set chassis aggregated-devices ethernet device-count 30
    user@spine-2# set interfaces xe-0/0/0 gigether-options 802.3ad ae0
    user@spine-2# set interfaces xe-1/0/0 gigether-options 802.3ad ae0
    user@spine-2# set interfaces ae0 mtu 9192
    user@spine-2# set interfaces ae0 aggregated-ether-options lacp active
    user@spine-2# set interfaces ae0 unit 0 family inet address 192.168.0.12/31
    user@spine-2# set interfaces xe-0/0/2 gigether-options 802.3ad ae1
    user@spine-2# set interfaces xe-1/0/2 gigether-options 802.3ad ae1
    user@spine-2# set interfaces ae1 mtu 9192
    user@spine-2# set interfaces ae1 aggregated-ether-options lacp active
    user@spine-2# set interfaces ae1 unit 0 family inet address 192.168.0.14/31
    user@spine-2# set interfaces xe-0/0/4 gigether-options 802.3ad ae2
    user@spine-2# set interfaces xe-1/0/4 gigether-options 802.3ad ae2
    user@spine-2# set interfaces ae2 mtu 9192
    user@spine-2# set interfaces ae2 aggregated-ether-options lacp active
    user@spine-2# set interfaces ae2 unit 0 family inet address 192.168.0.16/31
    user@spine-2# set interfaces xe-0/0/6 gigether-options 802.3ad ae3
    user@spine-2# set interfaces xe-1/0/6 gigether-options 802.3ad ae3
    user@spine-2# set interfaces ae3 mtu 9192
    user@spine-2# set interfaces ae3 aggregated-ether-options lacp active
    user@spine-2# set interfaces ae3 unit 0 family inet address 192.168.0.18/31
    user@spine-2# set interfaces xe-0/1/0 gigether-options 802.3ad ae4
    user@spine-2# set interfaces xe-1/1/0 gigether-options 802.3ad ae4
    user@spine-2# set interfaces ae4 mtu 9192
    user@spine-2# set interfaces ae4 aggregated-ether-options lacp active
    user@spine-2# set interfaces ae4 unit 0 family inet address 192.168.0.20/31
    user@spine-2# set interfaces xe-0/1/2 gigether-options 802.3ad ae5
    user@spine-2# set interfaces xe-1/1/2 gigether-options 802.3ad ae5
    user@spine-2# set interfaces ae5 mtu 9192
    user@spine-2# set interfaces ae5 aggregated-ether-options lacp active
    user@spine-2# set interfaces ae5 unit 0 family inet address 192.168.0.22/31
    user@spine-2# set interfaces lo0 unit 0 family inet address 10.20.20.2/32
  2. Configure OSPF to enable IGP connectivity to the leaf devices:
    [edit]
    user@spine-2# set protocols ospf area 0.0.0.0 interface lo0.0 passive
    user@spine-2# set protocols ospf area 0.0.0.0 interface ae0.0
    user@spine-2# set protocols ospf area 0.0.0.0 interface ae1.0
    user@spine-2# set protocols ospf area 0.0.0.0 interface ae2.0
    user@spine-2# set protocols ospf area 0.0.0.0 interface ae3.0
    user@spine-2# set protocols ospf area 0.0.0.0 interface ae4.0
    user@spine-2# set protocols ospf area 0.0.0.0 interface ae5.0
  3. Configure two BGP groups. The EBGP group peers with each leaf device to create the Clos-based IP fabric underlay, and the IBGP group peers with Spine 1 and the Contrail control nodes to create the overlay network:
    [edit]
    user@spine-2# set routing-options autonomous-system 64512
    user@spine-2# set protocols bgp log-updown
    user@spine-2# set protocols bgp import bgp-clos-in
    user@spine-2# set protocols bgp export bgp-clos-out
    user@spine-2# set protocols bgp graceful-restart
    user@spine-2# set protocols bgp group CLOS type external
    user@spine-2# set protocols bgp group CLOS multihop
    user@spine-2# set protocols bgp group CLOS preference 8
    user@spine-2# set protocols bgp group CLOS mtu-discovery
    user@spine-2# set protocols bgp group CLOS bfd-liveness-detection minimum-interval 350
    user@spine-2# set protocols bgp group CLOS bfd-liveness-detection multiplier 3
    user@spine-2# set protocols bgp group CLOS bfd-liveness-detection session-mode single-hop
    user@spine-2# set protocols bgp group CLOS multipath multiple-as
    user@spine-2# set protocols bgp group CLOS neighbor 192.168.0.13 peer-as 65200
    user@spine-2# set protocols bgp group CLOS neighbor 192.168.0.15 peer-as 65201
    user@spine-2# set protocols bgp group CLOS neighbor 192.168.0.17 peer-as 65202
    user@spine-2# set protocols bgp group CLOS neighbor 192.168.0.19 peer-as 65203
    user@spine-2# set protocols bgp group CLOS neighbor 192.168.0.21 peer-as 65204
    user@spine-2# set protocols bgp group CLOS neighbor 192.168.0.23 peer-as 65205
    user@spine-2# set protocols bgp group contrail type internal
    user@spine-2# set protocols bgp group contrail multihop
    user@spine-2# set protocols bgp group contrail local-address 10.20.20.2
    user@spine-2# set protocols bgp group contrail family inet-vpn any
    user@spine-2# set protocols bgp group contrail family evpn signaling
    user@spine-2# set protocols bgp group contrail family route-target
    user@spine-2# set protocols bgp group contrail neighbor 10.20.20.10 ## To Spine 1
    user@spine-2# set protocols bgp group contrail neighbor 172.16.15.3
    user@spine-2# set protocols bgp group contrail neighbor 172.16.15.4
    user@spine-2# set protocols bgp group contrail neighbor 172.16.15.5
  4. Configure routing policy elements to enable equal-cost multipath (ECMP) load balancing and reachability to the loopback interfaces of the leaf devices:
    [edit]
    user@spine-2# set routing-options forwarding-table export PFE-LB
    user@spine-2# set policy-options policy-statement PFE-LB then load-balance per-packet
    user@spine-2# set policy-options policy-statement bgp-clos-in term loopbacks from route-filter 10.20.20.0/24 orlonger
    user@spine-2# set policy-options policy-statement bgp-clos-in term loopbacks then accept
    user@spine-2# set policy-options policy-statement bgp-clos-out term loopback from protocol direct
    user@spine-2# set policy-options policy-statement bgp-clos-out term loopback from route-filter 10.20.20.0/24 orlonger
    user@spine-2# set policy-options policy-statement bgp-clos-out term loopback then next-hop self
    user@spine-2# set policy-options policy-statement bgp-clos-out term loopback then accept
    user@spine-2# set policy-options policy-statement bgp-clos-out term direct_route from protocol direct
    user@spine-2# set policy-options policy-statement bgp-clos-out term direct_route from route-filter 192.168.0.0/24 orlonger
    user@spine-2# set policy-options policy-statement bgp-clos-out term direct_route then next-hop self
    user@spine-2# set policy-options policy-statement bgp-clos-out term direct_route then accept
  5. Configure graceful Routing Engine switchover, nonstop routing, and graceful restart:
    [edit]
    user@spine-2# set chassis redundancy graceful-switchover
    user@spine-2# set protocols bgp graceful-restart
    user@spine-2# set routing-options nonstop-routing

Configuring the Contrail and OVSDB Overlay for the IaaS Solution

This section explains how to configure OVSDB and Contrail on the top-of-rack (TOR) switches to create the overlay for the IaaS solution.

Note
  • Before starting this section, you must have Contrail installed, and all the Contrail control nodes and TSNs must have full connectivity to each other.

  • Because this guide focuses on the highlights of the configuration steps, it does not provide a thorough explanation of Contrail configuration.

  • Detailed installation guidelines for Contrail, adding or removing top-of-rack switches or vRouters by using fab commands, enabling Contrail high availability, and debugging are beyond the scope of this document. For more information on these topics, see Using TOR Switches and OVSDB to Extend the Contrail Cluster to Other Instances, Baremetal Support, and Juniper OpenStack High Availability.

Configuring OVSDB

Step-by-Step Procedure

Configure OVSDB on the top-of-rack switches.

  1. Configure hosts (control nodes, TSNs, and compute nodes) with parameters that allow them to interoperate with all components in the Contrail overlay network. A common way to handle this step is to create Python scripts. Excerpts from a testbed.py configuration file contain annotated Python snippets that cover the common configuration tasks required for Hosts 1 through 8 to reach Leaf 1 and each other; an illustrative sketch of such a file follows this procedure.

    Note

    The testbed.py configuration file can be found in the following directory on a control node (such as Host1-CN1): /opt/contrail/utils/fabfile/testbeds/testbed.py

    ## testbed.py Python script excerpts

  2. Configure OVSDB and the VXLAN tunnel endpoint (VTEP) on the top-of-rack switches. After Contrail is up and all the nodes and top-of-rack switches are reachable through their loopback addresses, add the OVSDB and VTEP configuration on the switches.

    Note

    The following configuration applies to Leaf 1, so extend this configuration model to the other leaf devices as well.

    Configure OVSDB Interfaces

    [edit]
    user@leaf-1# set protocols ovsdb interfaces xe-0/0/22:3

    Configure the Contrail Controller IP Address (Internal Virtual IP Address)

    [edit]
    user@leaf-1# set protocols ovsdb controller 172.16.15.100 protocol ssl port 6646
    user@leaf-1# set protocols ovsdb controller 172.16.15.100 inactivity-probe-duration 20000
    user@leaf-1# set switch-options ovsdb-managed

    Configure the VXLAN Tunnel Endpoint (VTEP) Interface

    [edit]
    user@leaf-1# set switch-options vtep-source-interface lo0.0
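
The following minimal sketch illustrates the general shape of the testbed.py file referenced in step 1, populated with the node addresses from Table 4. It is an illustration only: the role names, hostnames, and credentials shown here are assumptions based on the standard Contrail 2.x fab provisioning format, so always start from the sample testbed files shipped with your Contrail release rather than from this sketch.

## Illustrative testbed.py sketch (assumptions noted in the comments)
from fabric.api import env

# Node host strings (user@address); addresses are taken from Table 4
host1 = 'root@172.16.15.3'   # Host1-CN1 (control node)
host2 = 'root@172.16.15.4'   # Host2-CN2 (control node)
host3 = 'root@172.16.15.5'   # Host3-CN3 (control node)
host4 = 'root@172.16.15.6'   # Host4-TSN1 (TOR services node)
host5 = 'root@172.16.15.7'   # Host5-TSN2 (TOR services node)
host6 = 'root@172.16.15.8'   # Host6-TSN3 (TOR services node)
host7 = 'root@10.35.35.2'    # Host 7 (compute node, vRouter)
host8 = 'root@10.36.36.2'    # Host 8 (compute node, vRouter)

# Map Contrail roles to nodes (role names follow the common Contrail 2.x layout)
env.roledefs = {
    'all':       [host1, host2, host3, host4, host5, host6, host7, host8],
    'cfgm':      [host1, host2, host3],
    'openstack': [host1, host2, host3],
    'control':   [host1, host2, host3],
    'collector': [host1, host2, host3],
    'database':  [host1, host2, host3],
    'webui':     [host1, host2, host3],
    'tsn':       [host4, host5, host6],
    'compute':   [host7, host8],
}

# Hostnames and credentials are placeholders -- replace with your own values
env.hostnames = {
    'all': ['host1-cn1', 'host2-cn2', 'host3-cn3', 'host4-tsn1',
            'host5-tsn2', 'host6-tsn3', 'host7', 'host8'],
}
env.passwords = {host: 'REPLACE_ME' for host in env.roledefs['all']}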

Configuring Contrail

GUI Step-by-Step Procedure

Contrail enables you to configure virtual networks and connect the physical top-of-rack switches to these networks. Table 6 shows the virtual network addresses and identifiers for Leaf 0 and Leaf 4 used in this example.

Table 6: Virtual Network Details for Leaf 0 and Leaf 4

                          Leaf 0       Leaf 4
VTEP address (loopback)   10.20.20.3   10.20.20.7
VNI                       10000        10000
VLAN ID                   500          500
Hosts (MAC address)       *03:07*      *05:03*

Note

More information on Contrail can be found in the Contrail Feature Guide.

To establish virtual networks and connect the physical devices to these networks by using Contrail:

  1. Log in to Contrail. From the main Contrail dashboard, select Configure > Networking > Networks, then select Create Networks. Enter a network name, add network details (such as subnets, host routes, and so on), and click the Save button.

  2. Review the network properties you have configured. Select Configure > Networking > Networks and then select the pull-down arrow next to the network you want to display.

  3. Add physical interfaces into your virtual network. Select Configure > Physical Devices > Interfaces, find the physical device that contains the interfaces you wish to add, fill out the Add Interface dialog box with the interface name and physical device name, and click the Save button.

  4. Repeat the interface configuration process to add logical interface information. Fill out the Add Interface dialog box a second time, but select Logical for the Type field, add the logical interface properties, and click the Save button.

  5. Verify your interface configuration by selecting Configure > Physical Devices > Interfaces and using the pull-down arrow next to the network you want to display.

  6. Select Configure > Physical Devices > Physical Routers to create virtual networks and attach them to the top-of-rack switch interfaces (the same ones configured as OVSDB interfaces on the leaf devices).

  7. Using the configuration steps shown in Configuring OVSDB and Configuring Contrail, use similar settings to the ones you used with Leaf 1 to create a virtual network on Leaf 0 and Leaf 4. Using a traffic generator, create 10 hosts on each leaf device, and send bidirectional traffic between the hosts.

Configuring the Layer 3 Gateway for the IaaS Solution

Step-by-Step Procedure

A Layer 3 gateway in a virtualized network enables traffic to travel between virtual and physical networks, or between virtual networks. In many cases, the virtual network is created using overlay technologies such as tunneling, so you must configure the Layer 3 gateway to communicate with the overlay network to permit traffic to pass back and forth. In this solution, Juniper Networks MX Series routers act as the Layer 3 gateway for inter-virtual network (inter-VN) traffic.

To establish a Layer 3 gateway:

  1. Configure Layer 3 routing to carry inter-VN traffic. Table 7 shows the addresses and identifiers required to route traffic between the virtual networks in this solution.

    Table 7: Inter-VN Routing Details

                              Leaf 0               Leaf 4
    VTEP address (loopback)   10.20.20.3           10.20.20.7
    VNI                       10000                20000
    Virtual Network           10.1.1.0/24          10.2.2.0/24
    VLAN ID                   500                  501
    Hosts (MAC Addresses)     *03:07*              *05:03*
    Hosts (IP Addresses)      10.1.1.5-10.1.1.14   10.2.2.100-10.2.2.109
    Virtual Gateway           10.1.1.1             10.2.2.2

    Note: The VNI numbers for the two virtual networks are different.

  2. Configure IBGP peering on the spine devices. IBGP acts as an overlay between the spine devices and the Contrail nodes.

    Table 8 shows where you need to establish IBGP peering between the Contrail control nodes and the MX Series routers acting as spine devices.

    Table 8: IBGP Peering Between Contrail and the MX Series Routers

                                 Spine 1       Spine 2
    Host1-CN1 (control node)     IBGP          IBGP
    Host2-CN2 (control node)     IBGP          IBGP
    Host3-CN3 (control node)     IBGP          IBGP
    Spine 1                      -             IBGP
    Spine 2                      IBGP          -
    Route target                 65200:12345   65204:12346

    Note: Add this route target when creating virtual networks with the Contrail WebGUI.

    Figure 2 shows the virtual routing and forwarding (VRF) instances used by the spine devices to enable inter-VN routing between the virtual networks.

    Figure 2: Inter-Virtual Network Routing

    Note

    The following configuration applies to Spine 1, so remember to configure Spine 2 as well.

    Configure IBGP Peering on the Spine Devices

    [edit]
    user@spine-1# set protocols bgp group contrail type internal
    user@spine-1# set protocols bgp group contrail multihop
    user@spine-1# set protocols bgp group contrail local-address 10.20.20.10
    user@spine-1# set protocols bgp group contrail family inet-vpn any
    user@spine-1# set protocols bgp group contrail family evpn signaling
    user@spine-1# set protocols bgp group contrail family route-target
    user@spine-1# set protocols bgp group contrail neighbor 172.16.15.3 ## To Host1-CN1
    user@spine-1# set protocols bgp group contrail neighbor 10.20.20.2 ## To Spine 2
    user@spine-1# set protocols bgp group contrail neighbor 172.16.15.4 ## To Host2-CN2
    user@spine-1# set protocols bgp group contrail neighbor 172.16.15.5 ## To Host3-CN3
  3. In Contrail, select Configure > Infrastructure > BGP Routers to configure IBGP for the Contrail control nodes.
  4. After setting the IBGP configuration in Contrail, select Configure > Infrastructure > BGP Routers to view the configured devices.
  5. On the spine devices, configure routing instances, EVPN, policy options, and interfaces.
    1. Configure a Layer 3 Routing Instance for Virtual Network 1 on the Spine Devices

      Create a VRF routing instance for VN1 that includes an IRB interface, route distinguisher, import and export policies, route target, and auto export.

      [edit]
      user@spine-1# set routing-instances evpn-inet-test instance-type vrf
      user@spine-1# set routing-instances evpn-inet-test interface irb.500
      user@spine-1# set routing-instances evpn-inet-test route-distinguisher 10.20.20.3:3000
      user@spine-1# set routing-instances evpn-inet-test vrf-import import-int
      user@spine-1# set routing-instances evpn-inet-test vrf-export export-int
      user@spine-1# set routing-instances evpn-inet-test vrf-export export-int2
      user@spine-1# set routing-instances evpn-inet-test vrf-target target:65200:12345
      user@spine-1# set routing-instances evpn-inet-test vrf-table-label
      user@spine-1# set routing-instances evpn-inet-test routing-options auto-export family inet unicast
    2. Configure an EVPN Instance for Virtual Network 1 on the Spine Devices

      Create an EVPN instance for VN1 that includes a VTEP interface, route distinguisher, route target, VXLAN, VLAN, and IRB interface details.

      [edit]
      user@spine-1# set routing-instances evpn-vxlan-S1 vtep-source-interface lo0.0
      user@spine-1# set routing-instances evpn-vxlan-S1 instance-type virtual-switch
      user@spine-1# set routing-instances evpn-vxlan-S1 route-distinguisher 10.20.20.3:2222
      user@spine-1# set routing-instances evpn-vxlan-S1 vrf-target target:65200:12345
      user@spine-1# set routing-instances evpn-vxlan-S1 protocols evpn encapsulation vxlan
      user@spine-1# set routing-instances evpn-vxlan-S1 protocols evpn extended-vni-list all
      user@spine-1# set routing-instances evpn-vxlan-S1 bridge-domains BD-500 vlan-id 500
      user@spine-1# set routing-instances evpn-vxlan-S1 bridge-domains BD-500 interface xe-0/2/1.500
      user@spine-1# set routing-instances evpn-vxlan-S1 bridge-domains BD-500 routing-interface irb.500
      user@spine-1# set routing-instances evpn-vxlan-S1 bridge-domains BD-500 vxlan vni 10000
      user@spine-1# set routing-instances evpn-vxlan-S1 bridge-domains BD-500 vxlan ingress-node-replication
    3. Configure a Layer 3 Routing Instance for Virtual Network 2 on the Spine Devices

      Create a VRF routing instance for VN2 that includes an IRB interface, route distinguisher, import and export policies, route target, and auto export.

      [edit]
      user@spine-1# set routing-instances evpn-inet-test2 instance-type vrf
      user@spine-1# set routing-instances evpn-inet-test2 interface irb.501
      user@spine-1# set routing-instances evpn-inet-test2 route-distinguisher 10.20.20.7:3006
      user@spine-1# set routing-instances evpn-inet-test2 vrf-import import-int
      user@spine-1# set routing-instances evpn-inet-test2 vrf-export export-int2
      user@spine-1# set routing-instances evpn-inet-test2 vrf-export export-int
      user@spine-1# set routing-instances evpn-inet-test2 vrf-target target:65204:12346
      user@spine-1# set routing-instances evpn-inet-test2 vrf-table-label
      user@spine-1# set routing-instances evpn-inet-test2 routing-options auto-export family inet unicast
    4. Configure an EVPN Instance for Virtual Network 2 on the Spine Devices

      Create an EVPN instance for VN2 that includes a VTEP interface, route distinguisher, route target, VXLAN, VLAN, and IRB interface details.

      [edit]
      user@spine-1# set routing-instances evpn-vxlan-L2 vtep-source-interface lo0.0
      user@spine-1# set routing-instances evpn-vxlan-L2 instance-type virtual-switch
      user@spine-1# set routing-instances evpn-vxlan-L2 route-distinguisher 10.20.20.7:3333
      user@spine-1# set routing-instances evpn-vxlan-L2 vrf-target target:65204:12346
      user@spine-1# set routing-instances evpn-vxlan-L2 protocols evpn encapsulation vxlan
      user@spine-1# set routing-instances evpn-vxlan-L2 protocols evpn extended-vni-list all
      user@spine-1# set routing-instances evpn-vxlan-L2 bridge-domains BD-501 vlan-id 501
      user@spine-1# set routing-instances evpn-vxlan-L2 bridge-domains BD-501 interface xe-0/2/3.501
      user@spine-1# set routing-instances evpn-vxlan-L2 bridge-domains BD-501 routing-interface irb.501
      user@spine-1# set routing-instances evpn-vxlan-L2 bridge-domains BD-501 vxlan vni 20000
      user@spine-1# set routing-instances evpn-vxlan-L2 bridge-domains BD-501 vxlan ingress-node-replication
    5. Configure Interfaces on the Spine Devices

      Enable physical interfaces, IRB interfaces, and the loopback address of the spine devices to reach the two virtual networks.

      [edit]
      user@spine-1# set interfaces xe-0/2/1 flexible-vlan-tagging
      user@spine-1# set interfaces xe-0/2/1 encapsulation flexible-ethernet-services
      user@spine-1# set interfaces xe-0/2/1 unit 500 encapsulation vlan-bridge
      user@spine-1# set interfaces xe-0/2/1 unit 500 vlan-id 500
      user@spine-1# set interfaces irb unit 500 family inet address 10.1.1.2/24 virtual-gateway-address 10.1.1.1
      user@spine-1# set interfaces xe-0/2/3 flexible-vlan-tagging
      user@spine-1# set interfaces xe-0/2/3 encapsulation flexible-ethernet-services
      user@spine-1# set interfaces xe-0/2/3 unit 501 encapsulation vlan-bridge
      user@spine-1# set interfaces xe-0/2/3 unit 501 vlan-id 501
      user@spine-1# set interfaces irb unit 501 family inet address 10.2.2.2/24 virtual-gateway-address 10.2.2.1
      user@spine-1# set interfaces lo0 unit 0 family inet address 10.20.20.10/32
    6. Configure Policy Options on the Spine Devices

      Redirect IRB interface traffic into a community, give each community a route target, and allow route advertisements from both virtual networks to reach each other.

      [edit]
      user@spine-1# set policy-options policy-statement import-int term 1 from community inet1
      user@spine-1# set policy-options policy-statement import-int term 1 from community inet2
      user@spine-1# set policy-options policy-statement import-int term 1 then accept
      user@spine-1# set policy-options policy-statement export-int term 1 from interface irb.500
      user@spine-1# set policy-options policy-statement export-int term 1 then community add inet1
      user@spine-1# set policy-options policy-statement export-int term 1 then accept
      user@spine-1# set policy-options policy-statement export-int2 term 1 from interface irb.501
      user@spine-1# set policy-options policy-statement export-int2 term 1 then community add inet2
      user@spine-1# set policy-options policy-statement export-int2 term 1 then accept
      user@spine-1# set policy-options community inet1 members target:65200:12345
      user@spine-1# set policy-options community inet2 members target:65204:12346
    7. Configure Dynamic Tunnels for Layer 3 VPNs on the Spine Devices

      Enable Layer 3 VPNs to use dynamic tunnels.

      [edit]
      user@spine-1# set chassis fpc 0 pic 0 tunnel-services
      user@spine-1# set chassis fpc 0 pic 0 adaptive-services service-package layer-3
      user@spine-1# set chassis network-services enhanced-ip
      user@spine-1# set routing-options dynamic-tunnels evpn_test1 source-address 10.20.20.10
      user@spine-1# set routing-options dynamic-tunnels evpn_test1 gre
      user@spine-1# set routing-options dynamic-tunnels evpn_test1 destination-networks 172.16.15.0/24

Configuring Contrail High Availability for the IaaS Solution

Step-by-Step Procedure

To enable Contrail high availability, you need to configure an internal virtual IP address and an external virtual IP address. Table 9 shows the high availability options that were configured in the Contrail configuration file named testbed.py on Host1-CN1 (/opt/contrail/utils/fabfile/testbeds).

Table 9: Contrail - High Availability Options

Internal VIP address: The virtual IP address of the OpenStack high availability nodes in the control data network. In a single-interface setup, the internal_vip is in the management data control network.

External VIP address: The virtual IP address of the OpenStack high availability nodes in the management network. In a single-interface setup, the external_vip is not required.
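
In the testbed.py file, these two options are typically expressed as an env.ha dictionary. The fragment below is an illustrative sketch only, using the virtual IP addresses from this example (internal 172.16.15.100, external 10.94.191.153); the key names follow the common Contrail 2.x convention and should be verified against the sample testbed files for your release.

## Illustrative testbed.py fragment (assumes the Contrail 2.x env.ha convention)
from fabric.api import env

env.ha = {
    'internal_vip': '172.16.15.100',   # VIP on the control/data network (see Table 4)
    'external_vip': '10.94.191.153',   # VIP on the management network (see Table 11)
}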

Note

For more information on Contrail High Availability, see: Contrail High Availability.

To enable Contrail high availability:

  1. Configure the Contrail control nodes. As shown in Figure 3, this solution uses three control nodes (Host1-CN1, Host2-CN2, and Host3-CN3) and three top-of-rack services nodes (TSNs) (Host4-TSN1, Host5-TSN2, and Host6-TSN3). All the Contrail nodes are connected through Leaf 5. VRRP runs between the three control nodes and elects a master node, which owns the virtual IP address. If the master node fails, VRRP elects a new master and the virtual IP address moves to it.
    Figure 3: Contrail - Control Nodes and TSNs

    Table 10 shows the IP addresses for the Contrail control nodes used in this solution.

    Table 10: Contrail - Nodes and IP Addresses

    Node Name    IP Address
    Host1-CN1    172.16.15.3
    Host2-CN2    172.16.15.4
    Host3-CN3    172.16.15.5
    Host4-TSN1   172.16.15.6
    Host5-TSN2   172.16.15.7
    Host6-TSN3   172.16.15.8

  2. Configure interfaces and policies on Leaf 5 to interconnect the Contrail nodes and the other leaf devices.

    Configure an IRB Interface on Leaf 5

    You must configure the IRB interface with the same network prefix as the Contrail nodes.

    [edit]
    user@leaf-5# set interfaces irb description "Gateway to the Contrail control nodes and TSNs"
    user@leaf-5# set interfaces irb unit 5 family inet address 172.16.15.254/24
    user@leaf-5# set vlans Internal_VIP l3-interface irb.5

    Configure Policy Options on Leaf 5

    You must configure policies so that the Contrail nodes can reach all leaf devices in the network.

    [edit]
    user@leaf-5# set policy-options policy-statement bgp-clos-out term contrail from protocol direct
    user@leaf-5# set policy-options policy-statement bgp-clos-out term contrail from route-filter 172.16.15.0/24 orlonger
    user@leaf-5# set policy-options policy-statement bgp-clos-out term contrail then next-hop self
    user@leaf-5# set policy-options policy-statement bgp-clos-out term contrail then accept
  3. Configure the virtual IP address of the OVSDB controller on the leaf switches. (Leaf 0 and Leaf 1 are shown below.)

    Note

    You also configure the virtual IP address (172.16.15.100) in the testbed.py file on the hosts.

    [edit]
    user@leaf-0# set protocols ovsdb controller 172.16.15.100 protocol ssl port 6645
    [edit]
    user@leaf-1# set protocols ovsdb controller 172.16.15.100 protocol ssl port 6646
  4. Configure an external virtual IP address (10.94.191.153) in the testbed.py file on the hosts. Table 11 shows the external virtual IP address and control node management addresses used in this solution.

    Table 11: Contrail - Control Node Management IP Addresses and External Virtual IP Address

    Name                          IP Address
    Host1-CN1                     10.94.191.150
    Host2-CN2                     10.94.191.151
    Host3-CN3                     10.94.191.152
    External Virtual IP Address   10.94.191.153

    Note
    • OpenStack high availability offers a user interface that provides anytime management access, even if a control node fails.

    • For information on TSN high availability, see: Baremetal Support.

Configuring the Contrail vRouter for the IaaS Solution

Step-by-Step Procedure

The Contrail vRouter operates in the virtualized environment shown in Figure 4.

Figure 4: Contrail - Virtual Network Connectivity for the Virtual Machines

The Contrail solution uses an overlay for network virtualization. Contrail vRouter provides a data plane, which enables a virtual interface to be associated with a VRF routing instance. Packets are encapsulated in a VXLAN tunnel at the VTEP and sent between tenant virtual machines across the IP fabric. This virtualized routing system connects the compute servers that reside in different virtual networks. To enable the vRouter:

  1. Configure the vRouter by adding information about the new compute nodes to your existing testbed.py script file. In this solution, entries for Host 7 (10.35.35.2) and Host 8 (10.36.36.2) appear in the testbed.py file; an illustrative fragment follows this procedure. For detailed vRouter configuration instructions, see page 171 in the Contrail Getting Started Guide.
  2. Configure interfaces on the leaf devices to reach the hosts, and add a policy to enable the Contrail control nodes, TSNs, and leaf devices in the network to reach the compute nodes. In this example, Host 7 connects to Leaf 0 and Host 8 connects to Leaf 1.

    Configure Leaf 0 to Connect with Compute Node Host 7

    [edit]
    user@leaf-0# set interfaces ge-1/0/0 unit 0 family inet address 10.35.35.1/24
    user@leaf-0# set policy-options policy-statement bgp-clos-out term comp-1 from route-filter 10.35.35.0/24 orlonger

    Configure Leaf 1 to Connect with Compute Node Host 8

    [edit]
    user@leaf-1# set interfaces ge-1/0/0 unit 0 family inet address 10.36.36.1/24
    user@leaf-1# set policy-options policy-statement bgp-clos-out term comp-2 from route-filter 10.36.36.0/24 orlonger
  3. Verify connectivity to the compute nodes from the corresponding leaf devices.

    Verify Connectivity from Leaf 0 to Host 7

    user@leaf-0> show arp no-resolve | match 10.35.35

    Verify Connectivity from Leaf 1 to Host 8

    user@leaf-1> show arp no-resolve | match 10.36.36
  4. Verify the vRouter status in the compute nodes.

    Verify Contrail Status in Host 7

    root@host-7:~# contrail-status

    Verify Contrail Status in Host 8

    root@host-8:~# contrail-status
  5. Launch virtual machines by using the OpenStack dashboard. Select Project > Compute > Instances > Launch Instance > Details to configure the desired properties for your virtual machine. For this solution, create vm-10000 with an IP address of 10.1.1.3 and a VNI of 10000, and vm-20000 with an IP address of 10.2.2.3 and a VNI of 20000.

    Note

    vm-10000 is on compute node Host 7 (OpenStack-Network) and connects to Leaf 0, while vm-20000 is on Host 8 (OpenStack-Compute2) and connects to Leaf 1.

    For an administrator view of the virtual machines, select Admin > System Panel > Instances.

  6. To view the virtual machines you created, select Project > Compute > Instances.
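
As referenced in step 1, the compute nodes are declared in the same testbed.py file used for the rest of the cluster. The fragment below is an illustrative sketch only of how entries for Host 7 and Host 8 might be added to an existing file; the variable and role names are assumptions consistent with the earlier sketch, and the exact procedure (including the fab command used to provision the new nodes) is described in the Contrail Getting Started Guide.

## Illustrative testbed.py fragment -- adding the two vRouter compute nodes
from fabric.api import env

host7 = 'root@10.35.35.2'   # Host 7, attached to Leaf 0
host8 = 'root@10.36.36.2'   # Host 8, attached to Leaf 1

# Append the new nodes to the existing role definitions
env.roledefs.setdefault('all', []).extend([host7, host8])
env.roledefs.setdefault('compute', []).extend([host7, host8])
env.passwords.update({host7: 'REPLACE_ME', host8: 'REPLACE_ME'})  # placeholder credentials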

Verification

Confirm that the IaaS IP fabric configuration is working properly.

Leaf: Verifying Interfaces

Purpose

Verify the state of key interfaces.

Action

Verify that the compute server-facing interface (ge-1/0/0) is up:

user@leaf-1> show interfaces terse | match 10.36.36

Verify that the loopback interface (lo0) is up:

user@leaf-1> show interfaces terse | match lo0

Meaning

The compute server-facing interface and loopback interface are functioning normally.

Leaf: Verifying IPv4 EBGP Sessions

Purpose

Verify the state of IPv4 EBGP sessions between the leaf and spine devices.

Action

Verify that IPv4 EBGP sessions are established with Spine 1 and Spine 2:

user@leaf-1> show bgp summary

Meaning

The EBGP sessions are established and functioning correctly.

Leaf: Verifying ECMP

Purpose

Verify that multiple equal-cost paths exist between the leaf devices.

Action

Verify that multiple paths to Leaf 0 are available:

user@leaf-1> show route forwarding-table destination 10.20.20.3

Verify that multiple paths to Leaf 2 are available:

user@leaf-1> show route forwarding-table destination 10.20.20.5

Verify that multiple paths to Leaf 3 are available:

user@leaf-1> show route forwarding-table destination 10.20.20.6

Verify that multiple paths to Leaf 4 are available:

user@leaf-1> show route forwarding-table destination 10.20.20.7

Verify that multiple paths to Leaf 5 are available:

user@leaf-1> show route forwarding-table destination 10.20.20.8

Meaning

The multiple equal-cost paths to the other leaf devices are available and functioning correctly.

Leaf: Verifying Spine and Leaf Reachability

Purpose

Verify that every leaf and spine device is reachable.

Action

Use the ping operation to verify reachability to Spine 1:

user@leaf-1> ping 10.20.20.10

Use the ping operation to verify reachability to Spine 2:

user@leaf-1> ping 10.20.20.2

Use the ping operation to verify reachability to Leaf 0:

user@leaf-1> ping 10.20.20.3

Use the ping operation to verify self reachability on Leaf 1:

user@leaf-1> ping 10.20.20.4

Use the ping operation to verify reachability to Leaf 2:

user@leaf-1> ping 10.20.20.5

Use the ping operation to verify reachability to Leaf 3:

user@leaf-1> ping 10.20.20.6

Use the ping operation to verify reachability to Leaf 4:

user@leaf-1> ping 10.20.20.7

Use the ping operation to verify reachability to Leaf 5:

user@leaf-1> ping 10.20.20.8

Meaning

Leaf 1 has reachability to all leaf and spine devices.

Leaf: Verifying Contrail Server Reachability

Purpose

Verify reachability to the Contrail servers acting as control nodes, TOR services nodes (TSNs), and compute nodes.

Action

Use the ping operation to verify reachability to Control Node 1:

user@leaf-1> ping 172.16.15.3

Use the ping operation to verify reachability to Control Node 2:

user@leaf-1> ping 172.16.15.4

Use the ping operation to verify reachability to Control Node 3:

user@leaf-1> ping 172.16.15.5

Use the ping operation to verify reachability to TSN 1:

user@leaf-1> ping 172.16.15.6

Use the ping operation to verify reachability to TSN 2:

user@leaf-1> ping 172.16.15.7

Use the ping operation to verify reachability to TSN 3:

user@leaf-1> ping 172.16.15.8

Use the ping operation to verify reachability to Compute Node 1:

user@leaf-1> ping 10.35.35.2

Use the ping operation to verify reachability to Compute Node 2:

user@leaf-1> ping 10.36.36.2

Meaning

Leaf 1 has reachability to all Contrail control nodes, TSNs, and compute nodes.

Leaf: Verifying BFD Sessions

Purpose

Verify the state of BFD for the IPv4 BGP connections between the leaf device and spine devices.

Action

Verify that the BFD sessions are up:

user@leaf-1> show bfd session

Meaning

The BFD sessions are established and functioning correctly.

Spine: Verifying IPv4 EBGP Sessions

Purpose

Verify the state of leaf-facing IPv4 EBGP sessions.

Action

Verify that IPv4 EBGP sessions are established:

user@spine-2> show bgp summary

Meaning

The EBGP sessions are established and functioning correctly.

Spine: Verifying BFD Sessions

Purpose

Verify the state of BFD for the BGP sessions between the leaf and spine devices.

Action

Verify that the BFD sessions are up:

user@spine-2> show bfd session

Meaning

The BFD sessions are established and functioning correctly.

Leaf: Verifying OVSDB Operation and Installation of the Contrail Virtual Network Configuration

Purpose

Verify that OVSDB has downloaded the Contrail virtual network configuration and installed it on the leaf devices.

Action

Verify that the OVSDB controller is operational:

user@leaf-1> show ovsdb controller

Verify that the Contrail-generated VLAN configuration is installed:

user@leaf-1> show configuration interfaces xe-0/0/22:3 | display set

Verify that the server-facing interface is up:

user@leaf-1> show interfaces terse xe-0/0/22:3

Learn the name of the Contrail instance discovered by OVSDB:

user@leaf-1> show ovsdb interface | match 500

Use the name of the Contrail instance to verify the status of the logical switch:

user@leaf-1> show ovsdb logical-switch Contrail-b1456b19-f183-4b9c-bb2a-95b6d75e532c

Verify the MAC addresses known by the logical switch:

user@leaf-1> show ovsdb mac logical-switch Contrail-b1456b19-f183-4b9c-bb2a-95b6d75e532c
Note

172.16.15.8 (Host6-TSN3) is the Contrail services node handling broadcast, unknown, and multicast (BUM) traffic. Currently the TOR agent for Leaf 1 resides on Host6-TSN3. If Host6-TSN3 fails, the TOR agent for Leaf 1 will move to one of the remaining TSNs (either TSN1 or TSN2). A high availability proxy on the control node determines which TOR agent should be paired with which TSN.

Meaning

OVSDB is functioning correctly and the Contrail network information is installed on the leaf device.

Leaf: Verifying VTEP Routes and Endpoints

Purpose

Verify that the correct VTEP routes and loopback addresses are reachable by the leaf devices.

Note

While reviewing this section, keep in mind the following IP addresses:

  • Leaf device loopback interface IP address range: 10.20.20.0/24

  • TSN IP address range: 172.16.15.0/24

  • Compute node attached to Leaf 0: 10.35.35.0/24

  • Compute node attached to Leaf 1 (Local): 10.36.36.0/24

Action

Verify the VXLAN route table:

user@leaf-1> show route table :vxlan.inet.0

Verify the remote VXLAN tunnel endpoint (VTEP):

user@leaf-1> show ethernet-switching vxlan-tunnel-end-point remote

Verify all VTEP interfaces:

user@leaf-1> show interfaces vtep

Meaning

All VTEP routes and loopback addresses are reachable by the leaf devices.

Leaf: Verifying Contrail and OVSDB Configuration on Leaf 4

Purpose

Verify that Leaf 4 is configured properly.

Action

Verify Contrail and OVSDB configuration on Leaf 4.

Verify that the Contrail-generated VLAN configuration is installed:

user@leaf-4> show configuration interfaces xe-0/0/22:3

Verify that the Contrail instance is mapped to the VLAN and VNI:

user@leaf-4> show configuration | display set | match 10000

Verify that the Contrail instance is mapped to the OVSDB interface:

user@leaf-4> show ovsdb interface | match 500

Meaning

Contrail and OVSDB are functioning properly on Leaf 4.

Leaf: Verifying Contrail and OVSDB on Leaf Devices

Purpose

Verify that Layer 2 information is being shared between the virtual network and the leaf devices.

Action

Using a series of operational mode commands, verify the proper operation of the virtual network.

Verify the presence of the 10 hosts in the Ethernet switching table of Leaf 0:

user@leaf-0> show ethernet-switching table | match 03:07

Verify the OVSDB local MAC address table of Leaf 0:

user@leaf-0> show ovsdb mac local | match 03:07

Verify the OVSDB remote MAC address table of Leaf 0. Note the VTEP address indicates termination of the VXLAN tunnel on Leaf 4:

user@leaf-0> show ovsdb mac remote | match 05:03

Verify the presence of the 10 hosts in the Ethernet switching table of Leaf 4:

user@leaf-4> show ethernet-switching table | match 05:03

Verify the OVSDB local MAC address table of Leaf 4:

user@leaf-4> show ovsdb mac local | match 05:03

Verify the OVSDB remote MAC address table of Leaf 4. Note the VTEP address indicates termination of the VXLAN tunnel on Leaf 0:

user@leaf-4> show ovsdb mac remote | match 03:07 | except 07:02

Meaning

Layer 2 information is being shared between the leaf devices and the virtual network.

Spine: Verifying Routing Instances for the Layer 3 Gateway and Inter-VN Routing

Purpose

Verify that the EVPN and Layer 3 instances are operational.

Action

  1. Verify that the EVPN instance for Virtual Network 1 is working:

    user@spine-1> show evpn instance evpn-vxlan-S1 extensive | no-more
  2. Verify that the EVPN instance for Virtual Network 2 is working:

    user@spine-1> show evpn instance evpn-vxlan-L2 extensive | no-more
  3. Verify that the EVPN instance routing table contains the expected entries:

    user@spine-1> show route table bgp.evpn.0 | no-more
  4. Verify access to both Virtual Network 1 and Virtual Network 2 from VN1:

    user@spine-1> show route table evpn-inet-test.inet.0
  5. Verify access to both Virtual Network 1 and Virtual Network 2 from VN2:

    user@spine-1> show route table evpn-inet-test2.inet.0
  6. Verify the EVPN VXLAN routing table contains the expected entries:

    user@spine-1> show route table evpn-vxlan

Meaning

The EVPN routing instances, VXLAN routing tables, and BGP are functioning properly on Spine 1.

Leaf: Verifying the Layer 3 Gateway Path

Purpose

Verify that the Layer 3 gateway is functioning properly.

Action

Verify that the switching table on Leaf 0 maps to the VTEP interface (vtep.32775):

user@leaf-0> show ethernet-switching table | match 5e:00 | except et-0/0/1

Verify that the VTEP interface (vtep.32775) maps to the Layer 3 loopback address (10.20.20.2):

user@leaf-0> show interfaces vtep.32775

Verify that the OVSDB MAC addresses map to the Layer 3 loopback address (10.20.20.2):

user@leaf-0> show ovsdb mac | match 00:5e:00

Verify that the switching table on Leaf 4 maps to the VTEP interface (vtep.32771):

user@leaf-4> show ethernet-switching table | match 5e:00

Verify that the VTEP interface (vtep.32771) maps to the Layer 3 loopback address (10.20.20.2):

user@leaf-4> show interfaces vtep.32771

Verify that the OVSDB MAC addresses map to the Layer 3 loopback address (10.20.20.2):

user@leaf-4> show ovsdb mac | match 00:5e:00

Meaning

The spine devices are acting as a Layer 3 gateway for the leaf devices as expected.

Spine: Verifying the Layer 3 Gateway Path

Purpose

Verify that the spine devices receive ARP entries for both virtual networks.

Action

Verify that the ARP table on Spine 1 shows entries for both virtual networks (10.1.1.x and 10.2.2.x):

user@spine-1> show arp no-resolve | match vtep

Meaning

The spine devices receive ARP entries for both virtual networks as expected.

Leaf: Verifying Connectivity to the Compute Nodes

Purpose

Verify connectivity to the compute nodes.

Action

Verify connectivity to the compute nodes from the corresponding leaf devices:

Verify Connectivity from Leaf 0 to Host 7

user@leaf-0> show arp no-resolve | match 10.35.35

Verify Connectivity from Leaf 1 to Host 8

user@leaf-1> show arp no-resolve | match 10.36.36

Meaning

The leaf devices can reach the corresponding compute nodes.

Leaf: Verifying Tunnel Creation

Purpose

Verify that packets can travel end to end across the VXLAN tunnels and that VTEP interfaces are operational.

Action

Verify connectivity between virtual machines vm-10000 and vm-20000 by issuing a ping operation or sending traffic across the VXLAN tunnels, and then check for the presence of a remote VTEP on the leaf devices.

Verify the Remote VTEP for Leaf 0

user@leaf-0> show ethernet-switching vxlan-tunnel-end-point remote

Verify the Remote VTEP for Leaf 1

user@leaf-1> show ethernet-switching vxlan-tunnel-end-point remote

Meaning

The remote VTEPs are in place and the traffic can travel over the VXLAN tunnel as expected.