
Example: Configuring the Software as a Service Solution

This example describes how to build large IP fabrics using the Juniper Networks QFX10002 and QFX5100 lines of switches.

Requirements

Table 1 lists the hardware and software components used in this example.

Table 1: Solution Hardware and Software Requirements

Device | Hardware | Software
Core routers | MX480 | Junos OS Release 14.2R2.8
Fabric devices | QFX5100-24Q | Junos OS Release 14.1X53-D35.3
Spine devices | QFX10002-72Q* | Junos OS Release 15.1X53-D32.2
Leaf devices | QFX5100-48S, QFX5100-48T, and OCX1100** | Junos OS Release 14.1X53-D35.3
Servers | IBM Flex and IBM x3750 | VMware ESXi 5.1, VMware vCenter 5.1, Junos Space Release 15.2, Network Director Release 2.5

* This solution has also been validated using QFX5100-24Q switches at the spine layer.

** The Juniper Networks OCX1100 switch is an open networking switch based on hardware specifications ratified by the Open Compute Project (OCP) Foundation.

Overview and Topology

The topology used in this example consists of a series of QFX10002, QFX5100, and OCX1100 devices, and two MX480 devices, as shown in Figure 1.

Figure 1: Software as a Service Solution Topology

In this example, the leaf layer uses a combination of four QFX5100-48S, QFX5100-48T, and OCX1100 switches. The spine layer uses four QFX10002-72Q switches, the fabric layer uses four QFX5100-24Q switches, and the core layer uses two MX480 routers. A series of servers is attached to the leaf layer to serve as typical data center end hosts.

Table 2 and Table 3 list the IP addressing used in this example.

Table 2: IPv4 Addressing

Network | IPv4 Network Subnets
Server to leaf links | 172.16.0.0/16
Leaf to spine links | 192.168.11.0/24
Spine to fabric links | 192.168.13.0/24
Fabric to core links | 192.168.14.0/24
Loopback IPs (for all devices) | 10.0.6.0/16
Anycast IP address | 10.1.1.1/24

Table 3: IPv6 Addressing

Network | IPv6 Network Subnets
Server to leaf links | 2001:db8:2001:92:: to 2001:db8:2001:93::
Leaf to spine links | 2001:db8:2001:1:: to 2001:db8:2001:20::
Spine to fabric links | 2001:db8:2001:21:: to 2001:db8:2001:40::
Fabric to core links | 2001:db8:2001:41:: to 2001:db8:2001:60::
Anycast IP address | 2001:db8:2000::/64

Table 4 lists the AS numbering used in this example.

Table 4: BGP AS Numbering

Device | AS Number
Leaf-0 | 420006000
Leaf-1 | 420006001
Leaf-2 | 420006002
Leaf-3 | 420006003
Spine-0 | 420005000
Spine-1 | 420005001
Spine-2 | 420005002
Spine-3 | 420005003
Fabric-0 | 420005501
Fabric-1 | 420005502
Fabric-2 | 420005503
Fabric-3 | 420005504
Core-1 RED-vpn routing instance | 420006501
Core-2 RED-vpn routing instance | 420006502
Core-1 | 65000
Core-2 | 65000

Configuring an IP Fabric for the Software as a Service Solution

This example explains how to build out the leaf, spine, fabric, and core layers of an IP fabric for the Software as a Service (SaaS) solution.

Configuring Leaf Devices for the IP Fabric

CLI Quick Configuration

Juniper Networks provides tools to help automate the creation of spine-and-leaf IP fabrics for SaaS environments. This solution includes two options to help with IP fabric creation: OpenClos and Junos Space Network Director.

OpenClos is a Python script library that enables you to automate the design, deployment, and maintenance of a Layer 3 fabric built on BGP. To create an IP fabric that uses a spine-and-leaf architecture, the script generates device configuration files and uses zero touch provisioning (ZTP) to push the configuration files to the devices.

OpenClos functionality has also been built into Network Director 2.0 (and later), which allows you to provision spine-and-leaf Layer 3 fabrics using a GUI-based wizard.

For this example, the main configuration elements for the leaf devices were created using Network Director. For more information on using Network Director or OpenClos for this solution, see Configuring an IP Fabric using Junos Space Network Director or OpenClos.

CLI-Equivalent Configuration

The following commands show the resulting configuration created by the Network Director Layer 3 Fabric wizard (or OpenClos). This example is for the first leaf device (Leaf-0):

### System configuration ###
set system host-name cloud-saas-leaf-0
set system time-zone America/Los_Angeles
set system root-authentication encrypted-password [##hash##]
set system services ssh root-login allow
set system services ssh max-sessions-per-connection 32
set system services netconf ssh
set system syslog user * any emergency
set system syslog file messages any notice
set system syslog file messages authorization info
set system syslog file interactive-commands interactive-commands any
set system syslog file default-log-messages any any
set system syslog file default-log-messages match "(requested 'commit' operation) | (copying configuration to juniper.save) | (commit complete) | ifAdminStatus | (FRU power) | (FRU removal) | (FRU insertion) | (link UP) | transitioned | Transferred | transfer-file | (license add) | (license delete) | (package -X update) | (package -X delete) | (FRU Online) | (FRU Offline) | (plugged in) | (unplugged) | QF_NODE | QF_SERVER_NODE_GROUP | QF_INTERCONNECT | QF_DIRECTOR | QF_NETWORK_NODE_GROUP | (Master Unchanged, Members Changed) | (Master Changed, Members Changed) | (Master Detected, Members Changed) | (vc add) | (vc delete) | (Master detected) | (Master changed) | (Backup detected) | (Backup changed) | (interface vcp-)"
set system syslog file default-log-messages structured-data
set system extensions providers juniper license-type juniper deployment-scope commercial
set system extensions providers chef license-type juniper deployment-scope commercial
set system processes dhcp-service traceoptions file dhcp_logfile
set system processes dhcp-service traceoptions file size 10m
set system processes dhcp-service traceoptions level all
set system processes dhcp-service traceoptions flag all
set system processes app-engine-virtual-machine-management-service traceoptions level notice
set system processes app-engine-virtual-machine-management-service traceoptions flag all
### Leaf-to-spine interfaces ###
set interfaces et-0/0/48 mtu 9216
set interfaces et-0/0/48 unit 0 description facing_cloud-saas-spine-0
set interfaces et-0/0/48 unit 0 family inet mtu 9000
set interfaces et-0/0/48 unit 0 family inet address 192.168.11.1/31
set interfaces et-0/0/49 mtu 9216
set interfaces et-0/0/49 unit 0 description facing_cloud-saas-spine-1
set interfaces et-0/0/49 unit 0 family inet mtu 9000
set interfaces et-0/0/49 unit 0 family inet address 192.168.11.17/31
set interfaces et-0/0/50 mtu 9216
set interfaces et-0/0/50 unit 0 description facing_cloud-saas-spine-2
set interfaces et-0/0/50 unit 0 family inet mtu 9000
set interfaces et-0/0/50 unit 0 family inet address 192.168.11.33/31
set interfaces et-0/0/51 mtu 9216
set interfaces et-0/0/51 unit 0 description facing_cloud-saas-spine-3
set interfaces et-0/0/51 unit 0 family inet mtu 9000
set interfaces et-0/0/51 unit 0 family inet address 192.168.11.49/31
### Server-facing interfaces ###
set interfaces xe-0/0/0 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/0 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/1 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/1 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/2 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/2 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/3 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/3 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/4 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/4 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/5 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/5 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/6 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/6 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/7 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/7 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/8 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/8 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/9 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/9 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/10 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/10 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/11 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/11 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/12 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/12 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/13 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/13 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/14 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/14 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/15 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/15 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/16 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/16 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/17 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/17 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/18 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/18 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/19 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/19 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/20 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/20 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/21 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/21 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/22 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/22 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/23 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/23 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/24 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/24 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/25 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/25 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/26 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/26 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/27 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/27 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/28 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/28 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/29 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/29 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/30 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/30 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/31 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/31 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/32 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/32 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/33 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/33 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/34 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/34 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/35 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/35 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/36 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/36 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/37 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/37 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/38 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/38 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/39 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/39 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/40 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/40 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/41 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/41 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/42 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/42 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/43 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/43 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/44 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/44 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/45 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/45 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/46 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/46 unit 0 family ethernet-switching vlan members SERVER
set interfaces xe-0/0/47 unit 0 family ethernet-switching interface-mode access
set interfaces xe-0/0/47 unit 0 family ethernet-switching vlan members SERVER
### Loopback and management interfaces ###
set interfaces lo0 unit 0 family inet address 10.0.16.1/32
set interfaces vme unit 0 family inet address 10.94.47.5/28
### IRB for the local VLAN "SERVERS" ###
set interfaces irb mtu 9216
set interfaces irb unit 1 description LOCAL_SERVERS
set interfaces irb unit 1 family inet mtu 9000
set interfaces irb unit 1 family inet address 172.16.64.1/27
### VLANs ###
set vlans SERVER vlan-id 1
set vlans SERVER l3-interface irb.1
### Static routes & routing options ###
set routing-options static route 10.94.63.252/32 next-hop 10.94.47.14
set routing-options static route 10.94.63.253/32 next-hop 10.94.47.14
set routing-options static route 0.0.0.0/0 next-hop 10.94.47.14
set routing-options forwarding-table export PFE-LB
set routing-options router-id 10.0.16.1
set routing-options autonomous-system 420006000
### BGP configuration ###
set protocols bgp log-updown
set protocols bgp import bgp-clos-in
set protocols bgp export bgp-clos-out
set protocols bgp graceful-restart
set protocols bgp group CLOS type external
set protocols bgp group CLOS mtu-discovery
set protocols bgp group CLOS bfd-liveness-detection minimum-interval 250
set protocols bgp group CLOS bfd-liveness-detection multiplier 3
set protocols bgp group CLOS bfd-liveness-detection session-mode single-hop
set protocols bgp group CLOS multipath multiple-as
set protocols bgp group CLOS neighbor 192.168.11.0 peer-as 420005000
set protocols bgp group CLOS neighbor 192.168.11.16 peer-as 420005001
set protocols bgp group CLOS neighbor 192.168.11.32 peer-as 420005002
set protocols bgp group CLOS neighbor 192.168.11.48 peer-as 420005003
### Routing policy ###
set policy-options policy-statement PFE-LB then load-balance per-packet
set policy-options policy-statement bgp-clos-in term loopbacks from route-filter 10.0.16.0/28 orlonger
set policy-options policy-statement bgp-clos-in term loopbacks then accept
set policy-options policy-statement bgp-clos-in term server-L3-gw from route-filter 172.16.64.0/24 orlonger
set policy-options policy-statement bgp-clos-in term server-L3-gw then accept
set policy-options policy-statement bgp-clos-in term reject then reject
set policy-options policy-statement bgp-clos-out term loopback from protocol direct
set policy-options policy-statement bgp-clos-out term loopback from route-filter 10.0.16.1/32 orlonger
set policy-options policy-statement bgp-clos-out term loopback then next-hop self
set policy-options policy-statement bgp-clos-out term loopback then accept
set policy-options policy-statement bgp-clos-out term server-L3-gw from protocol direct
set policy-options policy-statement bgp-clos-out term server-L3-gw from route-filter 172.16.64.1/27 orlonger
set policy-options policy-statement bgp-clos-out term server-L3-gw then next-hop self
set policy-options policy-statement bgp-clos-out term server-L3-gw then accept
### LLDP ###
set protocols lldp interface all
### SNMP and event-options ###
set snmp community public authorization read-write
set snmp trap-group openclos_trap_group
set snmp trap-group networkdirector_trap_group version v2
set snmp trap-group networkdirector_trap_group destination-port 10162
set snmp trap-group networkdirector_trap_group categories authentication
set snmp trap-group networkdirector_trap_group categories link
set snmp trap-group networkdirector_trap_group categories services
set snmp trap-group networkdirector_trap_group targets 10.94.63.253
set snmp trap-group space targets 10.94.63.252
set event-options policy target_add_test events snmpd_trap_target_add_notice
set event-options policy target_add_test then raise-trap
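After this configuration is committed, the EBGP sessions toward the spine devices, their BFD sessions, and the routes learned over them can be checked with standard Junos operational commands. Output is omitted here because it varies by deployment:

```
user@Leaf-0> show bgp summary
user@Leaf-0> show bfd session
user@Leaf-0> show route protocol bgp
```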

Configuring Additional Leaf Device Elements

CLI Quick Configuration

To quickly configure the additional elements for the leaf devices, enter the following configuration statements on each device:

Note

The configuration shown here applies to device Leaf-0.

[edit]
set interfaces et-0/0/48 unit 0 family inet6 mtu 9000
set interfaces et-0/0/48 unit 0 family inet6 address 2001:db8:2001:1::1/126
set interfaces et-0/0/49 unit 0 family inet6 mtu 9000
set interfaces et-0/0/49 unit 0 family inet6 address 2001:db8:2001:2::1/126
set interfaces et-0/0/50 unit 0 family inet6 mtu 9000
set interfaces et-0/0/50 unit 0 family inet6 address 2001:db8:2001:3::1/126
set interfaces et-0/0/51 unit 0 family inet6 mtu 9000
set interfaces et-0/0/51 unit 0 family inet6 address 2001:db8:2001:4::1/126
set protocols bgp group CLOS-IPV6 bfd-liveness-detection minimum-interval 250
set protocols bgp group CLOS-IPV6 bfd-liveness-detection multiplier 3
set protocols bgp group CLOS-IPV6 multipath multiple-as
set protocols bgp group CLOS-IPV6 neighbor 2001:db8:2001:1::2 peer-as 420005000
set protocols bgp group CLOS-IPV6 neighbor 2001:db8:2001:2::2 peer-as 420005001
set protocols bgp group CLOS-IPV6 neighbor 2001:db8:2001:3::2 peer-as 420005002
set protocols bgp group CLOS-IPV6 neighbor 2001:db8:2001:4::2 peer-as 420005003
set policy-options policy-statement bgp-clos-in term loopbacks from route-filter 10.0.22.0/28 orlonger
set policy-options policy-statement bgp-clos-in term loopbacks from route-filter 10.0.23.0/28 orlonger
set routing-options graceful-restart

Step-by-Step Procedure

To configure the additional elements for the leaf devices:

  1. Configure IPv6 on the spine-facing interfaces:
    [edit]
    user@Leaf-0# set interfaces et-0/0/48 unit 0 family inet6 mtu 9000
    user@Leaf-0# set interfaces et-0/0/48 unit 0 family inet6 address 2001:db8:2001:1::1/126
    user@Leaf-0# set interfaces et-0/0/49 unit 0 family inet6 mtu 9000
    user@Leaf-0# set interfaces et-0/0/49 unit 0 family inet6 address 2001:db8:2001:2::1/126
    user@Leaf-0# set interfaces et-0/0/50 unit 0 family inet6 mtu 9000
    user@Leaf-0# set interfaces et-0/0/50 unit 0 family inet6 address 2001:db8:2001:3::1/126
    user@Leaf-0# set interfaces et-0/0/51 unit 0 family inet6 mtu 9000
    user@Leaf-0# set interfaces et-0/0/51 unit 0 family inet6 address 2001:db8:2001:4::1/126
  2. Configure IPv6 EBGP sessions with each spine device:
    [edit]
    user@Leaf-0# set protocols bgp group CLOS-IPV6 bfd-liveness-detection minimum-interval 250
    user@Leaf-0# set protocols bgp group CLOS-IPV6 bfd-liveness-detection multiplier 3
    user@Leaf-0# set protocols bgp group CLOS-IPV6 multipath multiple-as
    user@Leaf-0# set protocols bgp group CLOS-IPV6 neighbor 2001:db8:2001:1::2 peer-as 420005000
    user@Leaf-0# set protocols bgp group CLOS-IPV6 neighbor 2001:db8:2001:2::2 peer-as 420005001
    user@Leaf-0# set protocols bgp group CLOS-IPV6 neighbor 2001:db8:2001:3::2 peer-as 420005002
    user@Leaf-0# set protocols bgp group CLOS-IPV6 neighbor 2001:db8:2001:4::2 peer-as 420005003
  3. Configure additional routing policy elements to enable reachability to the loopback interfaces of the fabric and core devices:
    [edit]
    user@Leaf-0# set policy-options policy-statement bgp-clos-in term loopbacks from route-filter 10.0.22.0/28 orlonger
    user@Leaf-0# set policy-options policy-statement bgp-clos-in term loopbacks from route-filter 10.0.23.0/28 orlonger
  4. Configure graceful restart globally:
    [edit]
    user@Leaf-0# set routing-options graceful-restart

Configuring Server Access Options on Leaf Devices

Step-by-Step Procedure

The SaaS solution offers three methods for server connectivity, as shown in Figure 2.

Figure 2: SaaS Server Access Options

Leaf devices can be configured to support server connectivity in three ways:

  • Anycast—Using a Layer 3 interface, with a BGP session between the server and the physical interface of the leaf device.

  • Unicast—Using a Layer 2 interface and VLAN, with no BGP session between the server and the leaf device. Servers use the IRB interface of the leaf device as their default gateway.

  • Hybrid—Using a Layer 2 interface and VLAN, with a BGP session between the server and the IRB interface of the leaf device.

Anycast Server Access Option

This method uses a Layer 3 interface, with a BGP session between the server and the physical interface of the leaf device.

To configure the anycast server access option:

  1. Configure the server-facing interfaces:
    [edit]
    user@Leaf-0# set interfaces xe-0/0/0 unit 0 family inet address 172.16.71.2/30
    user@Leaf-0# set interfaces xe-0/0/1 unit 0 family inet address 172.16.72.2/30
    user@Leaf-0# set interfaces xe-0/0/46 unit 0 family inet6 address 2001:db8:2001:92::2/126
    user@Leaf-0# set interfaces xe-0/0/47 unit 0 family inet6 address 2001:db8:2001:93::2/126
  2. Configure EBGP sessions with the servers:
    [edit]
    user@Leaf-0# set protocols bgp group Anycast bfd-liveness-detection minimum-interval 250
    user@Leaf-0# set protocols bgp group Anycast multipath multiple-as
    user@Leaf-0# set protocols bgp group Anycast allow all ## enables BGP autoprovisioning for additional connections
    user@Leaf-0# set protocols bgp group Anycast neighbor 172.16.71.1 local-address 172.16.71.2
    user@Leaf-0# set protocols bgp group Anycast neighbor 172.16.71.1 peer-as 420008501
    user@Leaf-0# set protocols bgp group Anycast neighbor 172.16.72.1 local-address 172.16.72.2
    user@Leaf-0# set protocols bgp group Anycast neighbor 172.16.72.1 peer-as 420008502
    user@Leaf-0# set protocols bgp group Anycast-IPV6 bfd-liveness-detection minimum-interval 250
    user@Leaf-0# set protocols bgp group Anycast-IPV6 multipath multiple-as
    user@Leaf-0# set protocols bgp group Anycast-IPV6 allow all ## enables BGP autoprovisioning for additional connections
    user@Leaf-0# set protocols bgp group Anycast-IPV6 neighbor 2001:db8:2001:92::1 peer-as 420008000
    user@Leaf-0# set protocols bgp group Anycast-IPV6 neighbor 2001:db8:2001:93::1 peer-as 420008001

Step-by-Step Procedure

Unicast Server Access Option

This method uses a Layer 2 interface and VLAN, with no BGP session between the server and the leaf device. Servers use the IRB interface of the leaf device as their default gateway.

To configure the unicast server access option:

  1. Configure the Layer 2 server-facing interfaces, and associate the interfaces to VLAN SERVER:
    [edit]
    user@Leaf-0# set interfaces xe-0/0/0 unit 0 family ethernet-switching interface-mode access
    user@Leaf-0# set interfaces xe-0/0/0 unit 0 family ethernet-switching vlan members SERVER
    user@Leaf-0# set interfaces xe-0/0/1 unit 0 family ethernet-switching interface-mode access
    user@Leaf-0# set interfaces xe-0/0/1 unit 0 family ethernet-switching vlan members SERVER
  2. Configure an IRB interface to act as the default gateway for the servers:
    [edit]
    user@Leaf-0# set interfaces irb mtu 9216
    user@Leaf-0# set interfaces irb unit 1 description LOCAL_SERVERS
    user@Leaf-0# set interfaces irb unit 1 family inet mtu 9000
    user@Leaf-0# set interfaces irb unit 1 family inet address 172.16.64.1/27
  3. Configure VLAN SERVER to aggregate the server-facing Layer 2 interfaces and associate them with the IRB interface:
    [edit]
    user@Leaf-0# set vlans SERVER vlan-id 1
    user@Leaf-0# set vlans SERVER l3-interface irb.1

Step-by-Step Procedure

Hybrid Server Access Option

This method uses a Layer 2 interface and VLAN, with a BGP session between the server and the IRB interface of the leaf device.

To configure the hybrid server access option:

  1. Configure the Layer 2 server-facing interfaces, and associate the interfaces to VLAN hybrid:
    [edit]
    user@Leaf-0# set interfaces xe-0/0/0 unit 0 family ethernet-switching vlan members hybrid
    user@Leaf-0# set interfaces xe-0/0/1 unit 0 family ethernet-switching vlan members hybrid
  2. Configure an IRB interface to act as the peering point for BGP connections with the servers:
    [edit]
    user@Leaf-0# set interfaces irb mtu 9216
    user@Leaf-0# set interfaces irb unit 100 description Hybrid
    user@Leaf-0# set interfaces irb unit 100 family inet mtu 9000
    user@Leaf-0# set interfaces irb unit 100 family inet address 172.16.73.2/24
  3. Configure VLAN hybrid to aggregate the server-facing Layer 2 interfaces, and associate them with the IRB interface:
    [edit]
    user@Leaf-0# set vlans hybrid vlan-id 100
    user@Leaf-0# set vlans hybrid l3-interface irb.100
  4. Configure EBGP (with BFD) sessions with the servers:
    [edit]
    user@Leaf-0# set protocols bgp group Hybrid bfd-liveness-detection minimum-interval 350
    user@Leaf-0# set protocols bgp group Hybrid bfd-liveness-detection multiplier 3
    user@Leaf-0# set protocols bgp group Hybrid bfd-liveness-detection session-mode single-hop
    user@Leaf-0# set protocols bgp group Hybrid multipath multiple-as
    user@Leaf-0# set protocols bgp group Hybrid allow all ## enables BGP autoprovisioning for additional connections
    user@Leaf-0# set protocols bgp group Hybrid neighbor 172.16.73.1 local-address 172.16.73.2
    user@Leaf-0# set protocols bgp group Hybrid neighbor 172.16.73.1 peer-as 420006503
    user@Leaf-0# set protocols bgp group Hybrid neighbor 172.16.73.3 local-address 172.16.73.2
    user@Leaf-0# set protocols bgp group Hybrid neighbor 172.16.73.3 peer-as 420006504

Configuring Server Load Balancing Using Anycast

Step-by-Step Procedure

In data centers, a common way to increase the capacity and availability of applications and services is to duplicate them across multiple servers and assign all of those servers the same address, taking advantage of anycast’s inherent load-balancing capability. Applications can be kept separate by running each application on a different group of servers and assigning a unique anycast address to each server group.

Figure 3: SaaS Anycast Server Load Balancing

In the example shown in Figure 3, the leaf device has multiple Layer 3 interfaces connected to multiple servers. Each server has a separate BGP session established with the leaf device, and all servers are using the same anycast IP address.

To support server load balancing for anycast traffic, leaf devices require three configuration elements:

  • A per-flow balancing policy applied to the forwarding table (configured earlier in this example)

  • ECMP, with resilient hashing

  • BGP, using multipath
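The per-flow load-balancing policy referenced in the first bullet is the PFE-LB policy from the baseline Leaf-0 configuration, which is applied as an export policy on the forwarding table:

```
set policy-options policy-statement PFE-LB then load-balance per-packet
set routing-options forwarding-table export PFE-LB
```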

To configure load balancing across multiple servers:

  1. Configure the server-facing interfaces:
    [edit]
    user@Leaf-0# set interfaces xe-0/0/2 unit 0 family inet address 172.16.152.1/30
    user@Leaf-0# set interfaces xe-0/0/3 unit 0 family inet address 172.16.153.1/30
    user@Leaf-0# set interfaces xe-0/0/4 unit 0 family inet address 172.16.154.1/30
    user@Leaf-0# set interfaces xe-0/0/5 unit 0 family inet address 172.16.155.1/30
    user@Leaf-0# set interfaces xe-0/0/6 unit 0 family inet address 172.16.156.1/30
    user@Leaf-0# set interfaces xe-0/0/7 unit 0 family inet address 172.16.157.1/30
    user@Leaf-0# set interfaces xe-0/0/8 unit 0 family inet address 172.16.158.1/30
    user@Leaf-0# set interfaces xe-0/0/9 unit 0 family inet address 172.16.159.1/30
    user@Leaf-0# set interfaces xe-0/0/10 unit 0 family inet address 172.16.160.1/30
    user@Leaf-0# set interfaces xe-0/0/11 unit 0 family inet address 172.16.161.1/30
    user@Leaf-0# set interfaces xe-0/0/12 unit 0 family inet address 172.16.162.1/30
    user@Leaf-0# set interfaces xe-0/0/13 unit 0 family inet address 172.16.163.1/30
    user@Leaf-0# set interfaces xe-0/0/14 unit 0 family inet address 172.16.164.1/30
    user@Leaf-0# set interfaces xe-0/0/15 unit 0 family inet address 172.16.165.1/30
    user@Leaf-0# set interfaces xe-0/0/16 unit 0 family inet address 172.16.166.1/30
    user@Leaf-0# set interfaces xe-0/0/17 unit 0 family inet address 172.16.167.1/30
    user@Leaf-0# set interfaces xe-0/0/18 unit 0 family inet address 172.16.168.1/30
    user@Leaf-0# set interfaces xe-0/0/19 unit 0 family inet address 172.16.169.1/30
    user@Leaf-0# set interfaces xe-0/0/20 unit 0 family inet address 172.16.170.1/30
    user@Leaf-0# set interfaces xe-0/0/21 unit 0 family inet address 172.16.171.1/30
    user@Leaf-0# set interfaces xe-0/0/22 unit 0 family inet address 172.16.172.1/30
    user@Leaf-0# set interfaces xe-0/0/23 unit 0 family inet address 172.16.173.1/30
    user@Leaf-0# set interfaces xe-0/0/24 unit 0 family inet address 172.16.174.1/30
    user@Leaf-0# set interfaces xe-0/0/25 unit 0 family inet address 172.16.175.1/30
    user@Leaf-0# set interfaces xe-0/0/26 unit 0 family inet address 172.16.176.1/30
    user@Leaf-0# set interfaces xe-0/0/27 unit 0 family inet address 172.16.177.1/30
    user@Leaf-0# set interfaces xe-0/0/28 unit 0 family inet address 172.16.178.1/30
    user@Leaf-0# set interfaces xe-0/0/29 unit 0 family inet address 172.16.179.1/30
    user@Leaf-0# set interfaces xe-0/0/30 unit 0 family inet address 172.16.180.1/30
    user@Leaf-0# set interfaces xe-0/0/31 unit 0 family inet address 172.16.181.1/30
    user@Leaf-0# set interfaces xe-0/0/32 unit 0 family inet address 172.16.182.1/30
    user@Leaf-0# set interfaces xe-0/0/33 unit 0 family inet address 172.16.183.1/30
    user@Leaf-0# set interfaces xe-0/0/34 unit 0 family inet address 172.16.184.1/30
    user@Leaf-0# set interfaces xe-0/0/35 unit 0 family inet address 172.16.185.1/30
    user@Leaf-0# set interfaces xe-0/0/36 unit 0 family inet address 172.16.186.1/30
    user@Leaf-0# set interfaces xe-0/0/37 unit 0 family inet address 172.16.187.1/30
    user@Leaf-0# set interfaces xe-0/0/38 unit 0 family inet address 172.16.188.1/30
    user@Leaf-0# set interfaces xe-0/0/39 unit 0 family inet address 172.16.189.1/30
    user@Leaf-0# set interfaces xe-0/0/40 unit 0 family inet address 172.16.190.1/30
    user@Leaf-0# set interfaces xe-0/0/41 unit 0 family inet address 172.16.191.1/30
    user@Leaf-0# set interfaces xe-0/0/42 unit 0 family inet address 172.16.192.1/30
    user@Leaf-0# set interfaces xe-0/0/43 unit 0 family inet address 172.16.193.1/30
    user@Leaf-0# set interfaces xe-0/0/44 unit 0 family inet address 172.16.194.1/30
    user@Leaf-0# set interfaces xe-0/0/45 unit 0 family inet address 172.16.195.1/30
  2. Configure ECMP:
    [edit]
    user@Leaf-0# set chassis maximum-ecmp 64
    user@Leaf-0# set forwarding-options enhanced-hash-key ecmp-resilient-hash
  3. Configure EBGP (with BFD) sessions with the servers:
    [edit]
    user@Leaf-0# set protocols bgp group LB-Anycast bfd-liveness-detection minimum-interval 250
    user@Leaf-0# set protocols bgp group LB-Anycast bfd-liveness-detection session-mode single-hop
    user@Leaf-0# set protocols bgp group LB-Anycast multipath
    user@Leaf-0# set protocols bgp group LB-Anycast allow all ## enables BGP autoprovisioning for additional connections
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.152.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.153.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.154.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.155.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.156.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.157.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.158.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.159.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.160.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.161.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.162.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.163.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.164.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.165.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.166.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.167.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.168.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.169.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.170.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.171.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.172.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.173.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.174.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.175.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.176.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.177.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.178.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.179.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.180.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.181.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.182.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.183.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.184.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.185.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.186.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.187.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.188.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.189.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.190.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.191.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.192.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.193.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.194.2 peer-as 420006700
    user@Leaf-0# set protocols bgp group LB-Anycast neighbor 172.16.195.2 peer-as 420006700
  4. Configure routing policies to advertise the anycast IPv4 address (10.1.1.1) and anycast IPv6 address (2001:db8:2000::):
    [edit]
    user@Leaf-0# set policy-options policy-statement bgp-clos-out term LB-Anycast4-route from protocol bgp
    user@Leaf-0# set policy-options policy-statement bgp-clos-out term LB-Anycast4-route from route-filter 10.1.1.1/24 exact
    user@Leaf-0# set policy-options policy-statement bgp-clos-out term LB-Anycast4-route then accept
    user@Leaf-0# set policy-options policy-statement bgp-clos-out term LB-Anycast6-route from protocol bgp
    user@Leaf-0# set policy-options policy-statement bgp-clos-out term LB-Anycast6-route from route-filter 2001:db8:2000::/64 exact
    user@Leaf-0# set policy-options policy-statement bgp-clos-out term LB-Anycast6-route then accept
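The server-facing addressing in step 1 and the BGP neighbors in step 3 follow one pattern: port xe-0/0/N on Leaf-0 uses the /30 subnet 172.16.(150+N).0, with the leaf on the first host address and the server BGP peer on the second. As a sketch only (the `leaf_link` helper is illustrative, not part of the solution), the equivalent set commands can be regenerated like this:

```python
import ipaddress

def leaf_link(port):
    """Addresses for server-facing port xe-0/0/<port> on Leaf-0.

    Per the pattern above, port N uses 172.16.(150 + N).0/30:
    the leaf takes the first host address, the server the second.
    """
    net = ipaddress.ip_network(f"172.16.{150 + port}.0/30")
    leaf_ip, server_ip = list(net.hosts())[:2]
    return str(leaf_ip), str(server_ip)

# Regenerate the set commands for ports 12 through 45
for port in range(12, 46):
    leaf_ip, server_ip = leaf_link(port)
    print(f"set interfaces xe-0/0/{port} unit 0 family inet address {leaf_ip}/30")
    print(f"set protocols bgp group LB-Anycast neighbor {server_ip} peer-as 420006700")
```

A generator like this is also a quick way to audit a long listing for a mistyped octet before committing.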

Configuring Spine Devices for the IP Fabric

CLI Quick Configuration

As noted earlier, Juniper Networks provides tools (OpenClos and Junos Space Network Director) to help automate the creation of spine-and-leaf IP fabrics for SaaS environments.

For this example, the main configuration elements for the spine devices were created using Network Director. For more information on using Network Director or OpenClos for this solution, see Configuring an IP Fabric using Junos Space Network Director or OpenClos.

CLI-Equivalent Configuration

The following commands show the resulting configuration created by the Network Director Layer 3 Fabric wizard (or OpenClos). This example is for the first spine device (Spine-0):

### System configuration ###
set system host-name cloud-saas-spine-0
set system time-zone America/Los_Angeles
set system root-authentication encrypted-password [##hash##]
set system services ssh root-login allow
set system services ssh max-sessions-per-connection 32
set system services netconf ssh
set system syslog user * any emergency
set system syslog file messages any notice
set system syslog file messages authorization info
set system syslog file interactive-commands interactive-commands any
set system syslog file default-log-messages any any
set system syslog file default-log-messages match "(requested 'commit' operation) | (copying configuration to juniper.save) | (commit complete) | ifAdminStatus | (FRU power) | (FRU removal) | (FRU insertion) | (link UP) | transitioned | Transferred | transfer-file | (license add) | (license delete) | (package -X update) | (package -X delete) | (FRU Online) | (FRU Offline) | (plugged in) | (unplugged) | QF_NODE | QF_SERVER_NODE_GROUP | QF_INTERCONNECT | QF_DIRECTOR | QF_NETWORK_NODE_GROUP | (Master Unchanged, Members Changed) | (Master Changed, Members Changed) | (Master Detected, Members Changed) | (vc add) | (vc delete) | (Master detected) | (Master changed) | (Backup detected) | (Backup changed) | (interface vcp-)"
set system syslog file default-log-messages structured-data
set system extensions providers juniper license-type juniper deployment-scope commercial
set system extensions providers chef license-type juniper deployment-scope commercial
set system processes dhcp-service traceoptions file dhcp_logfile
set system processes dhcp-service traceoptions file size 10m
set system processes dhcp-service traceoptions level all
set system processes dhcp-service traceoptions flag all
set system processes app-engine-virtual-machine-management-service traceoptions level notice
set system processes app-engine-virtual-machine-management-service traceoptions flag all
### Spine-to-leaf interfaces ###
set interfaces et-0/0/0 mtu 9216
set interfaces et-0/0/0 unit 0 description facing_cloud-saas-leaf-0
set interfaces et-0/0/0 unit 0 family inet mtu 9000
set interfaces et-0/0/0 unit 0 family inet address 192.168.11.0/31
set interfaces et-0/0/1 mtu 9216
set interfaces et-0/0/1 unit 0 description facing_cloud-saas-leaf-1
set interfaces et-0/0/1 unit 0 family inet mtu 9000
set interfaces et-0/0/1 unit 0 family inet address 192.168.11.2/31
set interfaces et-0/0/2 mtu 9216
set interfaces et-0/0/2 unit 0 description facing_cloud-saas-leaf-2
set interfaces et-0/0/2 unit 0 family inet mtu 9000
set interfaces et-0/0/2 unit 0 family inet address 192.168.11.4/31
set interfaces et-0/0/3 mtu 9216
set interfaces et-0/0/3 unit 0 description facing_cloud-saas-leaf-3
set interfaces et-0/0/3 unit 0 family inet mtu 9000
set interfaces et-0/0/3 unit 0 family inet address 192.168.11.6/31
set interfaces et-0/0/4 mtu 9216
set interfaces et-0/0/4 unit 0 description facing_cloud-saas-leaf-4
set interfaces et-0/0/4 unit 0 family inet mtu 9000
set interfaces et-0/0/4 unit 0 family inet address 192.168.11.8/31
set interfaces et-0/0/5 mtu 9216
set interfaces et-0/0/5 unit 0 description facing_cloud-saas-leaf-5
set interfaces et-0/0/5 unit 0 family inet mtu 9000
set interfaces et-0/0/5 unit 0 family inet address 192.168.11.10/31
### Loopback and management interfaces ###
set interfaces lo0 unit 0 family inet address 10.0.16.9/32
set interfaces vme unit 0 family inet address 10.94.47.1/28
### Static routes & routing options ###
set routing-options static route 10.94.63.252/32 next-hop 10.94.47.14
set routing-options static route 10.94.63.253/32 next-hop 10.94.47.14
set routing-options static route 0.0.0.0/0 next-hop 10.94.47.14
set routing-options forwarding-table export PFE-LB
set routing-options router-id 10.0.16.9
set routing-options autonomous-system 420005000
### BGP configuration ###
set protocols bgp log-updown
set protocols bgp import bgp-clos-in
set protocols bgp export bgp-clos-out
set protocols bgp graceful-restart
set protocols bgp group CLOS type external
set protocols bgp group CLOS mtu-discovery
set protocols bgp group CLOS bfd-liveness-detection minimum-interval 250
set protocols bgp group CLOS bfd-liveness-detection multiplier 3
set protocols bgp group CLOS bfd-liveness-detection session-mode single-hop
set protocols bgp group CLOS multipath multiple-as
set protocols bgp group CLOS neighbor 192.168.11.1 peer-as 420006000
set protocols bgp group CLOS neighbor 192.168.11.3 peer-as 420006001
set protocols bgp group CLOS neighbor 192.168.11.5 peer-as 420006002
set protocols bgp group CLOS neighbor 192.168.11.7 peer-as 420006003
set protocols bgp group CLOS neighbor 192.168.11.9 peer-as 420006004
set protocols bgp group CLOS neighbor 192.168.11.11 peer-as 420006005
### Routing policy ###
set policy-options policy-statement PFE-LB then load-balance per-packet
set policy-options policy-statement bgp-clos-in term loopbacks from route-filter 10.0.16.0/28 orlonger
set policy-options policy-statement bgp-clos-in term loopbacks then accept
set policy-options policy-statement bgp-clos-in term server-L3-gw from route-filter 172.16.64.0/24 orlonger
set policy-options policy-statement bgp-clos-in term server-L3-gw then accept
set policy-options policy-statement bgp-clos-in term reject then reject
set policy-options policy-statement bgp-clos-out term loopback from protocol direct
set policy-options policy-statement bgp-clos-out term loopback from route-filter 10.0.16.0/28 orlonger
set policy-options policy-statement bgp-clos-out term loopback then next-hop self
set policy-options policy-statement bgp-clos-out term loopback then accept
set policy-options policy-statement bgp-clos-out term server-L3-gw from protocol direct
set policy-options policy-statement bgp-clos-out term server-L3-gw from route-filter 172.16.64.0/24 orlonger
set policy-options policy-statement bgp-clos-out term server-L3-gw then next-hop self
set policy-options policy-statement bgp-clos-out term server-L3-gw then accept
### LLDP ###
set protocols lldp interface all
### SNMP and event-options ###
set snmp community public authorization read-write
set snmp trap-group networkdirector_trap_group version v2
set snmp trap-group networkdirector_trap_group destination-port 10162
set snmp trap-group networkdirector_trap_group categories authentication
set snmp trap-group networkdirector_trap_group categories link
set snmp trap-group networkdirector_trap_group categories services
set snmp trap-group networkdirector_trap_group targets 10.94.63.253
set snmp trap-group space targets 10.94.63.252
set event-options policy target_add_test events snmpd_trap_target_add_notice
set event-options policy target_add_test then raise-trap
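The spine-to-leaf links above use /31 point-to-point addressing: the link to leaf k takes 192.168.11.(2k)/31, with the spine on the even address, the leaf on the odd address, and leaf k assigned AS 420006000 + k. A small sketch (the `spine_leaf_link` helper is illustrative only) that derives these values from the link index:

```python
import ipaddress

BASE = ipaddress.ip_address("192.168.11.0")

def spine_leaf_link(leaf):
    """Per-link /31 addressing toward leaf <leaf> (0-5), per the config above.

    Link k uses 192.168.11.(2k)/31: the spine takes the even address,
    the leaf the odd one, and leaf k's AS number is 420006000 + k.
    """
    spine_ip = BASE + 2 * leaf
    leaf_ip = BASE + 2 * leaf + 1
    return str(spine_ip), str(leaf_ip), 420006000 + leaf
```

Using /31 subnets (rather than /30) halves the address consumption on point-to-point fabric links, which matters at this scale.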

Configuring Additional Spine Device Elements

CLI Quick Configuration

To quickly configure additional elements for the spine devices, enter the following configuration statements on each device:

Note

The configuration shown here applies to device Spine-0.

[edit]
set interfaces et-0/0/0 unit 0 family inet6 mtu 9000
set interfaces et-0/0/0 unit 0 family inet6 address 2001:db8:2001:1::2/126
set interfaces et-0/0/1 unit 0 family inet6 mtu 9000
set interfaces et-0/0/1 unit 0 family inet6 address 2001:db8:2001:5::2/126
set interfaces et-0/0/2 unit 0 family inet6 mtu 9000
set interfaces et-0/0/2 unit 0 family inet6 address 2001:db8:2001:9::2/126
set interfaces et-0/0/3 unit 0 family inet6 mtu 9000
set interfaces et-0/0/3 unit 0 family inet6 address 2001:db8:2001:13::2/126
set interfaces et-0/2/0 unit 0 family inet address 192.168.13.13/30
set interfaces et-0/2/0 unit 0 family inet6 address 2001:db8:2001:24::1/126
set interfaces et-0/2/1 unit 0 family inet address 192.168.13.9/30
set interfaces et-0/2/1 unit 0 family inet6 address 2001:db8:2001:23::1/126
set interfaces et-0/2/2 unit 0 family inet address 192.168.13.5/30
set interfaces et-0/2/2 unit 0 family inet6 address 2001:db8:2001:22::1/126
set interfaces et-0/2/3 unit 0 family inet address 192.168.13.1/30
set interfaces et-0/2/3 unit 0 family inet6 address 2001:db8:2001:21::1/126
set protocols bgp group CLOS-IPV6 bfd-liveness-detection minimum-interval 250
set protocols bgp group CLOS-IPV6 bfd-liveness-detection multiplier 3
set protocols bgp group CLOS-IPV6 multipath multiple-as
set protocols bgp group CLOS-IPV6 neighbor 2001:db8:2001:1::1 peer-as 420006000
set protocols bgp group CLOS-IPV6 neighbor 2001:db8:2001:5::1 peer-as 420006001
set protocols bgp group CLOS-IPV6 neighbor 2001:db8:2001:9::1 peer-as 420006002
set protocols bgp group CLOS-IPV6 neighbor 2001:db8:2001:13::1 peer-as 420006003
set protocols bgp group FABRIC bfd-liveness-detection minimum-interval 250
set protocols bgp group FABRIC multipath multiple-as
set protocols bgp group FABRIC neighbor 192.168.13.2 peer-as 420005501
set protocols bgp group FABRIC neighbor 192.168.13.6 peer-as 420005502
set protocols bgp group FABRIC neighbor 192.168.13.10 peer-as 420005503
set protocols bgp group FABRIC neighbor 192.168.13.14 peer-as 420005504
set protocols bgp group FABRIC-IPV6 bfd-liveness-detection minimum-interval 250
set protocols bgp group FABRIC-IPV6 multipath multiple-as
set protocols bgp group FABRIC-IPV6 neighbor 2001:db8:2001:21::2 peer-as 420005501
set protocols bgp group FABRIC-IPV6 neighbor 2001:db8:2001:22::2 peer-as 420005502
set protocols bgp group FABRIC-IPV6 neighbor 2001:db8:2001:23::2 peer-as 420005503
set protocols bgp group FABRIC-IPV6 neighbor 2001:db8:2001:24::2 peer-as 420005504
set policy-options policy-statement bgp-clos-in term loopbacks from route-filter 10.0.22.0/28 orlonger
set policy-options policy-statement bgp-clos-in term loopbacks from route-filter 10.0.23.0/28 orlonger
set policy-options policy-statement bgp-clos-in term LB-Anycast4-route from protocol bgp
set policy-options policy-statement bgp-clos-in term LB-Anycast4-route from route-filter 10.1.1.1/24 exact
set policy-options policy-statement bgp-clos-in term LB-Anycast4-route then accept
insert policy-options policy-statement bgp-clos-in term LB-Anycast4-route before term reject
set policy-options policy-statement bgp-clos-in term LB-Anycast6-route from protocol bgp
set policy-options policy-statement bgp-clos-in term LB-Anycast6-route from route-filter 2001:db8:2000::/64 exact
set policy-options policy-statement bgp-clos-in term LB-Anycast6-route then accept
insert policy-options policy-statement bgp-clos-in term LB-Anycast6-route before term reject
set chassis maximum-ecmp 64
set forwarding-options enhanced-hash-key ecmp-resilient-hash
set routing-options graceful-restart

Step-by-Step Procedure

To configure the additional elements for the spine devices:

  1. Configure IPv6 on the leaf-facing interfaces:
    [edit]
    user@Spine-0# set interfaces et-0/0/0 unit 0 family inet6 mtu 9000
    user@Spine-0# set interfaces et-0/0/0 unit 0 family inet6 address 2001:db8:2001:1::2/126
    user@Spine-0# set interfaces et-0/0/1 unit 0 family inet6 mtu 9000
    user@Spine-0# set interfaces et-0/0/1 unit 0 family inet6 address 2001:db8:2001:5::2/126
    user@Spine-0# set interfaces et-0/0/2 unit 0 family inet6 mtu 9000
    user@Spine-0# set interfaces et-0/0/2 unit 0 family inet6 address 2001:db8:2001:9::2/126
    user@Spine-0# set interfaces et-0/0/3 unit 0 family inet6 mtu 9000
    user@Spine-0# set interfaces et-0/0/3 unit 0 family inet6 address 2001:db8:2001:13::2/126
  2. Configure IPv4 and IPv6 on the fabric-facing interfaces:
    [edit]
    user@Spine-0# set interfaces et-0/2/0 unit 0 family inet address 192.168.13.13/30
    user@Spine-0# set interfaces et-0/2/0 unit 0 family inet6 address 2001:db8:2001:24::1/126
    user@Spine-0# set interfaces et-0/2/1 unit 0 family inet address 192.168.13.9/30
    user@Spine-0# set interfaces et-0/2/1 unit 0 family inet6 address 2001:db8:2001:23::1/126
    user@Spine-0# set interfaces et-0/2/2 unit 0 family inet address 192.168.13.5/30
    user@Spine-0# set interfaces et-0/2/2 unit 0 family inet6 address 2001:db8:2001:22::1/126
    user@Spine-0# set interfaces et-0/2/3 unit 0 family inet address 192.168.13.1/30
    user@Spine-0# set interfaces et-0/2/3 unit 0 family inet6 address 2001:db8:2001:21::1/126
  3. Configure IPv6 EBGP sessions with each leaf device:
    [edit]
    user@Spine-0# set protocols bgp group CLOS-IPV6 bfd-liveness-detection minimum-interval 250
    user@Spine-0# set protocols bgp group CLOS-IPV6 bfd-liveness-detection multiplier 3
    user@Spine-0# set protocols bgp group CLOS-IPV6 multipath multiple-as
    user@Spine-0# set protocols bgp group CLOS-IPV6 neighbor 2001:db8:2001:1::1 peer-as 420006000
    user@Spine-0# set protocols bgp group CLOS-IPV6 neighbor 2001:db8:2001:5::1 peer-as 420006001
    user@Spine-0# set protocols bgp group CLOS-IPV6 neighbor 2001:db8:2001:9::1 peer-as 420006002
    user@Spine-0# set protocols bgp group CLOS-IPV6 neighbor 2001:db8:2001:13::1 peer-as 420006003
  4. Configure IPv4 and IPv6 EBGP sessions with each fabric device:
    [edit]
    user@Spine-0# set protocols bgp group FABRIC bfd-liveness-detection minimum-interval 250
    user@Spine-0# set protocols bgp group FABRIC multipath multiple-as
    user@Spine-0# set protocols bgp group FABRIC neighbor 192.168.13.2 peer-as 420005501
    user@Spine-0# set protocols bgp group FABRIC neighbor 192.168.13.6 peer-as 420005502
    user@Spine-0# set protocols bgp group FABRIC neighbor 192.168.13.10 peer-as 420005503
    user@Spine-0# set protocols bgp group FABRIC neighbor 192.168.13.14 peer-as 420005504
    user@Spine-0# set protocols bgp group FABRIC-IPV6 bfd-liveness-detection minimum-interval 250
    user@Spine-0# set protocols bgp group FABRIC-IPV6 multipath multiple-as
    user@Spine-0# set protocols bgp group FABRIC-IPV6 neighbor 2001:db8:2001:21::2 peer-as 420005501
    user@Spine-0# set protocols bgp group FABRIC-IPV6 neighbor 2001:db8:2001:22::2 peer-as 420005502
    user@Spine-0# set protocols bgp group FABRIC-IPV6 neighbor 2001:db8:2001:23::2 peer-as 420005503
    user@Spine-0# set protocols bgp group FABRIC-IPV6 neighbor 2001:db8:2001:24::2 peer-as 420005504
  5. Configure additional route filters for the loopbacks term in the bgp-clos-in routing policy to enable reachability to the loopback interfaces of the fabric and core devices:
    [edit]
    user@Spine-0# set policy-options policy-statement bgp-clos-in term loopbacks from route-filter 10.0.22.0/28 orlonger
    user@Spine-0# set policy-options policy-statement bgp-clos-in term loopbacks from route-filter 10.0.23.0/28 orlonger
  6. Configure additional terms for the bgp-clos-in routing policy to accept the anycast IPv4 address (10.1.1.1) and anycast IPv6 address (2001:db8:2000::) advertised by the leaf devices:
    [edit]
    user@Spine-0# set policy-options policy-statement bgp-clos-in term LB-Anycast4-route from protocol bgp
    user@Spine-0# set policy-options policy-statement bgp-clos-in term LB-Anycast4-route from route-filter 10.1.1.1/24 exact
    user@Spine-0# set policy-options policy-statement bgp-clos-in term LB-Anycast4-route then accept
    user@Spine-0# insert policy-options policy-statement bgp-clos-in term LB-Anycast4-route before term reject
    user@Spine-0# set policy-options policy-statement bgp-clos-in term LB-Anycast6-route from protocol bgp
    user@Spine-0# set policy-options policy-statement bgp-clos-in term LB-Anycast6-route from route-filter 2001:db8:2000::/64 exact
    user@Spine-0# set policy-options policy-statement bgp-clos-in term LB-Anycast6-route then accept
    user@Spine-0# insert policy-options policy-statement bgp-clos-in term LB-Anycast6-route before term reject
  7. Configure ECMP:
    [edit]
    user@Spine-0# set chassis maximum-ecmp 64
    user@Spine-0# set forwarding-options enhanced-hash-key ecmp-resilient-hash
  8. Configure graceful restart globally:
    [edit]
    user@Spine-0# set routing-options graceful-restart
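The BFD timers used throughout this example (minimum-interval 250, multiplier 3) bound how quickly a failed link or neighbor is detected: under standard BFD semantics, a session is declared down after the multiplier's worth of consecutive intervals pass with no packet received. A one-line sketch of that arithmetic:

```python
def bfd_detection_time_ms(minimum_interval_ms=250, multiplier=3):
    # Standard BFD semantics: the session is declared down after
    # `multiplier` consecutive intervals elapse with no packet received.
    return minimum_interval_ms * multiplier

print(bfd_detection_time_ms())  # 750 ms worst-case detection with these timers
```

Lowering minimum-interval detects failures faster at the cost of more control-plane load per session; 250 ms x 3 is a common balance for fabrics with many BFD sessions per device.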

Configuring Fabric Devices

CLI Quick Configuration

To quickly configure the fabric devices, enter the following configuration statements on each device:

Note

The configuration shown here applies to device Fabric-0.

[edit]
set groups int-global interfaces <*> mtu 9192
set chassis fpc 0 pic 0 port 0 channel-speed 10g
set interfaces apply-groups int-global
set interfaces et-0/0/20 unit 0 family inet address 192.168.13.2/30
set interfaces et-0/0/20 unit 0 family inet6 address 2001:db8:2001:21::2/126
set interfaces et-0/0/21 unit 0 family inet address 192.168.13.18/30
set interfaces et-0/0/21 unit 0 family inet6 address 2001:db8:2001:25::2/126
set interfaces et-0/0/22 unit 0 family inet address 192.168.13.34/30
set interfaces et-0/0/22 unit 0 family inet6 address 2001:db8:2001:29::2/126
set interfaces et-0/0/23 unit 0 family inet address 192.168.13.50/30
set interfaces et-0/0/23 unit 0 family inet6 address 2001:db8:2001:33::2/126
set interfaces xe-0/0/0:0 unit 0 family inet address 192.168.14.1/30
set interfaces xe-0/0/0:0 unit 0 family inet6 address 2001:db8:2001:51::1/126
set interfaces xe-0/0/0:3 unit 0 family inet address 192.168.14.5/30
set interfaces xe-0/0/0:3 unit 0 family inet6 address 2001:db8:2001:52::1/126
set interfaces lo0 unit 0 family inet address 10.0.22.1/32
set interfaces lo0 unit 0 family inet6 address 2001:db8:2003:1::1/128
set interfaces em0 unit 0 family inet address 10.94.191.64/24
set routing-options static route 10.94.63.252/32 next-hop 10.94.191.254
set routing-options static route 10.94.63.253/32 next-hop 10.94.191.254
set routing-options router-id 10.0.22.1
set routing-options autonomous-system 420005501
set protocols bgp group SPINE bfd-liveness-detection minimum-interval 250
set protocols bgp group SPINE multipath multiple-as
set protocols bgp group SPINE neighbor 192.168.13.1 peer-as 420005000
set protocols bgp group SPINE neighbor 192.168.13.17 peer-as 420005001
set protocols bgp group SPINE neighbor 192.168.13.33 peer-as 420005002
set protocols bgp group SPINE neighbor 192.168.13.49 peer-as 420005003
set protocols bgp group SPINE-IPV6 bfd-liveness-detection minimum-interval 250
set protocols bgp group SPINE-IPV6 multipath multiple-as
set protocols bgp group SPINE-IPV6 neighbor 2001:db8:2001:21::1 peer-as 420005000
set protocols bgp group SPINE-IPV6 neighbor 2001:db8:2001:25::1 peer-as 420005001
set protocols bgp group SPINE-IPV6 neighbor 2001:db8:2001:29::1 peer-as 420005002
set protocols bgp group SPINE-IPV6 neighbor 2001:db8:2001:33::1 peer-as 420005003
set protocols bgp bfd-liveness-detection minimum-interval 250
set protocols bgp group CORE multipath multiple-as
set protocols bgp group CORE neighbor 192.168.14.6 description to-r2-PE
set protocols bgp group CORE neighbor 192.168.14.6 peer-as 420006502
set protocols bgp group CORE neighbor 192.168.14.2 description to-r1-PE
set protocols bgp group CORE neighbor 192.168.14.2 peer-as 420006501
set protocols bgp group CORE-IPV6 bfd-liveness-detection minimum-interval 250
set protocols bgp group CORE-IPV6 multipath multiple-as
set protocols bgp group CORE-IPV6 neighbor 2001:db8:2001:51::2 peer-as 420006501
set protocols bgp group CORE-IPV6 neighbor 2001:db8:2001:52::2 peer-as 420006502
set policy-options policy-statement receive-loopbacks term loopbacks from route-filter 10.0.16.0/28 orlonger
set policy-options policy-statement receive-loopbacks term loopbacks from route-filter 10.0.22.0/28 orlonger
set policy-options policy-statement receive-loopbacks term loopbacks from route-filter 10.0.23.0/28 orlonger
set policy-options policy-statement receive-loopbacks term loopbacks then accept
set policy-options policy-statement advertise-loopbacks term loopback from protocol direct
set policy-options policy-statement advertise-loopbacks term loopback from route-filter 10.0.22.0/28 orlonger
set policy-options policy-statement advertise-loopbacks term loopback then next-hop self
set policy-options policy-statement advertise-loopbacks term loopback then accept
set protocols bgp import receive-loopbacks
set protocols bgp export advertise-loopbacks
set policy-options policy-statement pfe-lb then load-balance per-packet
set routing-options forwarding-table export pfe-lb
set chassis maximum-ecmp 64
set forwarding-options enhanced-hash-key ecmp-resilient-hash
set protocols lldp interface all
set snmp community public authorization read-write
set snmp trap-group networkdirector_trap_group version v2
set snmp trap-group networkdirector_trap_group destination-port 10162
set snmp trap-group networkdirector_trap_group categories authentication
set snmp trap-group networkdirector_trap_group categories link
set snmp trap-group networkdirector_trap_group categories services
set snmp trap-group networkdirector_trap_group targets 10.94.63.253
set snmp trap-group space targets 10.94.63.252
set event-options policy target_add_test events snmpd_trap_target_add_notice
set event-options policy target_add_test then raise-trap
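The spine-facing /30 subnets above (192.168.13.0, .16, .32, .48) step by 16 because, per the addresses shown, each spine device owns a block of four consecutive /30 links (one per fabric device), and Fabric-0 uses the first /30 in each spine's block. A sketch (the `fabric0_spine_link` helper name is illustrative) deriving Fabric-0's peer addresses and peer AS numbers:

```python
import ipaddress

BASE = ipaddress.ip_address("192.168.13.0")

def fabric0_spine_link(spine):
    """Fabric-0's link to spine <spine> (0-3), per the addresses above.

    Each spine owns a block of 16 addresses (four /30 links); Fabric-0
    uses the first /30 in each block, with the spine on .1 and the
    fabric device on .2 of that subnet. Spine s uses AS 420005000 + s.
    """
    subnet = BASE + 16 * spine
    return str(subnet + 1), str(subnet + 2), 420005000 + spine
```

Fabric-1 through Fabric-3 would take the second, third, and fourth /30 in each spine's block under the same scheme.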

Step-by-Step Procedure

To configure the fabric devices:

  1. Configure an interface group to set the MTU value for all interfaces:
    [edit]
    user@Fabric-0# set groups int-global interfaces <*> mtu 9192
  2. Channelize port 0 into 10-Gigabit Ethernet interfaces, apply the interface group, and configure IPv4 and IPv6 on the spine-facing interfaces:
    [edit]
    user@Fabric-0# set chassis fpc 0 pic 0 port 0 channel-speed 10g
    user@Fabric-0# set interfaces apply-groups int-global
    user@Fabric-0# set interfaces et-0/0/20 unit 0 family inet address 192.168.13.2/30
    user@Fabric-0# set interfaces et-0/0/20 unit 0 family inet6 address 2001:db8:2001:21::2/126
    user@Fabric-0# set interfaces et-0/0/21 unit 0 family inet address 192.168.13.18/30
    user@Fabric-0# set interfaces et-0/0/21 unit 0 family inet6 address 2001:db8:2001:25::2/126
    user@Fabric-0# set interfaces et-0/0/22 unit 0 family inet address 192.168.13.34/30
    user@Fabric-0# set interfaces et-0/0/22 unit 0 family inet6 address 2001:db8:2001:29::2/126
    user@Fabric-0# set interfaces et-0/0/23 unit 0 family inet address 192.168.13.50/30
    user@Fabric-0# set interfaces et-0/0/23 unit 0 family inet6 address 2001:db8:2001:33::2/126
  3. Configure IPv4 and IPv6 on the core-facing interfaces:
    [edit]
    user@Fabric-0# set interfaces xe-0/0/0:0 unit 0 family inet address 192.168.14.1/30
    user@Fabric-0# set interfaces xe-0/0/0:0 unit 0 family inet6 address 2001:db8:2001:51::1/126
    user@Fabric-0# set interfaces xe-0/0/0:3 unit 0 family inet address 192.168.14.5/30
    user@Fabric-0# set interfaces xe-0/0/0:3 unit 0 family inet6 address 2001:db8:2001:52::1/126
  4. Configure the loopback and management interfaces:
    [edit]
    user@Fabric-0# set interfaces lo0 unit 0 family inet address 10.0.22.1/32
    user@Fabric-0# set interfaces lo0 unit 0 family inet6 address 2001:db8:2003:1::1/128
    user@Fabric-0# set interfaces em0 unit 0 family inet address 10.94.191.64/24
  5. Configure static routes and routing options:
    [edit]
    user@Fabric-0# set routing-options static route 10.94.63.252/32 next-hop 10.94.191.254
    user@Fabric-0# set routing-options static route 10.94.63.253/32 next-hop 10.94.191.254
    user@Fabric-0# set routing-options router-id 10.0.22.1
    user@Fabric-0# set routing-options autonomous-system 420005501
  6. Configure IPv4 and IPv6 EBGP sessions with each spine device:
    [edit]
    user@Fabric-0# set protocols bgp group SPINE bfd-liveness-detection minimum-interval 250
    user@Fabric-0# set protocols bgp group SPINE multipath multiple-as
    user@Fabric-0# set protocols bgp group SPINE neighbor 192.168.13.1 peer-as 420005000
    user@Fabric-0# set protocols bgp group SPINE neighbor 192.168.13.17 peer-as 420005001
    user@Fabric-0# set protocols bgp group SPINE neighbor 192.168.13.33 peer-as 420005002
    user@Fabric-0# set protocols bgp group SPINE neighbor 192.168.13.49 peer-as 420005003
    user@Fabric-0# set protocols bgp group SPINE-IPV6 bfd-liveness-detection minimum-interval 250
    user@Fabric-0# set protocols bgp group SPINE-IPV6 multipath multiple-as
    user@Fabric-0# set protocols bgp group SPINE-IPV6 neighbor 2001:db8:2001:21::1 peer-as 420005000
    user@Fabric-0# set protocols bgp group SPINE-IPV6 neighbor 2001:db8:2001:25::1 peer-as 420005001
    user@Fabric-0# set protocols bgp group SPINE-IPV6 neighbor 2001:db8:2001:29::1 peer-as 420005002
    user@Fabric-0# set protocols bgp group SPINE-IPV6 neighbor 2001:db8:2001:33::1 peer-as 420005003
  7. Configure IPv4 and IPv6 EBGP sessions with each core router:
    [edit]
    user@Fabric-0# set protocols bgp bfd-liveness-detection minimum-interval 250
    user@Fabric-0# set protocols bgp group CORE multipath multiple-as
    user@Fabric-0# set protocols bgp group CORE neighbor 192.168.14.6 description to-r2-PE
    user@Fabric-0# set protocols bgp group CORE neighbor 192.168.14.6 peer-as 420006502
    user@Fabric-0# set protocols bgp group CORE neighbor 192.168.14.2 description to-r1-PE
    user@Fabric-0# set protocols bgp group CORE neighbor 192.168.14.2 peer-as 420006501
    user@Fabric-0# set protocols bgp group CORE-IPV6 bfd-liveness-detection minimum-interval 250
    user@Fabric-0# set protocols bgp group CORE-IPV6 multipath multiple-as
    user@Fabric-0# set protocols bgp group CORE-IPV6 neighbor 2001:db8:2001:51::2 peer-as 420006501
    user@Fabric-0# set protocols bgp group CORE-IPV6 neighbor 2001:db8:2001:52::2 peer-as 420006502
  8. Configure and apply routing policies to enable reachability to the loopback interfaces of the other devices in the IP fabric:
    [edit]
    user@Fabric-0# set policy-options policy-statement receive-loopbacks term loopbacks from route-filter 10.0.16.0/28 orlonger
    user@Fabric-0# set policy-options policy-statement receive-loopbacks term loopbacks from route-filter 10.0.22.0/28 orlonger
    user@Fabric-0# set policy-options policy-statement receive-loopbacks term loopbacks from route-filter 10.0.23.0/28 orlonger
    user@Fabric-0# set policy-options policy-statement receive-loopbacks term loopbacks then accept
    user@Fabric-0# set policy-options policy-statement advertise-loopbacks term loopback from protocol direct
    user@Fabric-0# set policy-options policy-statement advertise-loopbacks term loopback from route-filter 10.0.22.0/28 orlonger
    user@Fabric-0# set policy-options policy-statement advertise-loopbacks term loopback then next-hop self
    user@Fabric-0# set policy-options policy-statement advertise-loopbacks term loopback then accept
    user@Fabric-0# set protocols bgp import receive-loopbacks
    user@Fabric-0# set protocols bgp export advertise-loopbacks
  9. Configure per-flow load balancing and ECMP:
    [edit]
    user@Fabric-0# set policy-options policy-statement pfe-lb then load-balance per-packet
    user@Fabric-0# set routing-options forwarding-table export pfe-lb
    user@Fabric-0# set chassis maximum-ecmp 64
    user@Fabric-0# set forwarding-options enhanced-hash-key ecmp-resilient-hash
  10. Configure LLDP:
    [edit]
    user@Fabric-0# set protocols lldp interface all
  11. Configure SNMP and event options:
    [edit]
    user@Fabric-0# set snmp community public authorization read-write
    user@Fabric-0# set snmp trap-group networkdirector_trap_group version v2
    user@Fabric-0# set snmp trap-group networkdirector_trap_group destination-port 10162
    user@Fabric-0# set snmp trap-group networkdirector_trap_group categories authentication
    user@Fabric-0# set snmp trap-group networkdirector_trap_group categories link
    user@Fabric-0# set snmp trap-group networkdirector_trap_group categories services
    user@Fabric-0# set snmp trap-group networkdirector_trap_group targets 10.94.63.253
    user@Fabric-0# set snmp trap-group space targets 10.94.63.252
    user@Fabric-0# set event-options policy target_add_test events snmpd_trap_target_add_notice
    user@Fabric-0# set event-options policy target_add_test then raise-trap

Configuring Core Routers

CLI Quick Configuration

To quickly configure the core routers, enter the following configuration statements on each router:

Note

The configuration shown here applies to Router R1.

[edit]
set groups int-global interfaces <*> mtu 9192
set groups int-global interfaces <*> unit 0 family mpls
set chassis network-services enhanced-ip
set interfaces apply-groups int-global
set interfaces xe-1/0/0 unit 0 family inet address 192.168.14.18/30
set interfaces xe-1/0/0 unit 0 family inet6 address 2001:db8:2001:55::2/126
set interfaces xe-1/0/1 unit 0 family inet address 192.168.14.26/30
set interfaces xe-1/0/1 unit 0 family inet6 address 2001:db8:2001:57::2/126
set interfaces xe-1/0/2 unit 0 family inet address 192.168.14.2/30
set interfaces xe-1/0/2 unit 0 family inet6 address 2001:db8:2001:51::2/126
set interfaces xe-1/0/3 unit 0 family inet address 192.168.14.10/30
set interfaces xe-1/0/3 unit 0 family inet6 address 2001:db8:2001:53::2/126
set interfaces xe-1/3/1 unit 0 family inet address 192.168.15.5/30
set interfaces xe-1/3/2 unit 0 family inet address 192.168.15.9/30
set interfaces xe-1/3/3 unit 0 family inet address 192.168.15.1/30
set interfaces lo0 unit 0 family inet address 10.0.23.1/32
set interfaces fxp0 unit 0 family inet address 10.94.191.55/24
set routing-options static route 10.94.63.252/32 next-hop 10.94.191.254
set routing-options static route 10.94.63.253/32 next-hop 10.94.191.254
set routing-options router-id 10.0.23.1
set routing-options autonomous-system 65000
set protocols bgp multihop
set protocols bgp group to-r2-PE type internal
set protocols bgp group to-r2-PE local-address 10.0.23.1
set protocols bgp group to-r2-PE family inet-vpn unicast
set protocols bgp group to-r2-PE bfd-liveness-detection minimum-interval 250
set protocols bgp group to-r2-PE multipath
set protocols bgp group to-r2-PE neighbor 10.0.23.2
set protocols ospf traffic-engineering shortcuts
set protocols ospf area 0.0.0.0 interface all node-link-protection
set protocols ospf area 0.0.0.0 interface all bfd-liveness-detection minimum-interval 250
set protocols ospf area 0.0.0.0 interface fxp0.0 disable
set protocols rsvp interface all link-protection
set protocols mpls interface all
set protocols mpls label-switched-path to-r2 backup
set protocols mpls label-switched-path to-r2 to 10.0.23.2
set protocols mpls label-switched-path to-r2 standby
set protocols mpls label-switched-path to-r2 link-protection
set policy-options policy-statement receive-loopbacks term loopbacks from route-filter 10.0.16.0/28 orlonger
set policy-options policy-statement receive-loopbacks term loopbacks from route-filter 10.0.22.0/28 orlonger
set policy-options policy-statement receive-loopbacks term loopbacks from route-filter 10.0.23.0/28 orlonger
set policy-options policy-statement receive-loopbacks term loopbacks then accept
set policy-options policy-statement advertise-loopbacks term loopback from protocol direct
set policy-options policy-statement advertise-loopbacks term loopback from route-filter 10.0.23.0/28 orlonger
set policy-options policy-statement advertise-loopbacks term loopback then next-hop self
set policy-options policy-statement advertise-loopbacks term loopback then accept
set policy-options policy-statement pfe-lb then load-balance per-packet
set routing-options forwarding-table export pfe-lb
set forwarding-options hash-key family inet layer-3
set forwarding-options hash-key family inet layer-4
set protocols lldp interface all
set protocols lldp interface fxp0 disable
set snmp community public authorization read-write
set snmp trap-group networkdirector_trap_group version v2
set snmp trap-group networkdirector_trap_group destination-port 10162
set snmp trap-group networkdirector_trap_group categories authentication
set snmp trap-group networkdirector_trap_group categories link
set snmp trap-group networkdirector_trap_group categories services
set snmp trap-group networkdirector_trap_group targets 10.94.63.253
set snmp trap-group space targets 10.94.63.252
set event-options policy target_add_test events snmpd_trap_target_add_notice
set event-options policy target_add_test then raise-trap
set routing-instances RED-vpn instance-type vrf
set routing-instances RED-vpn interface xe-1/0/0.0
set routing-instances RED-vpn interface xe-1/0/1.0
set routing-instances RED-vpn interface xe-1/0/2.0
set routing-instances RED-vpn interface xe-1/0/3.0
set routing-instances RED-vpn route-distinguisher 65001:1
set routing-instances RED-vpn vrf-target target:65001:1
set routing-instances RED-vpn protocols bgp import receive-loopbacks
set routing-instances RED-vpn protocols bgp export advertise-loopbacks
set routing-instances RED-vpn protocols bgp bfd-liveness-detection minimum-interval 250
set routing-instances RED-vpn protocols bgp group FABRIC local-as 420006501
set routing-instances RED-vpn protocols bgp group FABRIC multipath multiple-as
set routing-instances RED-vpn protocols bgp group FABRIC neighbor 192.168.14.1 description Fabric-sw01
set routing-instances RED-vpn protocols bgp group FABRIC neighbor 192.168.14.1 peer-as 420005501
set routing-instances RED-vpn protocols bgp group FABRIC neighbor 192.168.14.9 description Fabric-sw02
set routing-instances RED-vpn protocols bgp group FABRIC neighbor 192.168.14.9 peer-as 420005502
set routing-instances RED-vpn protocols bgp group FABRIC neighbor 192.168.14.17 description Fabric-sw03
set routing-instances RED-vpn protocols bgp group FABRIC neighbor 192.168.14.17 peer-as 420005503
set routing-instances RED-vpn protocols bgp group FABRIC neighbor 192.168.14.25 description Fabric-sw04
set routing-instances RED-vpn protocols bgp group FABRIC neighbor 192.168.14.25 peer-as 420005504
set routing-instances RED-vpn protocols bgp group FABRIC-IPV6 local-as 420006501
set routing-instances RED-vpn protocols bgp group FABRIC-IPV6 multipath multiple-as
set routing-instances RED-vpn protocols bgp group FABRIC-IPV6 neighbor 2001:db8:2001:51::1 peer-as 420005501
set routing-instances RED-vpn protocols bgp group FABRIC-IPV6 neighbor 2001:db8:2001:53::1 peer-as 420005502
set routing-instances RED-vpn protocols bgp group FABRIC-IPV6 neighbor 2001:db8:2001:55::1 peer-as 420005503
set routing-instances RED-vpn protocols bgp group FABRIC-IPV6 neighbor 2001:db8:2001:57::1 peer-as 420005504

Step-by-Step Procedure

To configure the core routers:

  1. Configure an interface group to apply an MTU value and the MPLS protocol family to all interfaces:
    [edit]
    user@R1# set groups int-global interfaces <*> mtu 9192
    user@R1# set groups int-global interfaces <*> unit 0 family mpls
  2. Configure IPv4 and IPv6 on the fabric-facing interfaces:
    [edit]
    user@R1# set chassis network-services enhanced-ip
    user@R1# set interfaces apply-groups int-global
    user@R1# set interfaces xe-1/0/0 unit 0 family inet address 192.168.14.18/30
    user@R1# set interfaces xe-1/0/0 unit 0 family inet6 address 2001:db8:2001:55::2/126
    user@R1# set interfaces xe-1/0/1 unit 0 family inet address 192.168.14.26/30
    user@R1# set interfaces xe-1/0/1 unit 0 family inet6 address 2001:db8:2001:57::2/126
    user@R1# set interfaces xe-1/0/2 unit 0 family inet address 192.168.14.2/30
    user@R1# set interfaces xe-1/0/2 unit 0 family inet6 address 2001:db8:2001:51::2/126
    user@R1# set interfaces xe-1/0/3 unit 0 family inet address 192.168.14.10/30
    user@R1# set interfaces xe-1/0/3 unit 0 family inet6 address 2001:db8:2001:53::2/126
  3. Configure the interfaces towards the neighboring core router and the Internet:
    [edit]
    user@R1# set interfaces xe-1/3/1 unit 0 family inet address 192.168.15.5/30
    user@R1# set interfaces xe-1/3/2 unit 0 family inet address 192.168.15.9/30
    user@R1# set interfaces xe-1/3/3 unit 0 family inet address 192.168.15.1/30
  4. Configure the loopback and management interfaces:
    [edit]
    user@R1# set interfaces lo0 unit 0 family inet address 10.0.23.1/32
    user@R1# set interfaces fxp0 unit 0 family inet address 10.94.191.55/24
  5. Configure static routes and routing options:
    [edit]
    user@R1# set routing-options static route 10.94.63.252/32 next-hop 10.94.191.254
    user@R1# set routing-options static route 10.94.63.253/32 next-hop 10.94.191.254
    user@R1# set routing-options router-id 10.0.23.1
    user@R1# set routing-options autonomous-system 65000
  6. Configure an IBGP session with the neighboring core router:
    [edit]
    user@R1# set protocols bgp multihop
    user@R1# set protocols bgp group to-r2-PE type internal
    user@R1# set protocols bgp group to-r2-PE local-address 10.0.23.1
    user@R1# set protocols bgp group to-r2-PE family inet-vpn unicast
    user@R1# set protocols bgp group to-r2-PE bfd-liveness-detection minimum-interval 250
    user@R1# set protocols bgp group to-r2-PE multipath
    user@R1# set protocols bgp group to-r2-PE neighbor 10.0.23.2
  7. Configure OSPF:
    [edit]
    user@R1# set protocols ospf traffic-engineering shortcuts
    user@R1# set protocols ospf area 0.0.0.0 interface all node-link-protection
    user@R1# set protocols ospf area 0.0.0.0 interface all bfd-liveness-detection minimum-interval 250
    user@R1# set protocols ospf area 0.0.0.0 interface fxp0.0 disable
  8. Configure RSVP and MPLS, including an LSP to the neighboring core router:
    [edit]
    user@R1# set protocols rsvp interface all link-protection
    user@R1# set protocols mpls interface all
    user@R1# set protocols mpls label-switched-path to-r2 backup
    user@R1# set protocols mpls label-switched-path to-r2 to 10.0.23.2
    user@R1# set protocols mpls label-switched-path to-r2 standby
    user@R1# set protocols mpls label-switched-path to-r2 link-protection
  9. Configure routing policies to enable reachability to the loopback interfaces of the other devices in the IP fabric:
    [edit]
    user@R1# set policy-options policy-statement receive-loopbacks term loopbacks from route-filter 10.0.16.0/28 orlonger
    user@R1# set policy-options policy-statement receive-loopbacks term loopbacks from route-filter 10.0.22.0/28 orlonger
    user@R1# set policy-options policy-statement receive-loopbacks term loopbacks from route-filter 10.0.23.0/28 orlonger
    user@R1# set policy-options policy-statement receive-loopbacks term loopbacks then accept
    user@R1# set policy-options policy-statement advertise-loopbacks term loopback from protocol direct
    user@R1# set policy-options policy-statement advertise-loopbacks term loopback from route-filter 10.0.23.0/28 orlonger
    user@R1# set policy-options policy-statement advertise-loopbacks term loopback then next-hop self
    user@R1# set policy-options policy-statement advertise-loopbacks term loopback then accept
  10. Configure per-flow load balancing:
    [edit]
    user@R1# set policy-options policy-statement pfe-lb then load-balance per-packet
    user@R1# set routing-options forwarding-table export pfe-lb
    user@R1# set forwarding-options hash-key family inet layer-3
    user@R1# set forwarding-options hash-key family inet layer-4
  11. Configure LLDP:
    [edit]
    user@R1# set protocols lldp interface all
    user@R1# set protocols lldp interface fxp0 disable
  12. Configure SNMP and event options:
    [edit]
    user@R1# set snmp community public authorization read-write
    user@R1# set snmp trap-group networkdirector_trap_group version v2
    user@R1# set snmp trap-group networkdirector_trap_group destination-port 10162
    user@R1# set snmp trap-group networkdirector_trap_group categories authentication
    user@R1# set snmp trap-group networkdirector_trap_group categories link
    user@R1# set snmp trap-group networkdirector_trap_group categories services
    user@R1# set snmp trap-group networkdirector_trap_group targets 10.94.63.253
    user@R1# set snmp trap-group space targets 10.94.63.252
    user@R1# set event-options policy target_add_test events snmpd_trap_target_add_notice
    user@R1# set event-options policy target_add_test then raise-trap
  13. Configure a VRF instance to provide connectivity to the fabric devices:
    [edit]
    user@R1# set routing-instances RED-vpn instance-type vrf
  14. Add the fabric-facing interfaces to the routing instance:
    [edit]
    user@R1# set routing-instances RED-vpn interface xe-1/0/0.0
    user@R1# set routing-instances RED-vpn interface xe-1/0/1.0
    user@R1# set routing-instances RED-vpn interface xe-1/0/2.0
    user@R1# set routing-instances RED-vpn interface xe-1/0/3.0
  15. Configure a route distinguisher and VRF target:
    [edit]
    user@R1# set routing-instances RED-vpn route-distinguisher 65001:1
    user@R1# set routing-instances RED-vpn vrf-target target:65001:1
  16. Within the routing instance, configure IPv4 and IPv6 EBGP sessions with each fabric device:
    [edit]
    user@R1# set routing-instances RED-vpn protocols bgp import receive-loopbacks
    user@R1# set routing-instances RED-vpn protocols bgp export advertise-loopbacks
    user@R1# set routing-instances RED-vpn protocols bgp bfd-liveness-detection minimum-interval 250
    user@R1# set routing-instances RED-vpn protocols bgp group FABRIC local-as 420006501
    user@R1# set routing-instances RED-vpn protocols bgp group FABRIC multipath multiple-as
    user@R1# set routing-instances RED-vpn protocols bgp group FABRIC neighbor 192.168.14.1 description Fabric-sw01
    user@R1# set routing-instances RED-vpn protocols bgp group FABRIC neighbor 192.168.14.1 peer-as 420005501
    user@R1# set routing-instances RED-vpn protocols bgp group FABRIC neighbor 192.168.14.9 description Fabric-sw02
    user@R1# set routing-instances RED-vpn protocols bgp group FABRIC neighbor 192.168.14.9 peer-as 420005502
    user@R1# set routing-instances RED-vpn protocols bgp group FABRIC neighbor 192.168.14.17 description Fabric-sw03
    user@R1# set routing-instances RED-vpn protocols bgp group FABRIC neighbor 192.168.14.17 peer-as 420005503
    user@R1# set routing-instances RED-vpn protocols bgp group FABRIC neighbor 192.168.14.25 description Fabric-sw04
    user@R1# set routing-instances RED-vpn protocols bgp group FABRIC neighbor 192.168.14.25 peer-as 420005504
    user@R1# set routing-instances RED-vpn protocols bgp group FABRIC-IPV6 local-as 420006501
    user@R1# set routing-instances RED-vpn protocols bgp group FABRIC-IPV6 multipath multiple-as
    user@R1# set routing-instances RED-vpn protocols bgp group FABRIC-IPV6 neighbor 2001:db8:2001:51::1 peer-as 420005501
    user@R1# set routing-instances RED-vpn protocols bgp group FABRIC-IPV6 neighbor 2001:db8:2001:53::1 peer-as 420005502
    user@R1# set routing-instances RED-vpn protocols bgp group FABRIC-IPV6 neighbor 2001:db8:2001:55::1 peer-as 420005503
    user@R1# set routing-instances RED-vpn protocols bgp group FABRIC-IPV6 neighbor 2001:db8:2001:57::1 peer-as 420005504

Configuring Additional Features for the SaaS Solution

This section describes how to configure additional elements for the SaaS solution. These features are optional, and the sample configurations below are intended as a general guide, to be customized as necessary to fit your environment.

Configuring SNMPv3

CLI Quick Configuration

This solution uses SNMP version 3 (SNMPv3), which supports authentication and encryption. SNMPv3 uses the user-based security model (USM) for message security and the view-based access control model (VACM) for access control. USM specifies authentication and encryption. VACM specifies access-control rules.

To quickly configure SNMPv3, enter the following representative configuration statements on each device:
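
The user name, group name, view name, and passwords shown below are placeholder values for illustration; substitute values appropriate to your environment.

[edit]
set snmp view all-oids oid .1 include
set snmp v3 usm local-engine user ND-user authentication-sha authentication-password Auth-Password1
set snmp v3 usm local-engine user ND-user privacy-aes128 privacy-password Priv-Password1
set snmp v3 vacm security-to-group security-model usm security-name ND-user group ND-group
set snmp v3 vacm access group ND-group default-context-prefix security-model usm security-level privacy read-view all-oids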

Note

For more information on SNMPv3, see SNMPv3 Overview.

Configuring BMPv3

CLI Quick Configuration

The BGP Monitoring Protocol (BMP) allows a Junos device to send BGP route information to a monitoring application on a separate device. This solution uses BMP version 3 (BMPv3).

To quickly configure BMPv3, enter the following representative configuration statements on each device:
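
The station name and the monitoring station address and port shown below are placeholder values for illustration; substitute values appropriate to your environment.

[edit]
set routing-options bmp station fabric-bmp station-address 10.94.63.250
set routing-options bmp station fabric-bmp station-port 11019
set routing-options bmp station fabric-bmp connection-mode active
set routing-options bmp station fabric-bmp route-monitoring pre-policy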

Note

For more information on BMPv3, see Configuring BGP Monitoring Protocol Version 3.

Configuring Device (Routing Engine) Protection

CLI Quick Configuration

To protect a Junos device from unwanted traffic and attacks, you can apply an input filter to the loopback interface. This filter can also be a useful way to count inbound (wanted or unwanted) traffic.

To quickly configure a firewall filter for the loopback interface, enter the following representative configuration statements on each device:
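
The filter, term, and counter names below are placeholders, and the terms shown (permitting BGP and ICMP, then counting and discarding all other traffic) are a minimal sketch; a production filter should also permit any other protocols your device relies on, such as SSH, NTP, SNMP, and BFD.

[edit]
set firewall family inet filter protect-re term allow-bgp from protocol tcp
set firewall family inet filter protect-re term allow-bgp from port bgp
set firewall family inet filter protect-re term allow-bgp then accept
set firewall family inet filter protect-re term allow-icmp from protocol icmp
set firewall family inet filter protect-re term allow-icmp then accept
set firewall family inet filter protect-re term deny-rest then count re-discards
set firewall family inet filter protect-re term deny-rest then discard
set interfaces lo0 unit 0 family inet filter input protect-re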

Note

For more information on using firewall filters to protect a device, see Overview of Firewall Filters and Applying Firewall Filters to Interfaces.

Configuring Remote Port Mirroring

CLI Quick Configuration

Port mirroring copies packets entering or exiting a port, or entering a VLAN, and sends the copies to either a local interface for local monitoring or to a remote monitoring station. Use port mirroring to send traffic to applications that analyze traffic for purposes such as monitoring compliance, enforcing policies, detecting intrusions, monitoring and predicting traffic patterns, correlating events, and so on.

There are two port mirroring instance types:

  • Analyzer instance—useful for mirroring all traffic transiting an interface or VLAN.

  • Port-mirroring instance—useful for controlling which types of traffic should be mirrored.

To quickly configure remote port mirroring, enter the following representative configuration statements on each device:

Analyzer Instance

This option copies all inbound traffic arriving at interface xe-0/0/47 and mirrors it through a GRE-encapsulated tunnel to a remote analyzer at 10.100.10.1.
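
A representative configuration follows; the analyzer name REMOTE-AN is a placeholder.

[edit]
set forwarding-options analyzer REMOTE-AN input ingress interface xe-0/0/47.0
set forwarding-options analyzer REMOTE-AN output ip-address 10.100.10.1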

Port-Mirroring Instance

This option copies only inbound traffic arriving at interface xe-0/0/47 from a host at 10.1.1.1, and mirrors it through a GRE-encapsulated tunnel to a remote analyzer at 10.100.10.1.
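
A representative configuration follows; the instance name PM-GRE and the filter and term names are placeholders. A firewall filter selects the traffic of interest and directs it to the port-mirroring instance.

[edit]
set forwarding-options port-mirroring instance PM-GRE input rate 1
set forwarding-options port-mirroring instance PM-GRE family inet output ip-address 10.100.10.1
set firewall family inet filter mirror-host term mirror from source-address 10.1.1.1/32
set firewall family inet filter mirror-host term mirror then port-mirror-instance PM-GRE
set firewall family inet filter mirror-host term mirror then accept
set firewall family inet filter mirror-host term pass then accept
set interfaces xe-0/0/47 unit 0 family inet filter input mirror-host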

Note

For more information on using remote port mirroring, see Understanding Port Mirroring and Configuring Port Mirroring for Remote Analysis.

Configuring Storm Control

CLI Quick Configuration

Storm control helps to prevent network outages caused by broadcast storms. It enables a device to monitor traffic levels and take a specified action when a configured traffic threshold (the storm control level) is exceeded. You can configure devices to drop broadcast and unknown unicast packets, shut down interfaces, or temporarily disable interfaces when the threshold is crossed.

To quickly configure storm control, enter the following representative configuration statements on each device:
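
The profile name, the 5 percent bandwidth level, and the server-facing interface shown below are illustrative values; adjust them to fit your environment.

[edit]
set forwarding-options storm-control-profiles SC-5pct all bandwidth-percentage 5
set interfaces xe-0/0/46 unit 0 family ethernet-switching storm-control SC-5pct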

Note

For more information on using storm control, see Understanding Storm Control and Configuring Storm Control to Prevent Network Outages.

Configuring Class of Service

CLI Quick Configuration

Junos OS class of service (CoS) enables you to divide traffic into classes and set various levels of throughput and packet loss when congestion occurs.

To quickly configure CoS, enter the following representative configuration statements on each device:
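
The classifier, scheduler, and scheduler-map names below are placeholders, and the single best-effort class with a 70 percent transmit rate is a minimal sketch; a complete CoS design typically defines several forwarding classes and schedulers.

[edit]
set class-of-service forwarding-classes class best-effort queue-num 0
set class-of-service classifiers dscp dscp-default forwarding-class best-effort loss-priority low code-points 000000
set class-of-service schedulers be-sched transmit-rate percent 70
set class-of-service schedulers be-sched buffer-size percent 70
set class-of-service scheduler-maps sched-map forwarding-class best-effort scheduler be-sched
set class-of-service interfaces xe-0/0/46 scheduler-map sched-map
set class-of-service interfaces xe-0/0/46 unit 0 classifiers dscp dscp-default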

Note

For more information on using class of service, see Traffic Management Feature Guide for QFX Series.

Verification

Confirm that the SaaS IP fabric configuration is working properly.

Leaf: Verifying Interfaces

Purpose

Verify the state of the spine-facing interfaces.

Action

Verify that the spine-facing interfaces (et-0/0/48 to et-0/0/51) are up:

user@Leaf-0> show interfaces terse et-*

Meaning

The spine-facing interfaces are functioning normally.

Leaf: Verifying IPv4 EBGP Sessions

Purpose

Verify the state of IPv4 EBGP sessions between the leaf and spine devices.

Action

Verify that IPv4 EBGP sessions are established:

user@Leaf-0> show bgp summary | match 192.168

Meaning

The EBGP sessions are established and functioning correctly.

Leaf: Verifying IPv6 EBGP Sessions

Purpose

Verify the state of IPv6 EBGP sessions between the leaf and spine devices.

Action

Verify that IPv6 EBGP sessions are established:

user@Leaf-0> show bgp summary | find 2001

Meaning

The EBGP sessions are established and functioning correctly.

Leaf: Verifying BFD Sessions

Purpose

Verify the state of BFD for the IPv4 BGP sessions between the leaf and spine devices.

Action

Verify that BFD sessions are up:

user@Leaf-0> show bfd session | match 192.168

Meaning

The BFD sessions are established and functioning correctly.

Leaf: Verifying Server Access - Anycast

Purpose

Verify the configuration of the anycast server access option.

Action

  1. Verify that the server-facing interfaces are up:
    user@Leaf-0> show interfaces terse | match 172.16
    user@Leaf-0> show interfaces terse xe-0/0/46
    user@Leaf-0> show interfaces terse xe-0/0/47
  2. Verify that EBGP sessions are established:
    user@Leaf-0> show bgp summary | match 172.16
    user@Leaf-0> show bgp summary | match 2001:9

Meaning

The server-facing interfaces are up, the EBGP sessions are established, and anycast server access is functioning correctly.

Leaf: Verifying Server Access - Unicast

Purpose

Verify the configuration of the unicast server access option.

Action

  1. Verify that the server-facing interfaces are up:
    user@Leaf-0> show interfaces terse | match xe-*
  2. Verify that the IRB interface is up:
    user@Leaf-0> show interfaces terse | match irb.1
  3. Verify the VLAN configuration:
    user@Leaf-0> show vlans detail SERVER

Meaning

The server-facing and IRB interfaces are up, the VLAN is active, and unicast server access is functioning correctly.

Leaf: Verifying Server Access - Hybrid

Purpose

Verify the configuration of the hybrid server access option.

Action

  1. Verify that the server-facing interfaces are up:
    user@Leaf-0> show interfaces terse | match xe-*
  2. Verify that the IRB interface is up:
    user@Leaf-0> show interfaces terse | match irb
  3. Verify the VLAN configuration:
    user@Leaf-0> show vlans detail hybrid
  4. Verify that EBGP sessions are established:
    user@Leaf-0> show bgp summary | match 172.16

Meaning

The server-facing and IRB interfaces are up, the VLAN is active, the EBGP sessions are established, and hybrid server access is functioning correctly.

Leaf: Verifying Server Load Balancing Using IPv4 Anycast - Packet Forwarding Engine Load Balancing

Purpose

For the server load balancing scenario, verify reachability and load sharing to the servers using the anycast IPv4 address.

Action

Verify that Packet Forwarding Engine load balancing is enabled to the anycast IPv4 address:

user@Leaf-0> show route forwarding-table destination 10.1.1.1

Meaning

The forwarding table entry for route 10.1.1.1/32 has the type ulst (unilist, meaning the device can use multiple next hops), followed by the list of eligible next hops.

Leaf: Verifying Server Load Balancing Using IPv4 Anycast - BGP Load Balancing

Purpose

Verify the state of IPv4 BGP load balancing between the leaf device and load-sharing servers.

Action

Verify that IPv4 EBGP sessions are established:

user@Leaf-0> show bgp summary | match 6700

Meaning

The EBGP sessions are established and functioning correctly.

Leaf: Verifying Server Load Balancing Using IPv4 Anycast - BFD Sessions

Purpose

Verify the state of BFD for the IPv4 BGP connections between the leaf device and load-sharing servers.

Action

Verify that BFD sessions are up:

user@Leaf-0> show bfd session | match 172.16.1

Meaning

The BFD sessions are established and functioning correctly.

Leaf: Verifying IPv6 Anycast Reachability

Purpose

Verify reachability to the servers using the anycast IPv6 address.

Action

Verify that the leaf device has an entry for the 2001:db8:2000::/64 route:

user@Leaf-0> show route 2001:db8:2000::/64

Meaning

The leaf device has reachability to the servers using the anycast IPv6 address.

Leaf: Verifying Server Load Balancing Using IPv6 Anycast - Packet Forwarding Engine Load Balancing

Purpose

Verify load sharing to the servers using the anycast IPv6 address.

Action

Verify that Packet Forwarding Engine load balancing is enabled to the anycast IPv6 address:

user@Leaf-0> show route forwarding-table destination 2001:db8:2000::/64

Meaning

The forwarding table entry for route 2001:db8:2000::/64 has the type ulst (unilist, meaning the device can use multiple next hops), followed by the list of eligible next hops.

Spine: Verifying Interfaces

Purpose

Verify the state of the leaf-facing and fabric-facing interfaces.

Action

Verify that the leaf-facing interfaces (et-0/0/0 to et-0/0/3) and fabric-facing interfaces (et-0/2/0 to et-0/2/3) are up:

user@Spine-0> show interfaces terse | match et-*

Meaning

The leaf-facing and fabric-facing interfaces are functioning normally.

Spine: Verifying IPv4 EBGP Sessions

Purpose

Verify the state of leaf-facing and fabric-facing IPv4 EBGP sessions.

Action

Verify that IPv4 EBGP sessions are established:

user@Spine-0> show bgp summary

Meaning

The EBGP sessions are established and functioning correctly.

Spine: Verifying IPv6 EBGP Sessions

Purpose

Verify the state of leaf-facing and fabric-facing IPv6 EBGP sessions.

Action

Verify that IPv6 EBGP sessions are established:

user@Spine-0> show bgp summary | find 2001

Meaning

The EBGP sessions are established and functioning correctly.

Spine: Verifying BFD Sessions

Purpose

Verify the state of BFD for the BGP sessions between the leaf and fabric devices.

Action

Verify that the BFD sessions are up:

user@Spine-0> show bfd session

Meaning

The BFD sessions are established and functioning correctly.

Spine: Verifying IPv6 Anycast Reachability

Purpose

Verify reachability to the servers using the anycast IPv6 address.

Action

Verify that the spine device has an entry for the 2001:db8:2000::/64 route:

user@Spine-0> show route 2001:db8:2000::/64

Meaning

The spine device has reachability to the servers using the anycast IPv6 address.

Spine: Verifying Server Load Balancing Using IPv6 Anycast - Packet Forwarding Engine Load Balancing

Purpose

Verify load sharing to the servers using the anycast IPv6 address.

Action

Verify that Packet Forwarding Engine load balancing is enabled to the anycast IPv6 address:

user@Spine-0> show route forwarding-table destination 2001:db8:2000::/64

Meaning

The forwarding table entry for route 2001:db8:2000::/64 has the type ulst (unilist, meaning the device can use multiple next hops), followed by the list of eligible next hops.

Fabric: Verifying Interfaces

Purpose

Verify the state of the core-facing and spine-facing interfaces.

Action

Verify that the core-facing interfaces (xe-0/0/0:0 and xe-0/0/0:3) and spine-facing interfaces (et-0/0/20 to et-0/0/23) are up:

user@Fabric-0> show interfaces terse

Meaning

The core-facing and spine-facing interfaces are functioning normally.

Fabric: Verifying IPv4 EBGP Sessions

Purpose

Verify the state of IPv4 EBGP sessions with the spine devices and core routers.

Action

Verify that IPv4 EBGP sessions are established with the spine devices (192.168.13.x) and core routers (192.168.14.x):

user@Fabric-0> show bgp summary

Meaning

The EBGP sessions are established and functioning correctly.

Fabric: Verifying IPv6 EBGP Sessions

Purpose

Verify the state of IPv6 EBGP sessions with the spine devices and core routers.

Action

Verify that IPv6 EBGP sessions are established with the spine devices (2001:db8:2001:21:: through 2001:db8:2001:33::) and core routers (2001:db8:2001:5x):

user@Fabric-0> show bgp summary | find 2001

Meaning

The EBGP sessions are established and functioning correctly.

Fabric: Verifying BFD Sessions

Purpose

Verify the state of BFD for the BGP sessions with the spine devices and core routers.

Action

Verify that BFD sessions are up:

user@Fabric-0> show bfd session

Meaning

The BFD sessions are established and functioning correctly.

Fabric: Verifying IPv4 Anycast Reachability

Purpose

Verify reachability to the servers using the anycast IPv4 address.

Action

Verify that the fabric device has an entry for the 10.1.1.1 route:

user@Fabric-0> show route 10.1.1.1

Meaning

The fabric device has reachability to the servers using the anycast IPv4 address.

Fabric: Verifying Server Load Balancing Using IPv4 Anycast - Packet Forwarding Engine Load Balancing

Purpose

Verify load sharing to the servers using the anycast IPv4 address.

Action

Verify that Packet Forwarding Engine load balancing is enabled to the anycast IPv4 address:

user@Fabric-0> show route forwarding-table destination 10.1.1.1

Meaning

The forwarding table entry for route 10.1.1.0/24 has the type ulst (unilist, meaning the device can use multiple next hops), followed by the list of eligible next hops.

Fabric: Verifying IPv6 Anycast Reachability

Purpose

Verify reachability to the servers using the anycast IPv6 address.

Action

Verify that the fabric device has an entry for the 2001:db8:2000::/64 route:

user@Fabric-0> show route 2001:db8:2000::/64

Meaning

The fabric device has reachability to the servers using the anycast IPv6 address.

Core: Verifying Interfaces

Purpose

Verify the state of the fabric-facing, Internet-facing, and core-facing interfaces.

Action

Verify that the fabric-facing interfaces (xe-1/0/0 to xe-1/0/3), Internet-facing interfaces (xe-1/3/1 and xe-1/3/2), and the interface to the neighboring core router (xe-1/3/3) are up:

user@R1> show interfaces terse xe-*

Meaning

The interfaces are up and functioning normally.

Core: Verifying IPv4 BGP Sessions

Purpose

Verify the state of IPv4 BGP sessions with the neighboring core router and fabric devices.

Action

Verify that IPv4 BGP sessions are established with the neighboring core router (10.0.23.2) and fabric devices (192.168.14.x):

user@R1> show bgp summary

Meaning

The BGP sessions are established and functioning correctly.

Core: Verifying IPv6 EBGP Sessions

Purpose

Verify the state of IPv6 EBGP sessions with the fabric devices.

Action

Verify that IPv6 EBGP sessions in routing instance RED-vpn on the core router are established with the fabric devices:

user@R1> show bgp summary | find 2001

Meaning

The EBGP sessions are established and functioning correctly.

Core: Verifying BFD Sessions

Purpose

Verify the state of BFD for the connections with the neighboring core router and fabric devices.

Action

Verify that BFD sessions are up:

user@R1> show bfd session

Meaning

The BFD sessions are established and functioning correctly.

Core: Verifying IPv4 Anycast Reachability

Purpose

Verify reachability to the servers using the anycast IPv4 address.

Action

Verify that the core router has an entry for the 10.1.1.1 route:

user@R1> show route 10.1.1.1

Meaning

The core router has reachability to the servers using the anycast IPv4 address.

Core: Verifying Server Load Balancing Using IPv4 Anycast - Packet Forwarding Engine Load Balancing

Purpose

Verify load sharing to the servers using the anycast IPv4 address.

Action

Verify that Packet Forwarding Engine load balancing is enabled to the anycast IPv4 address:

user@R1> show route forwarding-table destination 10.1.1.1

Meaning

The forwarding table entry for route 10.1.1.0/24 has the type ulst (unilist, meaning the device can use multiple next hops), followed by the list of eligible next hops.

Core: Verifying IPv6 Anycast Reachability

Purpose

Verify reachability to the servers using the anycast IPv6 address.

Action

Verify that the core router has an entry for the 2001:db8:2000::/64 route:

user@R1> show route 2001:db8:2000::/64

Meaning

The core router has reachability to the servers using the anycast IPv6 address.

Additional Features: Verifying SNMP

Purpose

Verify that SNMP and system logging (syslog) are operating correctly.

Action

  1. Verify that SNMP is configured and operating as expected:

    user@Leaf-0> show snmp v3
  2. Log in to the syslog server (in this example, a Linux server at 10.94.63.191) and verify that logs are being received:

    user@ubuntu:/var/log/JUNOS$ tail -f syslog.log

Meaning

SNMP and syslog are operating as expected.
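Remote syslog forwarding of this kind is enabled with a host statement at the [edit system syslog] hierarchy level. A minimal sketch using the server address from this example (the facility and severity are assumptions):

```
system {
    syslog {
        host 10.94.63.191 {     /* syslog server from this example */
            any notice;         /* assumed facility and severity */
        }
    }
}
```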

Additional Features: Verifying BMP

Purpose

Verify that BMP is operating correctly.

Action

  1. Verify that BMP is configured and operating as expected:

    user@Leaf-0> show bgp bmp
  2. Log in to the BMP server (in this example, a Linux server at 10.94.63.193) and verify that BMP messages are being received:

    user@ubuntu--bmp:~$ tail -f /var/lib/docker/aufs/mnt/abc123/root/ryu_bmp.log

Meaning

BMP is operating as expected.
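BMP monitoring stations are configured at the [edit routing-options bmp] hierarchy level. A minimal sketch pointing at the BMP server used in this example (the station name and port are assumptions; check the syntax for your Junos release, as some releases use a flat station-address/station-port form without a named station):

```
routing-options {
    bmp {
        station bmp-server {                  /* assumed station name */
            station-address 10.94.63.193;     /* BMP server from this example */
            station-port 11019;               /* assumed port */
        }
    }
}
```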

Additional Features: Verifying Remote Port Mirroring

Purpose

Verify that remote port mirroring is operating correctly.

Action

  1. If using the analyzer instance option, verify that the analyzer configuration is operating as expected:

    user@Leaf-0> show forwarding-options analyzer RemPortMon-GRE
  2. If using the port-mirroring instance option, verify that the port mirroring instance configuration is operating as expected:

    user@Leaf-0> show forwarding-options port-mirroring detail

Meaning

Remote port mirroring is operating as expected.
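For reference, a GRE-based analyzer instance of the kind verified above is defined at the [edit forwarding-options analyzer] hierarchy level. A minimal sketch (the monitored interface and collector address are assumptions):

```
forwarding-options {
    analyzer {
        RemPortMon-GRE {
            input {
                ingress {
                    interface xe-0/0/10.0;    /* assumed monitored interface */
                }
            }
            output {
                ip-address 10.94.63.191;      /* assumed collector; mirrored traffic is GRE-encapsulated */
            }
        }
    }
}
```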

Additional Features: Verifying Class of Service

Purpose

Verify that CoS is operating correctly.

Action

  1. Verify the forwarding class set configuration:

    user@Leaf-0> show class-of-service forwarding-class-set
  2. Verify the scheduler map configuration:

    user@Leaf-0> show class-of-service scheduler-map sm-anycast
  3. Verify that traffic is being classified and queued appropriately:

    user@Leaf-0> show interfaces queue et-0/0/48

Meaning

Traffic matching the App-1, App-2, and App-3 multifield classifiers is correctly being identified, queued, and scheduled.
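The verification above assumes that a forwarding-class set and the sm-anycast scheduler map are already configured. A minimal sketch of the corresponding configuration (apart from sm-anycast, the set, scheduler, and forwarding-class names and values are assumptions):

```
class-of-service {
    forwarding-class-sets {
        fcs-anycast {                         /* assumed set name */
            class best-effort;
        }
    }
    scheduler-maps {
        sm-anycast {                          /* scheduler map verified in this example */
            forwarding-class best-effort scheduler be-sched;
        }
    }
    schedulers {
        be-sched {                            /* assumed scheduler name */
            transmit-rate percent 30;         /* assumed rate */
            priority low;
        }
    }
}
```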