Configuration Walkthrough

This walkthrough summarizes the steps required to configure Data Center Interconnect (DCI) using Juniper Apstra.

As discussed in Use Case and Reference Architecture, this JVD covers three DCI use cases using different fabric designs and Juniper devices. This JVD also includes Media Access Control Security (MACSEC) between the data centers; however, the MACSEC configuration is provisioned as a configlet because Apstra 5.0 does not natively support it.

Prerequisites

Provision the 3-stage, collapsed fabric, and 5-stage EVPN-VXLAN data centers as discussed in the respective data center design JVDs.

Over-the-Top Design (with MACSEC)

For the DCI OTT design, two 3-stage data centers are interconnected using Layer 2 switches (QFX10002-36Q), or any ISP switches that support Layer 2 switching cross-connect configuration, as shown in the diagram below. For QFX10002-36Q switches, licenses are needed for MPLS and L2-circuit. For more information on Layer 2 switching cross-connects, refer to the Juniper guide on Layer 2 circuit cross-connect (CCC) configuration. This JVD only briefly covers the configuration on these two switches, as it can vary between DCI implementations and is therefore outside the scope of this JVD.

For the sake of clarity, the two data centers are referred to as DC1 and DC2 as shown below in Figure 1.

Figure 1: OTT Design Connecting Two DCs

Before proceeding to configure DCI in Apstra, ensure the border leaf switches in both data centers are physically cabled to the Interconnect (ISP) switches as shown in Figure 1. To provision Data Center Interconnect using Apstra, follow these steps.

  1. Log on to the Apstra UI and navigate to the blueprint of the first 3-stage data center (hereinafter referred to as DC1). Configure the links to the Interconnect ISP switches as shown below for both border leaves. Ensure the cabling is also updated to reflect the interface connecting to the ISP switch on each of the border leaves. For more information on creating an external generic server, refer to the Apstra guide for adding links to an existing blueprint.
    Figure 3: Border Leaf 2 Connectivity with ISP Switch 2
  2. Create a routing policy to allow the loopback IPs of the remote data center's border leaf switches, in this case DC2. Navigate to Blueprint > Staged > Policies > Routing Policies and create or modify the relevant policy to permit importing the loopback IP routes of the DC2 border leaf switches.
    Figure 4: Add Import Routes for DC2 Border Leaf Switches
  3. Next, create connectivity templates to connect the border leaf switches in the current blueprint (DC1) to the remote data center blueprint (DC2). This step creates the underlay connectivity between the two data centers, which is VLAN tagged. The routing policy defined in the previous step is assigned for route import. An underlay eBGP session is also created between border leaf switch1 of both data centers. Similarly, configure the underlay connectivity and eBGP session between border leaf switch2 of both data centers. There should be two connectivity templates, one for each border leaf connection. See Figure 1 to review the connectivity.
    Figure 5: Border Leaf Switch1 Connectivity Template Screens
    Figure 6: Border Leaf Switch2 Connectivity Template Screens

    Once the connectivity templates are created, assign them to the border leaf switches as shown below. For more information on Apstra connectivity templates, refer to the Juniper Apstra guide.

    Figure 7: Assign Connectivity Template to Border Leaf Switch1
    Figure 8: Assign Connectivity Template to Border Leaf Switch 2
  4. After assigning the connectivity templates, navigate to Blueprint > Staged > Virtual > Routing Zones and click the default routing zone to allocate IPv4 and IPv6 addresses for the border leaf connectivity between the two data centers.
  5. To create the overlay connectivity, navigate to Blueprint > Staged > DCI, select Over the Top or External Gateway, and enter the details of the remote data center's border leaf switches. Carry out this step for both border leaf switches, as each uses a different Interconnect switch to connect to the remote data center border leaves. (An illustrative sketch of the resulting overlay eBGP session follows the verification figures at the end of this procedure.)
    Figure 10: Create Over the Top or External Gateway between DC1 Border Leaf1 and DC2 Border Leaf1
    Figure 11: Create Over the Top or External Gateway between DC1 Border Leaf2 and DC2 Border Leaf2
  6. Next, ensure the ASN and loopback IPs configured on the generic servers representing the ISP switches reflect those of the remote data center's border leaf switches, that is, DC2's border leaf ASN and loopback IPs, as shown below. Navigate to Blueprint > Staged > Physical > Topology, click the generic server representing ISP switch 1, then on the next screen navigate to Properties on the right-hand side and update the ASN and loopback IP. Repeat the same step for the generic server representing ISP switch 2.
    Figure 12: ASN and Loopback Update of Remote Data Center for eBGP
  7. Navigate to DC1’s Blueprint > Uncommitted and commit all the changes. Note that the connectivity will not be up at this point, as the ISP switches and the remote data center (DC2 in this instance) blueprint are not yet set up with DCI connectivity. This is covered in the next steps.
  8. Repeat all of the above steps for the remote data center, in this instance DC2. Then proceed to create the configuration on the ISP switches; see Configuring Interconnect ISP Switches.
    Note:

    For the purposes of this lab, the configuration on the ISP switches was applied manually.

  9. MACSEC configuration was also applied to the border leaf switches to encrypt traffic between the DC1 and DC2 data centers. Since Apstra does not natively support MACSEC, it was applied using a configlet; refer to Additional Configurations Applied for more information.
  10. Once the configuration on the ISP switches and the remote data center (DC2) is committed, the connectivity should be up and Apstra should show no anomalies related to the DCI connectivity. If Apstra shows anomalies for BGP, interfaces, and so on, analyze and troubleshoot them.
Figure 13: DC1 Border Leaf Switch1 BGP Established with DC2 Border Leaf Switch1
Figure 14: DC1 Border Leaf Switch2 BGP Established with DC2 Border Leaf Switch2
Figure 15: DC2 Border Leaf Switch1 BGP Established with DC1 Border Leaf Switch1
Figure 16: DC2 Border Leaf Switch2 BGP Established with DC1 Border Leaf Switch2
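
The figures above show the established overlay sessions. As an illustration only, the OTT overlay session rendered on a border leaf is a multihop EVPN eBGP session between loopbacks of the general form below; the group name, addresses, ASN, and TTL value are placeholders rather than the validated lab configuration.

# illustrative OTT overlay EVPN eBGP session from DC1 border leaf1 to DC2 border leaf1
set protocols bgp group <dci-overlay> type external
set protocols bgp group <dci-overlay> multihop ttl 30
set protocols bgp group <dci-overlay> local-address <dc1-bl1-loopback>
set protocols bgp group <dci-overlay> family evpn signaling
set protocols bgp group <dci-overlay> neighbor <dc2-bl1-loopback> peer-as <dc2-asn>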

Configuring Interconnect ISP Switches

The choice of DCI connectivity between data centers depends on factors such as latency, convergence times during link or node failures, transport type (Layer 2/Layer 3), and the hardware used to provide interconnectivity.

Note:

This JVD does not recommend a specific type of connectivity between the two data centers. For simplicity, interface-level switching (CCC) is used for the implementation and is covered as part of this JVD to document the lab setup. However, for production environments this could vary.

For the purposes of this lab, two QFX10002-36Q switches are used, and the border leaves connect using 10G links. MPLS and L2-circuit are used to configure the Layer 2 cross-connect. The following steps summarize the necessary configuration applied on one of the ISP switches to provide connectivity. Licenses for MPLS and L2-circuit were also applied. For more information on interface-switch cross-connects, refer to the Layer 2 cross-connect guide.

  1. The interfaces are configured with circuit cross-connect (CCC) encapsulation as ethernet-ccc, which applies to the whole physical interface [set interfaces <interface_name> encapsulation ethernet-ccc]. For the circuit to work, the logical interface (unit 0) must also be configured with family ccc [edit interfaces <interface_name> unit 0 family ccc] on the interfaces connecting to the border leaves in both data centers.
  2. In addition, the circuit cross-connect switch [edit protocols connections] is configured using the interfaces set up with CCC in step 1.
  3. Lastly, for Layer 2 switching cross-connects to work, the MPLS protocol must be enabled. A configuration sketch is shown below.
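
The following is a minimal sketch of this CCC configuration, assuming xe-0/0/1 faces the DC1 border leaf and xe-0/0/2 faces the DC2 border leaf; the interface names and the connection name are placeholders, not the validated lab configuration.

# CCC encapsulation on the two interfaces facing the border leaves
set interfaces xe-0/0/1 encapsulation ethernet-ccc
set interfaces xe-0/0/1 unit 0 family ccc
set interfaces xe-0/0/2 encapsulation ethernet-ccc
set interfaces xe-0/0/2 unit 0 family ccc
# MPLS must be enabled for the Layer 2 switching cross-connect
set protocols mpls interface xe-0/0/1.0
set protocols mpls interface xe-0/0/2.0
# interface-switch stitching the two CCC interfaces together
set protocols connections interface-switch DC1-BL1-TO-DC2-BL1 interface xe-0/0/1.0
set protocols connections interface-switch DC1-BL1-TO-DC2-BL1 interface xe-0/0/2.0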

EVPN-VXLAN Type 2 Seamless Stitching (Layer 2 only with MACSEC) Design

For the Type 2 seamless stitching DCI design, only a subset of VLANs/VNIs is stretched between sites. In this design, MACSEC is also used for encrypting traffic between the two data centers. For this design, a 3-stage data center and a collapsed fabric data center are interconnected using Layer 2 switches (QFX10002-36Q), as described in Over-the-Top Design (with MACSEC).

Note:

Since MACSEC is used for encrypting traffic, a valid MACSEC license is needed on the border leaves in both data centers. The MACSEC configuration is applied using configlets from Apstra. QFX5700 and QFX5120-48YM support MACSEC and have therefore been used for this DCI design.

Figure 17: Type 2 Seamless Stitching Connectivity between Data Centers

For the sake of clarity, the two data centers are referred to as DC1 and DC3, as shown in Figure 17. Ensure both data centers are physically cabled to the Interconnect ISP switches.

The configuration steps in this case are similar to the OTT design. Both the 3-stage (DC1) and collapsed fabric (DC3) blueprints should be up and running before proceeding. The steps for Type 2 DCI and for setting up MACSEC are as follows:

  1. Navigate to the blueprint of the first 3-stage data center (hereinafter referred to as DC1). Configure the links to the Interconnect ISP switches as shown below for both border leaves. Ensure the cabling is also updated. For more information on creating an external generic server, refer to the Apstra guide for adding links to an existing blueprint. Below is an example showing the connectivity of border leaf switch1. Repeat the same steps for border leaf switch2.
    Figure 18: Border Leaf Switch 1 Connectivity
  2. Create a routing policy to allow the loopback IPs of the remote data center's border leaf switches, in this case DC3. Navigate to Blueprint > Staged > Policies > Routing Policies and create or modify the relevant policy to permit importing the loopback IP routes of the DC3 border leaf switches.
    Figure 19: Routing Policy Allowing Routes Import from DC3
  3. Create connectivity templates to connect the border leaf switches in the current blueprint (DC1) to the remote data center blueprint (DC3). This step creates the underlay connectivity between the two data centers, which is VLAN tagged. The routing policy defined in the previous step is assigned for route import. An underlay eBGP session is also created between border leaf switch1 of both data centers. Similarly, configure the underlay connectivity and eBGP session between border leaf switch2 of both data centers. There should be two connectivity templates, one for each border leaf switch.
    Figure 20: Connectivity Template Created for Border Leaf1
    Figure 21: Connectivity Template Created for Border Leaf Switch2
  4. Once the connectivity templates are created, assign them to the border leaf switches as shown below. For more information on Apstra connectivity templates, refer to the Juniper Apstra guide.
    Figure 22: Border Leaf Switch1 Assigned to Connectivity Template
    Figure 23: Border Leaf Switch2 Assigned to Connectivity Template
  5. To create the overlay connectivity, navigate to Blueprint > Staged > DCI and select Integrated Interconnect to create the interconnect domain. This ensures the interconnect ESI is different in the two data centers (DC1 and DC3).
    Figure 24: Create Interconnect Domain

    Then click “Local and Remote Gateway” and fill in the remote gateway information, that is, the DC3 border leaf switch details such as ASN and loopback, as shown in Figure 25 and Figure 26.

    Figure 25: Local and Remote Gateway
    Figure 26: Create Remote and Local Gateway for Both Border Leaf Switches
  6. Before proceeding to associate the VXLAN VNIs to stretch across the DCI, navigate to Blueprint > Staged > Virtual Networks and create the virtual networks. Refer to the Apstra guide for creating virtual networks. The same virtual networks should be created in the remote data center blueprint for seamless stretching to work.

    Then navigate back to Blueprint > Staged > DCI > Integrated Interconnect, select the connection type, select the virtual networks listed, and enable Layer 2 (EVPN Type 2). If necessary, a translation VNI can also be configured. If a translation VNI is configured, the switch translates the VNI while forwarding traffic to a common VNI configured across the data centers. The border leaf switch translates the VNI only if the translated VNI is included in the interconnected VNI list, which Apstra adds under [edit routing-instances evpn-1 protocols evpn interconnect interconnected-vni-list], as shown in the rendered configuration of border leaf switch1 below.
    Figure 27: Border Leaf1 Switch Apstra Rendered Config Snippet
    Figure 28: Selecting Virtual Network to Stretch and Apply Translation VNI
  7. At this point, the data center interconnect configuration should be ready. If committed, connectivity between the two data centers is established, but the traffic between them is unencrypted. For this JVD design, MACSEC is set up between the two DCs using a configlet, as Apstra does not natively support MACSEC; refer to Additional Configurations Applied for applying MACSEC using a configlet.

  8. Next, ensure the ASN and loopback IPs configured on the generic servers representing the ISP switches reflect those of the remote data center's border leaf switches, that is, DC3's border leaf ASN and loopback IPs, as shown below. Navigate to Blueprint > Staged > Physical > Topology, click the generic server representing ISP switch 1, then on the next screen navigate to Properties on the right-hand side and update the ASN and loopback IP. Repeat the same step for the generic server representing ISP switch 2.
    Figure 29: Configuring ASN, Loopback of Remote Data Center Border Leaf Switch
  9. Navigate to DC1’s Blueprint > Uncommitted and commit all the changes. Note that the connectivity will not be up at this point, as the ISP switches and the remote data center (DC3 in this instance) blueprint are not yet set up with DCI connectivity. This is covered in the next steps.
  10. Repeat all of the above steps for the remote data center, in this instance DC3. Then proceed to create the configuration on the ISP switches as discussed in Configuring Interconnect ISP Switches.
  11. Once the configuration on the ISP switches and the remote data center (DC3) is committed, the connectivity should be up and Apstra should show no anomalies related to the DCI connectivity. If Apstra shows anomalies for BGP, cabling, interfaces, and so on, analyze and troubleshoot these issues.
Figure 30: Apstra showing BGP and Interfaces all Green for DC1 Border Leaf Switch 1
Figure 31: Apstra showing BGP and Interfaces all Green for DC1 Border Leaf Switch 2
Figure 32: BGP and Interfaces all Green on DC3 Leaf Switches

During validation of the VXLAN Type 2 stitching, it was noticed that Apstra omitted the DCI overlay EVPN BGP policy configuration on the collapsed fabric leaf switches that stops overlay routes from being re-advertised between the collapsed leaf switches. However, the equivalent policy was applied on the 3-stage fabric spine switches. The configuration below was therefore applied using a configlet on the collapsed fabric leaf switches, on top of the existing EVPN eBGP configuration.
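
The exact statements are not captured in this section. As an illustration only, a configlet of this kind might attach an export policy to the existing overlay eBGP group on the collapsed leaf switches so that only locally originated EVPN routes are advertised to the peer leaf; the policy and group names below are placeholders, not the validated configuration.

# illustrative policy: do not re-advertise BGP-learned overlay routes to the peer collapsed leaf
set policy-options policy-statement OVERLAY-NO-READVERTISE term BGP-LEARNED from protocol bgp
set policy-options policy-statement OVERLAY-NO-READVERTISE term BGP-LEARNED then reject
set policy-options policy-statement OVERLAY-NO-READVERTISE term LOCAL then accept
set protocols bgp group <collapsed-leaf-overlay-group> export OVERLAY-NO-READVERTISE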

EVPN-VXLAN Type 2 and Type 5 Seamless Stitching Design

This design and configuration are similar to the EVPN-VXLAN Type 2 Seamless Stitching (Layer 2 only with MACSEC) design, except that it involves both Type 2 and Type 5 stitching. While selecting the virtual networks for stretching, both Type 2 (Layer 2) and Type 5 (Layer 3) are enabled. When enabling Layer 3 for virtual networks to stretch across the data centers, the VRFs on the Layer-3 Policy tab in Apstra must also be enabled and a routing policy must be associated with each VRF. Refer to the Apstra guide for more information.

Figure 33: Selecting Virtual Network to Stretch Type 2 and Type 5
Figure 34: Configure Layer-3 Policy for VRF

By enabling Type 5 routes for the VRF, Apstra applies the configuration below to stitch the EVPN routes between the data centers. The same interconnect route target should be applied in the remote data center for seamless stitching to work.
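
The rendered statements are not reproduced in this section. As an illustration only, the Type 5 interconnect stitching rendered for a VRF is of the general form below; the routing instance name, VNI, route distinguisher, and interconnect route target are placeholders, not the validated configuration.

# illustrative Type 5 (IP prefix) stitching for one VRF routing instance
set routing-instances <vrf-name> protocols evpn ip-prefix-routes advertise direct-nexthop
set routing-instances <vrf-name> protocols evpn ip-prefix-routes vni <l3-vni>
set routing-instances <vrf-name> protocols evpn interconnect vrf-target <interconnect-route-target>
set routing-instances <vrf-name> protocols evpn interconnect route-distinguisher <interconnect-rd>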

Additional Configurations Applied

  1. MACSEC

    For this JVD design, MACSEC is set up between the two DCs and is configured using a configlet, as Apstra does not natively support MACSEC. For more information on setting up MACSEC, refer to the Day One Guide.

    Note:

    For some platforms, such as the QFX5700, logical interface (IFL) level MACSEC is unsupported. Therefore, the QFX5700 switches (border gateways) are configured with MACSEC at the physical interface (IFD) level.

    The property set for the OTT and Type 2 seamless stitching designs is imported into the blueprint using Blueprint > Catalog > Property Sets.

    Figure 35: Property Set for MACSEC Configlet

    The configlet for MACSEC uses the property set as shown in Figure 35. The same configlet is used for both the OTT and Type 2 seamless stitching designs.

    Figure 36: MACSEC Configlet in Apstra

    Below is the rendered configuration applied on both border leaf switch1 and border leaf switch2 in both data centers.
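
    The rendered output is not reproduced in this section. The following is a minimal sketch of a static-CAK MACSEC configuration of the kind such a configlet renders; the connectivity-association name, interface, and key values are placeholders, not the validated lab configuration.

    # illustrative static-CAK MACSEC applied at the physical interface (IFD) level
    set security macsec connectivity-association <ca-name> security-mode static-cak
    set security macsec connectivity-association <ca-name> pre-shared-key ckn <hex-ckn>
    set security macsec connectivity-association <ca-name> pre-shared-key cak <hex-cak>
    set security macsec interfaces <dci-interface> connectivity-association <ca-name>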

  2. EVPN Type 5 host-specific routes

    To advertise host-specific routes in the fabric, enable EVPN Type 5 routes in the Apstra fabric settings as shown below. For the DCI Type 2 and Type 5 seamless stitching, navigate to Blueprint > Staged > Fabric Settings. Enabling this setting increases the number of routes, depending on the number of hosts in the fabric. The default setting for EVPN Type 5 routes is disabled; if it is disabled, the routes shared are only the subnet prefixes configured on the virtual network IP subnets.

    Figure 37: Fabric Setting to Enable Host Specific IP Routes
  3. BFD for better convergence times during node failures

To improve convergence time during link and node failures, BFD was applied to the DCI overlay BGP session. Apstra does not apply BFD to the DCI overlay BGP session, so a configlet was used to set up BFD on it. Where the BGP session is defined through a connectivity template, BFD was applied using the connectivity template.
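
As an illustration only, a configlet of this kind adds BFD liveness detection to the DCI overlay BGP group; the group name and the timer values below are placeholders rather than the validated settings.

# illustrative BFD on the DCI overlay BGP group
set protocols bgp group <dci-overlay-group> bfd-liveness-detection minimum-interval 1000
set protocols bgp group <dci-overlay-group> bfd-liveness-detection multiplier 3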