
Configuration Walkthrough

This walkthrough summarizes the steps required to configure the 3-Stage Fabric with Juniper Apstra JVD. For more detailed step-by-step configuration information, refer to the Juniper Apstra User Guide. Additional guidance in this walkthrough is provided in the form of Notes.

This walkthrough details the configuration of the baseline design, as used during validation in the Juniper data center validation test lab. The baseline design consists of QFX5220-32CD switches in the spine role, QFX5130-32CD switches in the border leaf role, and QFX5120-48Y switches in the server leaf role. The goal of JVD is to provide options so that any of these switch platforms can be replaced with a validated switch platform for that role, as described in Table 1. In order to keep this walkthrough a manageable length, only the baseline design platforms will be used for the purposes of this document.

Apstra: Configure Apstra Server and Apstra ZTP Server

This document does not cover the installation of Apstra. For more information about installation, refer to the Juniper Apstra User Guide.

The first step is to configure the Apstra server. A configuration wizard launches when you connect to the Apstra server VM for the first time. At this point, passwords for the Apstra server and Apstra UI, as well as the network configuration, can be set.

Apstra: Management of Junos OS Devices

There are two methods of adding Juniper devices into Apstra: manually or in bulk using ZTP:

To add devices manually (recommended):

In the Apstra UI navigate to Devices > Agents > Create Offbox Agents.

This requires only a minimal configuration on the devices: at minimum, a root password and a management IP address.

To add devices through ZTP:

Refer to the Juniper Apstra User Guide for more information on using the Apstra ZTP server to perform ZTP of Juniper devices.

For this setup, a root password and management IPs were already configured on all switches prior to adding the devices to Apstra. To add switches to Apstra, first log into the Apstra Web UI, choose one of the methods of device addition described above, and provide the username and password preconfigured on those devices.

Note:

Apstra pulls the configuration from Juniper devices, called the pristine configuration. The Junos 'groups' stanza is ignored when the pristine configuration is imported, and Apstra does not validate any group configuration listed in the inheritance model; refer to Use Configuration Groups to Quickly Configure Devices. It is best practice to avoid setting loopbacks, interfaces (except the management interface), and routing instances (except the management instance). Apstra configures the LLDP and RSTP protocols when a device is successfully acknowledged.

Apstra Web UI: Create Agent Profile

For the purposes of this JVD lab, the root user and password are the same across all devices; hence, a single agent profile is created as shown below. Note that the agent profile also obscures the password, keeping it secure.

  1. Navigate to Devices > Agent Profiles.
  2. Click Create Agent Profile.
Figure 1: Create Agent Profile in Apstra

Apstra Web UI: Enter IP Address or IP Address Range for Bulk Discovery of Devices

An IP address range can be provided to bulk-add devices into Apstra.

  1. Navigate to Devices > Agents.
  2. Click Create Offbox Agents.
Figure 2: Create Offbox Agent

Apstra Web UI: Add Pristine Configuration and Upgrade Junos OS

From Devices > Managed Devices, add the pristine configuration by collecting it from the device or pushing it from Apstra. The configuration applied as part of the pristine configuration should be the base or minimal configuration required to reach the devices, with the addition of any users, static routes to the management switch, and so on. This creates a backup of the base configuration in Apstra and allows devices to be reverted to the pristine configuration in case of any issues.
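As a point of reference, a minimal base configuration of this kind might look like the following sketch. The management interface name, IP address, and next hop shown here are hypothetical placeholders and depend on the platform and management network:

  # hypothetical values: adjust the management interface, address, and gateway
  set system root-authentication plain-text-password
  set system services ssh
  set system services netconf ssh
  set interfaces em0 unit 0 family inet address 192.0.2.11/24
  set routing-options static route 0.0.0.0/0 next-hop 192.0.2.1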

Figure 3: Add Pristine Configuration
Note:

If the pristine configuration is updated using Apstra as shown in Figure 3 above, ensure that you run Revert to Pristine.

Important Note: A maintenance window is required to perform any device upgrade, as upgrades can be disruptive. Best practice recommendations for upgrades:

  • Upgrade devices using the Junos OS CLI as outlined in the Junos OS Software Installation and Upgrade Guide and the release notes for the target Junos version, as Apstra currently performs only basic upgrade checks. However, this JVD summarizes the upgrade steps in case Apstra is to be used for upgrades.
  • If a device has already been added to a blueprint, set the device to Undeploy, unassign its serial number from the blueprint, and commit the changes, which reverts the device to the pristine configuration. Then proceed with the upgrade. Once the upgrade is complete, add the device back to the blueprint.

Apstra allows the upgrade of devices; however, it performs only basic checks before issuing the upgrade command. To upgrade a device from Apstra, refer to the following figure.

Figure 4: Upgrade Device from Apstra

To register a Junos OS image on Apstra, either provide a link to the repository where all OS images are stored or upload the OS image as shown below. In the Apstra UI, navigate to Devices > OS Images and click Register OS Images.

Figure 5: Upload OS Image
Figure 6: Register OS Image by Uploading or Providing an Image URL

Apstra Fabric Provisioning

Check Discovered Devices and Acknowledge the Devices

Devices > Managed Devices

Once the offbox agent has been added and the device information has been collected, select all the devices using the checkboxes and then click Acknowledge. This places the switches under the management of the Apstra server.

Finally, ensure that the pristine configuration is collected once again, as Apstra adds the configuration for LLDP and RSTP.

Figure 7: Acknowledge Devices to Manage in Apstra

Once a switch is acknowledged, the status icon under the "Acknowledged?" table header changes from a red X to a green checkmark. Verify this change for all switches. If the icon does not change, repeat the procedure to acknowledge the switches again.

Figure 8: Devices Managed by Apstra
Note:

After a device is managed by Apstra, all configuration changes on that device should be performed using Apstra. Do not make configuration changes on devices outside of Apstra, as Apstra may revert those changes.

Apstra Web UI: Identify and Create Logical Devices, Interface Maps with Device Profiles

In the following steps, we define the 3-stage fabric based on the Juniper Apstra baseline architecture, following the ERB data center reference architecture and devices. Before provisioning a blueprint, a replica of the topology is created:

  • This involves selecting logical devices for spine, leaf, and border leaf switches. Logical devices are abstractions of physical devices that specify common device form factors such as the number, speed, and roles of ports. Vendor-specific information is not included, which permits building the network definition before selecting vendors and hardware device models. The Apstra software installation includes many predefined logical devices that can be used to create any variation of a logical device.
  • Logical devices are then mapped to device profiles using interface maps. The ports mapped on the interface maps match the device profile and the physical device connections. Again, the Apstra software installation includes many predefined interface maps and device profiles.
  • Finally, the racks and templates are defined using the configured logical devices and device profiles, which are then used to create a blueprint.

The Juniper Apstra User Guide explains the device lifecycle, which must be understood when working with Apstra blueprints and devices.

Note:

The 3-stage design provisioning steps use the Apstra Data Center Reference design.

Navigate to Design > Logical Devices, then review the devices listed based on the number of ports and speed of ports. Select the device that most closely resembles the device that should be added, then clone the logical device.

Note:

System-added (default) logical devices cannot be changed.

The following table shows the device roles, logical device types, ports, and connections created for the 3-Stage Fabric with Juniper Apstra JVD lab in this document. The Port Groups column depicts the minimum connections required for this lab; these may differ from the actual port groups these switches can provide.

Table 1: Logical Device Port Speeds and Connections for Each Fabric Device

Device Role | Port Group Connections¹ | Port Groups² | Connected To
Spine | Superspine/Spine/Leaf/Access/Generic | 5 x 100 Gbps (each spine) | 2 Border Leaf switches; 3 Server Leaf switches
Server Leaf (single) | Superspine/Spine/Leaf/Access/Generic | 2 x 100 Gbps; 5 x 10 Gbps | 2 Spines; 2 Servers (Generic)
Server Leaf switches (2 ESI leaf switches) | Superspine/Spine/Leaf/Access/Generic | 4 x 100 Gbps (both leaf switches); 5 x 10 Gbps | 2 Spines; 4 Servers (Generic)
Border Leaf switches | Superspine/Spine/Leaf/Access/Generic | 6 x 10 Gbps; 4 x 100 Gbps (both leaf switches) | 6 Servers; 2 Spines

¹ For port group connections, these can vary depending on the role and devices connected.

² For port groups, the number of ports can vary depending on connections and speed.

Device Profiles

For all devices covered in this document, the device profiles (defined in Apstra under Devices > Device Profiles) were matched exactly by Apstra when the devices were added, as covered in Apstra: Management of Junos OS Devices. During validation of the supported devices, there were instances where device profiles had to be custom-made to suit the line card setup on the device, for instance, on the QFX5700. For more information on device profiles, refer to the Apstra User Guide for Device Profiles.

Note:

The device profiles covered in this JVD document are not modular chassis-based. For modular chassis-based devices such as the QFX5700, linecard profiles and a chassis profile are available in Apstra and linked to the device profile. These cannot be edited; however, they can be cloned, and custom linecard, chassis, and device profiles can be created, as shown below in Figure 9 and Figure 10.

Figure 9: QFX5700 Device Profile Linked to Chassis Profile and Linecard Profile
Figure 10: QFX5700 Device Profile Linked to Linecard Profile

Spine Logical Device and Corresponding Interface Maps

The spine logical device is based on the QFX5220-32CD (Junos OS). For the purposes of this solution, seven 100G links are used to connect to leaf switches. As shown in Figure 11, 12 ports of 100 Gbps are enough for five spine-to-leaf connections.

Figure 11: Apstra Logical Device Spine Configuration

The spine logical device ports are mapped to the Device Profiles using the Interface map as shown below. The ports mapped on the interface maps match the device profile and the physical device connections.

Figure 12: Spine Interface Map

Server Leaf Switches Logical Device and Interface Maps

For the purposes of this JVD, there are three QFX5120-48Y server leaf switches: two form an ESI pair, and one is a non-ESI LAG switch. All three server leaf switches are connected to each spine using 100 Gbps interfaces, and their 10 Gbps interfaces connect to the generic servers.

For a single (non-redundant) leaf switch, no ESI is used, and only LACP (Active) is configured.

Figure 13: Apstra Single Leaf Logical Device

For ESI (redundant) leaf switches, ESI LAG is used for multihoming. ESI LAG is configured under the rack in Design > Rack Types.

Figure 14: Apstra Server Leaf Switches Logical Device

The server leaf logical device is mapped to the device profile as shown below.

Figure 15: Single Server Leaf Switches Interface Map
Figure 16: Server Leaf Switches Interface Map for ESI Leaf Switches
Note:

In this case, the single leaf and the ESI server leaf pair both have the same device profile, but due to differences in how the physical ports on the switches are connected towards the servers and the spines, two different logical devices were designed.

Border Leaf Switches Logical Device and Interface Maps

The border leaf logical device is a representation of the QFX5130-32CD switches used in this design. The physical cabling determines the ports allocated in the interface maps.

Figure 17: Border Leaf Switches Logical Device
Figure 18: Border Leaf Switches Interface Map

The remaining logical devices are described below. Their interface maps are optional and can be omitted.

Generic Servers Logical Device

Generic servers define the network interface connections from the servers connected to the leaf switches (border and single).

Logical devices for the servers used are already pre-defined within Apstra. A similar generic system can be used for DCI; however, DCI will be covered in a separate JVD Extension document.

External Routers

External routers are connected to the border leaf switches.

Apstra does not manage external routers such as MX Series devices; hence, the MX Series router is classified as an external generic server with the relevant port and speed configuration.

Note:

A generic external system is added to the blueprint after the blueprint is created. An interface map is not needed for generic servers or external routers. The connectivity and features of external routers are beyond the scope of this document.

Apstra Web UI: Racks, Templates, and Blueprints—Create Racks

After defining the logical devices and Interface maps, the next step is to create racks to place the logical devices in rack formation. The default design for this solution is two spines, five server leaf switches, and two border leaf switches. Any rack design can be created and used any number of times, so long as the spine switches have enough ports to support it.

In Apstra, create racks under Design > Rack Types. For this solution, there are four racks: one rack for border leaf switches and three racks for server leaf switches. For more information on creating racks, refer to the Juniper Apstra User Guide.

For this design, the L3 Clos rack structure is as follows:

Server Leaf Switch (Single Leaf)

Figure 19: Single Leaf Rack Without ESI

Server Leaf Switches (Two Leaf Switches)

Figure 20: Server Leaf Switches with ESI LAG for Multihoming

Border Leaf Switches

Figure 21: Border Leaf Switches Rack
Note:

Once the blueprint is created and functional, if you need to make any changes to the racks, follow this KB article: https://supportportal.juniper.net/s/article/Juniper-Apstra-How-to-change-Leaf-Access-Switch-of-existing-rack-after-Day2-operations?language=en_US. During validation, the border leaf rack was modified to validate all devices listed in Table 5.

Create Templates

Templates define the structure and the intent of the network. After creating the racks, the spine links need to be connected to each of the racks. In this design, the rack-based templates are used to define the racks to connect as top-of-rack (ToR) switches (or pairs of ToR switches).

As described in the spine logical devices section, there are 100G links assigned to each server leaf and border leaf. The spine logical device is assigned in the template. Since there are no super spines in this design, this is left out of the templates. For more information on templates, refer to the Juniper Apstra User Guide.

Note:

Templates are used as a base for creating blueprints, which are covered in the next section. A template is used only once in the lifetime of a blueprint; hence, changing the template doesn't modify the blueprint.

Figure 23: Rack-Based Template Structure

Blueprint

Each blueprint represents a data center. Templates are created under Design > Templates and become available in the global catalog for blueprints. Once the template is defined, it can be used to create a blueprint for the data center.

To create a blueprint, click on Blueprints > Create Blueprint. For more information on creating the blueprint, see the Juniper Apstra User Guide.

Figure 24: Create Blueprint with Dual Stack

Navigate to Blueprint > Staged. The topology shown can be expanded to view all connections. From here, the blueprint can be provisioned under Staged.

Figure 25: Blueprint Created and Not Provisioned

As shown above, the blueprint is created but not provisioned. The topology can be inspected for discrepancies; if any are found, the blueprint can be recreated after fixing the template or the rack. Alternatively, navigate to Staged > Racks to edit the rack by following the steps in the KB article mentioned above.

Apstra Web UI: Provisioning and Defining the Network

Once the blueprint is created, it is ready to be staged. Review the tabs under the newly created blueprint.

To start provisioning, click on the Staged tab > Physical and then click Build from the right-hand side panel. For more information, refer to the Juniper Apstra User Guide.

Figure 26: Blueprint Assign Resources Under Build

Assign Resources

The first step is assigning the resources created in the Resources section. For this design, the following resource values are used:

  1. Click Staged > Physical > Build > Resources and update as below:
    1. DC1 ASNs—Spines & leaf switches: 64512 - 64999
    2. Loopback IPs—Spines & Leaf switches: 192.168.255.0/24
    3. Link IPs—Spines <> Leaf switches: MUST-FABRIC-Interface-IPs DC1-10.0.1.0/24
Figure 28: Resources Assigned

Assign Interface Maps to Switches

From the blueprint, navigate to Staged > Physical > Build > Device Profiles.

Next, assign devices to interface maps created in the section Apstra Web UI: Identify and Create Logical Devices, Interface Maps with Device Profiles of this document.

Figure 29: Blueprint Assign Interface Maps in Device Profiles Under Build
Figure 30: Interface Maps Assigned
Note:

The assignment of interface maps to generic systems or servers is optional. The status of these parameters is shown in red, and they are also marked as optional.

Assign the System IDs and the Correct Management IPs

From the blueprint, navigate to Staged > Physical > Build > Devices and click Assigned System IDs. The system IDs are the devices' serial numbers.

Figure 31: Blueprint Staged Assign System IDs Under Build
Note:

The device hostname and the display name (in Apstra) for each node or device are different; these can be changed using Apstra.

No system IDs are assigned to generic servers and external routers, as these are not managed by Apstra.

Ensure all the devices are added to Apstra under Devices > Managed Devices before assigning system IDs (serial numbers of the devices).

Review Cabling

Apstra automatically assigns ports for the cabling between devices, which may not match the physical cabling. However, the cabling assigned by Apstra can be overridden and changed to reflect the actual cabling. This can be achieved by accessing the blueprint, navigating to Staged > Physical > Links, and clicking the Edit Cabling Map button. For more information, refer to the Juniper Apstra User Guide.

Figure 32: Review and Edit Cabling

It is best practice to review the switch names, including the generic servers, to ensure the naming is consistent. To review and modify device names, navigate to Staged > Physical > Nodes and click the name of any listed device. This presents a screen with the topology and connections to the device, along with a panel on the right showing the device properties, tags, and so on, as shown in Figure 33.

Figure 33: Review Device Links and Properties

Configlet and Property Sets

Configlets are configuration templates defined in the global catalog under Design > Configlets. Configlets are not managed by Apstra's intent-based functionality and must be managed manually. For more information on when not to use configlets, refer to the Juniper Apstra User Guide. Configlets should not be used to replace reference design configurations. A configlet can be declared as a Jinja template of a configuration snippet, in either Junos JSON style or Junos set-based style. For more information on designing a configlet, refer to the Apstra Configlets user guide.
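For illustration only, a hypothetical set-based configlet that configures NTP servers from property set values might look like the following sketch; the variable names are assumptions, not part of the reference design:

  # hypothetical configlet body (Junos set style, Jinja variables from a property set)
  set system ntp server {{ ntp_server_primary }}
  set system ntp server {{ ntp_server_backup }}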

Note:

Improperly configured configlets may not raise warnings or restrictions. It is recommended that configlets be tested and validated on a separate, dedicated device to ensure that the configlet performs exactly as intended. Passwords and other secret keys are not encrypted in configlets.

Property sets are data sets that define device properties. They work in conjunction with configlets and analytics probes. Property sets are defined in the global catalog under Design > Property Sets.
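As a hypothetical example to pair with the configlet sketch above, a property set supplying those values could be defined as simple key-value pairs (the server addresses are placeholders):

  # hypothetical property set values
  ntp_server_primary: 172.16.10.1
  ntp_server_backup: 172.16.10.2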

Note:

Configuration templates in Freeform blueprints also use property sets, but they're not related to property sets in the design catalog.

Configlets and property sets defined in the global catalog need to be imported into the required blueprint. If a configlet is modified, it must be reimported into the blueprint; the same applies to property sets. The following figures show configlets and property sets in a blueprint.

Figure 34: Import Configlet into Blueprint
Figure 35: Import Property Set into Blueprint

During 3-stage validation, several configlets were applied as part of the general configuration for setup and management purposes (such as nameservers, NTP, and so on).

Fabric Settings

  • Fabric policy

This option allows fabric-wide settings of various parameters such as MTU, IPv6 application support, and route options. For this JVD, the following parameters were used. View and modify these settings under Staged > Fabric Settings > Fabric Policy within the blueprint in the Apstra UI.

Figure 36: Fabric Policy Settings
  1. To simulate moderate traffic in the data center, traffic scale testing was performed; refer to Table 6 for more details. The scale testing was performed on QFX5120-48Y switches.

    The setting 'Junos EVPN Next-hop and Interface count maximums' was also enabled, which allows Apstra to apply configuration that raises the maximum number of allowed EVPN overlay next-hops and physical interfaces on the leaf switches to values appropriate for the data center fabric. Along with this, a configlet is used to set a balanced memory allocation for Layer 2 and Layer 3 entries, as shown in Figure 37.

For more information on these features, refer to:

For QFX5120 leaf switches configuration:

Figure 37: Configlet on Leaf Switches for Balanced Memory
  1. For the non-EVO leaf switches, the setting 'Junos EVPN routing instance mode' was also enabled; this is the default setting Apstra applies to all new blueprints from Apstra 4.2 onward. For any blueprint created prior to Apstra 4.2, switching non-EVO devices from the default mode is allowed after the Apstra upgrade. It is recommended to use MAC-VRF to normalize the configuration in a mixed setup of Junos OS and Junos OS Evolved. A VLAN-aware MAC-VRF routing instance 'evpn-1' is created only for non-EVO Junos devices. This option doesn't affect EVO devices, as Junos OS Evolved supports only MAC-VRF and already implements it by default.
Note:

If the blueprint is live and running in a production network, it is recommended to perform the above change to MAC-VRF routing instance mode during a maintenance window, as it is disruptive and requires a reboot of non-EVO Junos leaf switches, in this case the QFX5120s.


Anomalies for "Device reboot required" are raised for non-EVO leaf switches when MAC-VRF routing instance mode is enabled. To clear these anomalies, reboot the affected leaf switches from the CLI.
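For example, from the Junos CLI of each affected switch:

  request system reboot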

Figure 38: Anomalies Raised by Apstra for QFX5120 Device Reboot After Change to MAC-VRF

Commit the Configuration

Once the cabling has been verified, the fabric is ready to be committed. Committing means that the control plane is set up and all the leaf switches are able to advertise routes via BGP. Review changes and commit by navigating to Blueprint > <Blueprint-name> > Uncommitted.

Apstra 4.2 introduces a commit check that can be run before committing; it checks for semantic errors or omissions, especially when configlets are involved.

Note that any build errors need to be fixed; Apstra will not commit changes until the errors are resolved.

For more information, refer to the Juniper Apstra User Guide.

Figure 39: Blueprint Committed

Apstra Fabric Configuration Verification

After reviewing the changes and committing them to the devices, a functional fabric should be created.

Figure 40: Blueprint Nodes Deployed and IPv4 and IPv6 Loopbacks Assigned by Apstra

The blueprint for the data center should indicate that no anomalies are present, showing that everything is working. To view anomalies related to blueprint deployment, navigate to Blueprint > <Blueprint-name> > Active; anomalies are raised for BGP, cabling, interface down events, missing routes, and so on. For more information, refer to the Apstra User Guide.

Figure 41: Blueprint Deployed Shows the Active Tab with No Anomalies
Figure 42: Data Center Blueprint Summary

To verify that the fabric is functional and the changes are configured, log in to the console or CLI of each of the spine switches and enter the following Junos OS CLI command:
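  show bgp summary | no-more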

The output of this command should resemble the output below, showing that BGP is established from each spine to each of the seven leaf switches for both the loopback and fabric link IPs.
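The original screenshot is not reproduced here. As a rough illustration only, with hypothetical peer addresses and ASNs drawn from the pools assigned earlier, a healthy spine shows one established session per leaf in both the underlay (fabric link IPs) and the overlay (loopback IPs):

  Groups: 2 Peers: 14 Down peers: 0
  Peer             AS      InPkt   OutPkt   OutQ   Flaps  Last Up/Dwn  State|#Active/Received/Accepted/Damped...
  10.0.1.1         64514   15021   15019    0      0      2w3d 4:10:11 Establ
    inet.0: 5/5/5/0
  192.168.255.4    64514   14890   14902    0      0      2w3d 4:10:07 Establ
    bgp.evpn.0: 120/120/120/0
  ...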

If the output of the show bgp summary | no-more command resembles the output above, a bare-bones network fabric is now complete. However, it is not yet ready for production use, as the overlay network with VRFs, VLANs, and VNIs still must be applied.

If the output of the show bgp summary | no-more command does not resemble the example, it is essential to remedy any configuration errors before proceeding further.

Configure Overlay Network

Configure Routing Zone (VRF) for Red and Blue Tenants, and Specify a Virtual Network Identifier (VNI)

  1. From Blueprints > Staged > Virtual > Routing Zones.
  2. Click Create Routing Zone and provide the following information:
    1. VRF Name: blue
    2. VLAN ID: 3
    3. VNI: 20002
    4. Routing Policies: Default immutable
  3. Create another routing zone with the following information:
    1. VRF Name: red
    2. VLAN ID: 2
    3. VNI: 20001
    4. Routing Policies: Default immutable
Figure 43: Red and Blue Routing Zones

Assign EVPN Loopback to Routing Zones

After creating the routing zones, assign the EVPN loopback pool below to both the Red and Blue routing zones. Navigate to Blueprint > Staged > Virtual > Routing Zones and assign resources from the right-hand side panel.

Resources | Range
MUST-EVPN-Loopbacks-DC1 | 192.168.11.0/24
Figure 44: Red and Blue Loopback Assigned

Create Virtual Networks in Red and Blue Routing Zones

Virtual networks must be associated with routing zones (VRFs). Create the virtual networks (VNIs) and associate them with the routing zones (VRFs) created earlier. Optionally, create any additional routing zones and virtual networks for production environments based on individual requirements.

Below are the networks created and assigned to appropriate leaf switches in the fabric. The input fields are as follows:

For Blue Network:

  1. Click Create Virtual Networks.
  2. Set type of network VXLAN.
  3. Provide name: dc1_vn1_blue and dc1_vn2_blue.
  4. Select the Blue routing zone for both networks.
  5. Provide VNI:
    1. 12001 for dc1_vn1_blue.
    2. 12002 for dc1_vn2_blue.
  6. IPv4 Connectivity – set enabled.
  7. Create Connectivity Template for: Tagged.
  8. Provide IPv4 Subnet and Virtual IP Gateway:
    1. 10.12.1.0/24, 10.12.1.1 for dc1_vn1_blue
    2. 10.12.2.0/24, 10.12.2.1 for dc1_vn2_blue
  9. Assign to leaf switches.

For Red Network:

  1. Click Create Virtual Networks.
  2. Set type of network VXLAN.
  3. Provide name: dc1_vn1_red and dc1_vn2_red.
  4. Select the Red routing zone for both networks.
  5. Provide VNI:
    1. 11001 for dc1_vn1_red
    2. 11002 for dc1_vn2_red
  6. IPv4 Connectivity – set enabled.
  7. Create Connectivity Template for: Tagged.
  8. Provide IPv4 Subnet and Virtual IP Gateway:
    1. 10.11.1.0/24, 10.11.1.1 for dc1_vn1_red
    2. 10.11.2.0/24, 10.11.2.1 for dc1_vn2_red
  9. Assign to leaf switches.
Figure 45: Virtual Networks Created

An IRB network is created, and a connectivity template is added and assigned to leaf switches as shown in Figure 47. For more information on connectivity templates, see the Juniper Apstra User Guide.

While creating a virtual network, if Create Connectivity Template is set to Tagged, Apstra automatically generates a connectivity template for the virtual network.

Navigate to Blueprint > Staged > Connectivity Templates to view the templates and assign them to leaf switches. When assigned to leaf switches, a tagged aggregated Ethernet interface is created to connect the servers.

Figure 46: Apstra-Generated Connectivity Templates
Figure 47: Assign Connectivity Template for Each Network to Leaf Switches

Then, navigate to Blueprint > Uncommitted to review the uncommitted changes and commit the overlay configuration. Additionally, review the configuration generated for each leaf switch on which the overlay network is created by navigating to Blueprint > Staged > Physical > Nodes.

Verify Overlay Connectivity for Blue and Red Network

Once the changes are committed in the Apstra UI, they are applied to the switches.

To begin verifying the fabric’s configuration, log in to the console of each of the leaf switches.

From the CLI of the leaf switches, enter the following commands:
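The exact commands are not reproduced in this walkthrough. Commands along the following lines (standard Junos CLI, offered here as a suggestion) display the IRB interfaces and the configured routing instances:

  show interfaces terse irb
  show route instance summary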

This output displays multiple IRB interfaces and the configured routing instances for the Blue and Red networks.

Red Network IRB on one of the leaf switches:
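As an illustrative sketch only (the IRB unit numbers and per-leaf addresses are hypothetical; the virtual-gateway addresses follow the Red virtual networks defined earlier), the rendered configuration takes roughly this form:

  # hypothetical unit numbers and per-leaf addresses
  set interfaces irb unit 11001 family inet address 10.11.1.2/24 virtual-gateway-address 10.11.1.1
  set interfaces irb unit 11002 family inet address 10.11.2.2/24 virtual-gateway-address 10.11.2.1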

Blue Network IRB on one of the leaf switches:
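And correspondingly for the Blue network (again with hypothetical unit numbers and per-leaf addresses):

  # hypothetical unit numbers and per-leaf addresses
  set interfaces irb unit 12001 family inet address 10.12.1.2/24 virtual-gateway-address 10.12.1.1
  set interfaces irb unit 12002 family inet address 10.12.2.2/24 virtual-gateway-address 10.12.2.1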

Since Apstra now uses MAC-VRF routing-instance mode by default, this can be seen in the following command output for all the Red and Blue network VLANs.
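The exact output is not reproduced here; the standard VLAN view lists the MAC-VRF routing instance alongside each VLAN, for example:

  show vlans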

Verify that ERB is Configured on Leaf Switches

Within the CLI of the leaf switches, enter the following commands:
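One way to display the gateways (a suggestion; the validated lab may have used a different command) is to filter the IRB configuration for the virtual-gateway addresses:

  show configuration interfaces irb | display set | match virtual-gateway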

The output of this command displays the distributed gateways on all switches.

The gateways displayed are 10.11.1.1 and 10.11.2.1 for the Red network, and 10.12.1.1 and 10.12.2.1 for the Blue network. These IRB configurations apply only to devices assigned in the connectivity templates; no other fabric switch has these IRBs configured unless assigned via a connectivity template.

Verify the Leaf Switch Routing Table

Within the CLI of the leaf switches, enter the following commands:
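Assuming the routing-instance names follow the routing zone names (red and blue), commands of this form display each VRF's routes:

  show route table red.inet.0
  show route table blue.inet.0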

The output of this command displays the routes of the Red network VRF on one of the leaf switches.

The output of this command displays the routes of the Blue network VRF on one of the leaf switches.

The following command shows the overlay on the ESI leaf switches. It shows that the remote leaf VNIs are exchanged between the ESI leaf switches.
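A command along these lines (a suggestion; the original output is not reproduced) lists the EVPN routes, including the type-3 routes that advertise the remote VNIs:

  show route table bgp.evpn.0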

Configure External Router and Inter-VRF Routing

For this JVD, an MX204 router is used as an external router to perform external routing and inter-VRF route leaking between the Red and Blue networks. Configuring an external router is similar to adding a generic server. The MX204 router is connected to the border leaf switches, which act as the external gateway to the data center fabric.

To add the MX router as an external router, navigate in the Apstra UI to Blueprint > Staged > Topology, click on the border leaf switch, and add an external generic system and its connections, as shown in Figure 48.

In the following graphic, select the interface on border leaf 1 and the MX204 device and its interface, and click Add Link.

Figure 48: Adding MX204 as External Generic System

Next, navigate to Staged > Policies > Routing Policies and create an external routing policy to export routes to the external router. This policy is then applied in the connectivity templates to allow Red and Blue network routes to be exported, as covered in the next steps.

Figure 49: External Router Policy

Next, navigate to the connectivity templates in the blueprint and add the following connectivity templates for IP links, BGP peering, and the routing policy toward the MX204 (external router). In this JVD, the Red and Blue networks are routed towards the MX204, where inter-VRF routing is performed. VLAN 299 is used for the Red network and VLAN 399 for the Blue network.

Figure 50: IP Links for Red and Blue VRF
Figure 51: BGP Peering to MX for Red and Blue VRF
Figure 52: Routing Policy for Red and Blue VRF

Then navigate to Staged > Virtual > Routing Zones, click the Red VRF network, and scroll down to add IP interface links from both border leaf switches. The same is performed for the Blue VRF network.

Figure 53: Adding IP Interface Links for Red Network
Figure 54: Adding IP Interface Links for Blue Network

Commit the blueprint to push the configuration to the two border leaf switches. Note that the external router needs to be configured manually, as Apstra does not manage the MX204. On the MX204 router, the interfaces are configured using the IPs shown above in Figure 53 and Figure 54.

MX204 configuration snippet for the Red and Blue networks:
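The original snippet is not reproduced here. As a hypothetical sketch with placeholder interface names, IP addresses, and ASNs (the real values come from the connectivity templates in Figures 50 and 51), the MX-side configuration takes roughly this shape:

  # hypothetical interface, addresses, and peer AS; VLANs 299/399 per the connectivity templates
  set interfaces et-0/0/0 vlan-tagging
  set interfaces et-0/0/0 unit 299 vlan-id 299
  set interfaces et-0/0/0 unit 299 family inet address 10.99.0.1/31
  set interfaces et-0/0/0 unit 399 vlan-id 399
  set interfaces et-0/0/0 unit 399 family inet address 10.99.1.1/31
  set routing-instances red instance-type virtual-router
  set routing-instances red interface et-0/0/0.299
  set routing-instances red protocols bgp group to-border-red neighbor 10.99.0.0 peer-as 64512
  set routing-instances blue instance-type virtual-router
  set routing-instances blue interface et-0/0/0.399
  set routing-instances blue protocols bgp group to-border-blue neighbor 10.99.1.0 peer-as 64512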

For inter-VRF routing, a policy is configured on the MX as below to enable inter-VRF routing between the Red and Blue VRF networks. Both VRFs are configured on the border leaf switches to BGP-peer with the MX204 (external router), and the MX204 uses a BGP routing policy to exchange inter-VRF routes.

Note:

Apstra can also configure inter-VRF routing between the Red and Blue networks without needing an external router; refer to the Apstra guide for more information. It is recommended that any changes to these settings be thoroughly tested. For this JVD, the "Route Target Overlaps Allow internal route-target policies" setting was not used. If this setting is set to 'No Warning', then each of the routing zones, such as Red and Blue, can be changed to allow route target exchange using import and export route-target policies within Apstra.

MX204 configuration snippet for inter-VRF:
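Again as a hypothetical sketch: one common Junos pattern for this kind of leaking uses instance-import policies between the two virtual routers defined above (the validated setup, per the text, used a BGP routing policy; this is one possible equivalent):

  # hypothetical policy names; leaks routes between the red and blue instances
  set policy-options policy-statement LEAK-RED-TO-BLUE term 1 from instance red
  set policy-options policy-statement LEAK-RED-TO-BLUE term 1 then accept
  set policy-options policy-statement LEAK-BLUE-TO-RED term 1 from instance blue
  set policy-options policy-statement LEAK-BLUE-TO-RED term 1 then accept
  set routing-instances blue routing-options instance-import LEAK-RED-TO-BLUE
  set routing-instances red routing-options instance-import LEAK-BLUE-TO-RED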

Apstra UI: Blueprint Dashboard, Analytics, Probes, and Anomalies

The managed switches generate vast amounts of data about switch and network health. To analyze this data in the context of the data center network, Apstra uses Intent-Based Analytics (IBA), which combines the intent from the graph model with switch-generated data to provide a data center network view on the Apstra dashboard.

Note:

Apstra uses a graph model to represent data center infrastructure, policies, and so on. All information about the network is modeled as nodes and the relationships between them. The graph model can be queried for data and used for analysis and automation. For more information on the Apstra graph model and queries, refer to the Apstra User Guide.

Analytics Dashboard, Anomalies, Probes and Reports

Apstra also provides predefined dashboards that collect data from devices. With the help of IBA probes, Apstra combines intent with data to provide real-time insight into the network, which can be inspected using the Apstra GUI or REST API. The IBA probes can be configured to raise anomalies based on thresholds. It is recommended to analyze the amount of data generated by probes to ensure that the Apstra server's disk space can accommodate IBA operation. Disk usage can be reduced by adjusting the log rotation settings.

Apstra allows the creation of custom dashboards; refer to the Apstra User Guide for more information. From the blueprint, navigate to Analytics > Dashboards to view the analytics dashboard.

Figure 55: Analytics Dashboard

The analytics dashboard displays the health status of all devices. In case of anomalies, click the Anomalies tab to view them. The blueprint Anomalies tab displays a "No Anomalies!" message when no anomalies are detected by the IBA probes. For more information, refer to the Apstra User Guide.

Figure 56: Blueprint Anomalies

To view the configured probes, navigate to Blueprint > Analytics > Probes. Here, probes can be edited, cloned, or deleted. For instance, if a probe anomaly needs to be suppressed, this can be done by editing the probe.

Figure 57: Apstra Predefined Probes

To raise or suppress an anomaly, check or clear the Raise Anomaly check box.

Figure 58: Configure Probe Anomaly

To generate reports, navigate to Blueprints > Analytics > Reports. Here, reports can be downloaded to analyze health, device traffic, and so on.

Figure 59: Generate Health Report

Root Cause Identification (RCI) is a technology integrated into Apstra software that automatically determines the root causes of complex network issues. RCI leverages the Apstra datastore for real-time network status and automatically correlates telemetry with each active blueprint's intent. Root cause use cases include link down, link miscabled, interface down, link disconnect, and so on.

Figure 60: Enable Root Cause Analysis
Figure 61: Root Cause Enabled for Connectivity