
Example: Enabling Junos Fusion Enterprise on an Enterprise Campus Network

 

This network configuration example illustrates how to:

  • Configure a complex Junos Fusion Enterprise for an Enterprise network with satellite devices that provide access interfaces at multiple sites and branch offices.

  • Configure commonly used features for the Enterprise network—VLANs, Power over Ethernet, and LLDP—in the Junos Fusion Enterprise.


Requirements

This example uses the following hardware and software components for the Junos Fusion Enterprise:

  • Two EX9208 switches as aggregation devices, each running Junos OS Release 17.2R1.

  • Ten satellite devices at three sites, all running satellite software version 3.1R1.

    • Building 1: satellite device cluster with six member satellite devices.

    • Building 2: satellite device cluster with three member satellite devices.

    • Branch office 1: a single standalone satellite device.

  • One EX3300 switch in branch office 2, running Junos OS Release 12.3R2 and connected to one of the EX9208 aggregation device switches without participating in the Junos Fusion Enterprise topology.

Overview and Topology

In this example, a Junos Fusion Enterprise provides access ports for a campus network that includes two main buildings—one with six switches and the other with three—and a small branch office location that supports a single switch. All switches in the Junos Fusion Enterprise are dual-homed to two EX9208 switches acting as aggregation devices.

The enterprise network topology includes a second branch office that uses an EX3300 switch that isn’t participating in the Junos Fusion Enterprise—the EX3300 switch runs Junos OS software, not satellite software—to connect its users to the network. The EX3300 switch supports a limited number of users in branch office 2, and connects to aggregation device 2 only to simplify the network and save on cabling costs. The EX3300 switch can connect to this EX9200 switch because EX9200 switches are able to simultaneously provide ports to devices that are and are not part of a Junos Fusion.

The Junos Fusion Enterprise topology provides redundancy through a dual aggregation device topology in which the two aggregation devices are in separate off-campus buildings. Satellite device clusters are used for the satellite devices in the two main campus buildings to minimize the cabling requirements and costs of the connections between aggregation devices and satellite devices.

The EX4300-48P and EX4300-48T switches in building 1, building 2, and branch office 1 provide access interfaces for users in those buildings. The topology includes an IRB interface on both aggregation devices that provides Layer 3 connectivity. Other commonly used campus networking features—VLANs, 802.1X, Power over Ethernet, LLDP, and other features—are enabled in the campus network that the Junos Fusion Enterprise is supporting.

Figure 1 shows a high-level overview of the topology used in this example.

Figure 1: Enterprise Network Topology

Both aggregation devices in this topology are EX9208 switches with an EX9200-6QS (6-port 40-Gigabit Ethernet QSFP+, 24-port 10-Gigabit Ethernet SFP+) line card installed in slot 0. The aggregation devices are interconnected using an interchassis link (ICL) and a dedicated ICCP link. Both aggregation devices are running Junos OS Release 17.2R1.

Table 1: Aggregation Devices

ad1-ex9208

  • Switch model: EX9208

  • Line cards: Slot 0: EX9200-6QS (6-port 40-Gigabit Ethernet QSFP+, 24-port 10-Gigabit Ethernet SFP+)

  • Cascade ports: et-0/2/0: FPC 102 (building1 cluster); et-0/2/1: FPC 105 (building1 cluster); et-0/2/2: FPC 112 (building2 cluster); xe-0/0/0: FPC 121

  • Interchassis link (ICL) ports: et-0/3/1 and et-0/3/2 to aggregation device 2 (member links in ae100)

  • ICCP link port: et-0/3/0 to aggregation device 2

  • Junos OS version: 17.2R1

ad2-ex9208

  • Switch model: EX9208

  • Line cards: Slot 0: EX9200-6QS (6-port 40-Gigabit Ethernet QSFP+, 24-port 10-Gigabit Ethernet SFP+)

  • Cascade ports: et-0/2/0: FPC 102 (building1 cluster); et-0/2/1: FPC 105 (building1 cluster); et-0/2/2: FPC 112 (building2 cluster); xe-0/0/0: FPC 121

  • Interchassis link (ICL) ports: et-0/3/1 and et-0/3/2 to aggregation device 1 (member links in ae100)

  • ICCP link port: et-0/3/0 to aggregation device 1

  • Junos OS version: 17.2R1

The Junos Fusion Enterprise topology includes three sites—building 1, building 2, and branch office 1—with satellite devices that provide access ports to end users. Building 1 includes six satellite devices interconnected into a satellite device cluster. Two satellite devices in the cluster—FPC ID 102 and FPC ID 105—use 40-Gbps QSFP+ connections to connect to each aggregation device. Building 2 includes three satellite devices interconnected into a satellite device cluster that connects to the aggregation devices through FPC ID 112. Branch office 1 includes one satellite device—FPC ID 121—configured as a standalone satellite device.

The satellite device connections are presented in Figure 2 and Figure 3. Figure 3 also shows the links between the EX3300 switch running Junos OS in branch office 2 and the EX9208 switch acting as aggregation device 2, although the EX3300 switch is not a satellite device in the Junos Fusion Enterprise.

Figure 2: Satellite Devices to Aggregation Device 1 Connections
Figure 3: Satellite Devices to Aggregation Device 2 Connections

The satellite devices in the Junos Fusion topology are all in building 1, building 2, or branch office 1. All satellite devices in the topology are EX4300-48P or EX4300-48T switches running satellite software version 3.1R1. In building 1, all satellite devices in the satellite device cluster pass traffic to the aggregation devices through FPC 102 or FPC 105, which are the two switches with uplink port connections to the aggregation devices. In building 2, all traffic is passed to the aggregation devices through FPC 112, which is the only device in the cluster with uplink port connections to the aggregation devices. FPC 121 is a standalone satellite device that provides access ports in branch office 1.

Table 2: Satellite Devices

Satellite Device Cluster: building1

FPC 101 (EX4300-48P, system ID 00:00:5E:00:53:A1)

  • Uplink ports: none

  • Clustering ports: xe-0/1/2: FPC 106; xe-0/1/3: FPC 102

  • Satellite software version: 3.1R1

FPC 102 (EX4300-48P, system ID 00:00:5E:00:53:A2)

  • Uplink ports: et-0/1/0: Aggregation Device 1; et-0/1/1: Aggregation Device 2

  • Clustering ports: xe-0/1/2: FPC 101; xe-0/1/3: FPC 103

  • Satellite software version: 3.1R1

FPC 103 (EX4300-48P, system ID 00:00:5E:00:53:A3)

  • Uplink ports: none

  • Clustering ports: xe-0/1/2: FPC 102; xe-0/1/3: FPC 104

  • Satellite software version: 3.1R1

FPC 104 (EX4300-48T, system ID 00:00:5E:00:53:A4)

  • Uplink ports: none

  • Clustering ports: xe-0/1/2: FPC 103; xe-0/1/3: FPC 105

  • Satellite software version: 3.1R1

FPC 105 (EX4300-48T, system ID 00:00:5E:00:53:A5)

  • Uplink ports: et-0/1/0: Aggregation Device 1; et-0/1/1: Aggregation Device 2

  • Clustering ports: xe-0/1/2: FPC 104; xe-0/1/3: FPC 106

  • Satellite software version: 3.1R1

FPC 106 (EX4300-48T, system ID 00:00:5E:00:53:A6)

  • Uplink ports: none

  • Clustering ports: xe-0/1/2: FPC 105; xe-0/1/3: FPC 101

  • Satellite software version: 3.1R1

Satellite Device Cluster: building2

FPC 111 (EX4300-48P, system ID 00:00:5E:00:53:B1)

  • Uplink ports: none

  • Clustering ports: xe-0/1/2: FPC 113; xe-0/1/3: FPC 112

  • Satellite software version: 3.1R1

FPC 112 (EX4300-48P, system ID 00:00:5E:00:53:B2)

  • Uplink ports: et-0/1/0: Aggregation Device 1; et-0/1/1: Aggregation Device 2

  • Clustering ports: xe-0/1/2: FPC 111; xe-0/1/3: FPC 113

  • Satellite software version: 3.1R1

FPC 113 (EX4300-48P, system ID 00:00:5E:00:53:B3)

  • Uplink ports: none

  • Clustering ports: xe-0/1/2: FPC 112; xe-0/1/3: FPC 111

  • Satellite software version: 3.1R1

Branch Office 1: Standalone Satellite Device

FPC 121 (EX4300-48P, system ID 00:00:5E:00:53:C1)

  • Uplink ports: xe-0/1/0: Aggregation Device 1; xe-0/1/1: Aggregation Device 2

  • Clustering ports: not applicable

  • Satellite software version: 3.1R1

The satellite device clusters are shown in Figure 4.

Figure 4: Building 1 and 2 Satellite Device Clusters

The Enterprise campus topology also includes one standalone switch that is not participating in the Junos Fusion Enterprise: an EX3300 switch running Junos OS Release 12.3R2 that provides access ports in branch office 2. The EX3300 switch is connected to the EX9208 switch acting as aggregation device 2 using an aggregated Ethernet bundle.

Table 3: Standalone Switches

branch-office2-ex3300

  • Switch model: EX3300

  • Interface connections: aggregated Ethernet bundle ae0 to aggregation device 2, with member links xe-0/1/0 and xe-0/1/1

  • Junos OS release: 12.3R2

Configuring the Junos Fusion Enterprise

This section provides the steps for configuring the Junos Fusion Enterprise on both aggregation devices.


Assigning FPC ID Numbers to Satellite Devices, Configuring the Satellite Device Clusters, and Enabling Automatic Satellite Conversion

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them in a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level of the specified aggregation device, and then enter commit from configuration mode.

Aggregation device 1:

Aggregation device 2:
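As a convenience, the set commands from the step-by-step procedure that follows can be collected into a single block entered at the [edit] hierarchy level; a sketch is shown below. The same statements are entered on both aggregation devices.

```
set chassis satellite-management cluster building1 cluster-id 1
set chassis satellite-management cluster building2 cluster-id 2
set chassis satellite-management cluster building1 fpc 101 member-id 1 system-id 00:00:5E:00:53:A1
set chassis satellite-management cluster building1 fpc 102 member-id 2 system-id 00:00:5E:00:53:A2
set chassis satellite-management cluster building1 fpc 103 member-id 3 system-id 00:00:5E:00:53:A3
set chassis satellite-management cluster building1 fpc 104 member-id 4 system-id 00:00:5E:00:53:A4
set chassis satellite-management cluster building1 fpc 105 member-id 5 system-id 00:00:5E:00:53:A5
set chassis satellite-management cluster building1 fpc 106 member-id 6 system-id 00:00:5E:00:53:A6
set chassis satellite-management cluster building2 fpc 111 member-id 1 system-id 00:00:5E:00:53:B1
set chassis satellite-management cluster building2 fpc 112 member-id 2 system-id 00:00:5E:00:53:B2
set chassis satellite-management cluster building2 fpc 113 member-id 3 system-id 00:00:5E:00:53:B3
set chassis satellite-management fpc 121 system-id 00:00:5E:00:53:C1
set chassis satellite-management cluster building1 cascade-ports [ et-0/2/0 et-0/2/1 ]
set chassis satellite-management cluster building2 cascade-ports et-0/2/2
set chassis satellite-management fpc 121 cascade-ports xe-0/0/0
set chassis satellite-management auto-satellite-conversion satellite 101-121
```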

Step-by-Step Procedure

To assign FPC ID numbers to satellite devices, configure the satellite device clusters, and enable automatic satellite conversion:

  1. Create the satellite device clusters, and associate each satellite device cluster with a cluster ID:

    Aggregation device 1:

    [edit chassis satellite-management]

    user@ad1-ex9208# set cluster building1 cluster-id 1

    user@ad1-ex9208# set cluster building2 cluster-id 2

    Aggregation device 2:

    [edit chassis satellite-management]

    user@ad2-ex9208# set cluster building1 cluster-id 1

    user@ad2-ex9208# set cluster building2 cluster-id 2
  2. Associate each satellite device in the building1 cluster with a cluster member ID number and an FPC ID:

    Aggregation device 1:

    [edit chassis satellite-management]

    user@ad1-ex9208# set cluster building1 fpc 101 member-id 1 system-id 00:00:5E:00:53:A1

    user@ad1-ex9208# set cluster building1 fpc 102 member-id 2 system-id 00:00:5E:00:53:A2

    user@ad1-ex9208# set cluster building1 fpc 103 member-id 3 system-id 00:00:5E:00:53:A3

    user@ad1-ex9208# set cluster building1 fpc 104 member-id 4 system-id 00:00:5E:00:53:A4

    user@ad1-ex9208# set cluster building1 fpc 105 member-id 5 system-id 00:00:5E:00:53:A5

    user@ad1-ex9208# set cluster building1 fpc 106 member-id 6 system-id 00:00:5E:00:53:A6

    Aggregation device 2:

    [edit chassis satellite-management]

    user@ad2-ex9208# set cluster building1 fpc 101 member-id 1 system-id 00:00:5E:00:53:A1

    user@ad2-ex9208# set cluster building1 fpc 102 member-id 2 system-id 00:00:5E:00:53:A2

    user@ad2-ex9208# set cluster building1 fpc 103 member-id 3 system-id 00:00:5E:00:53:A3

    user@ad2-ex9208# set cluster building1 fpc 104 member-id 4 system-id 00:00:5E:00:53:A4

    user@ad2-ex9208# set cluster building1 fpc 105 member-id 5 system-id 00:00:5E:00:53:A5

    user@ad2-ex9208# set cluster building1 fpc 106 member-id 6 system-id 00:00:5E:00:53:A6
  3. Associate each satellite device in the building2 cluster with a cluster member ID number and an FPC ID:

    Aggregation device 1:

    [edit chassis satellite-management]

    user@ad1-ex9208# set cluster building2 fpc 111 member-id 1 system-id 00:00:5E:00:53:B1

    user@ad1-ex9208# set cluster building2 fpc 112 member-id 2 system-id 00:00:5E:00:53:B2

    user@ad1-ex9208# set cluster building2 fpc 113 member-id 3 system-id 00:00:5E:00:53:B3

    Aggregation device 2:

    [edit chassis satellite-management]

    user@ad2-ex9208# set cluster building2 fpc 111 member-id 1 system-id 00:00:5E:00:53:B1

    user@ad2-ex9208# set cluster building2 fpc 112 member-id 2 system-id 00:00:5E:00:53:B2

    user@ad2-ex9208# set cluster building2 fpc 113 member-id 3 system-id 00:00:5E:00:53:B3
  4. Create an FPC ID for the standalone satellite switch in branch office 1:

    Aggregation device 1:

    [edit chassis satellite-management]

    user@ad1-ex9208# set fpc 121 system-id 00:00:5E:00:53:C1

    Aggregation device 2:

    [edit chassis satellite-management]

    user@ad2-ex9208# set fpc 121 system-id 00:00:5E:00:53:C1
  5. Associate each satellite device cluster or standalone satellite device with a cascade port or ports.

    Aggregation device 1:

    [edit chassis satellite-management]

    user@ad1-ex9208# set cluster building1 cascade-ports [et-0/2/0 et-0/2/1]

    user@ad1-ex9208# set cluster building2 cascade-ports et-0/2/2

    user@ad1-ex9208# set fpc 121 cascade-ports xe-0/0/0

    Aggregation device 2:

    [edit chassis satellite-management]

    user@ad2-ex9208# set cluster building1 cascade-ports [et-0/2/0 et-0/2/1]

    user@ad2-ex9208# set cluster building2 cascade-ports et-0/2/2

    user@ad2-ex9208# set fpc 121 cascade-ports xe-0/0/0
    Note

    This step associates standalone satellite devices or satellite device clusters with cascade ports. The procedure for configuring interfaces into cascade ports is provided later in this network configuration example.

  6. Enable automatic satellite conversion for all configured satellite devices:

    Aggregation device 1:

    [edit chassis satellite-management]

    user@ad1-ex9208# set auto-satellite-conversion satellite 101-121

    Aggregation device 2:

    [edit chassis satellite-management]

    user@ad2-ex9208# set auto-satellite-conversion satellite 101-121

Results

From configuration mode, confirm your configuration by entering the show chassis satellite-management command individually on each aggregation device.

Output for aggregation device 1 only is provided below. Enter the show chassis satellite-management command on aggregation device 2 to confirm its configuration; the output should match the output shown for aggregation device 1.

Creating the Cascade Ports

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them in a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Aggregation device 1:

Aggregation device 2:
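The interface statements from the step-by-step procedure can be sketched as a single block entered at the [edit] hierarchy level on each aggregation device; the cascade-port flag and the description are set as separate statements.

```
set interfaces et-0/2/0 cascade-port
set interfaces et-0/2/0 description cascade-to-building1-fpc102
set interfaces et-0/2/1 cascade-port
set interfaces et-0/2/1 description cascade-to-building1-fpc105
set interfaces et-0/2/2 cascade-port
set interfaces et-0/2/2 description cascade-to-building2-fpc112
set interfaces xe-0/0/0 cascade-port
set interfaces xe-0/0/0 description cascade-to-branch-office1-fpc121
```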

Step-by-Step Procedure

To create the cascade ports:

  1. Configure the cascade ports.

    Aggregation device 1:

    [edit interfaces]

    user@ad1-ex9208# set et-0/2/0 cascade-port
    user@ad1-ex9208# set et-0/2/0 description cascade-to-building1-fpc102
    user@ad1-ex9208# set et-0/2/1 cascade-port
    user@ad1-ex9208# set et-0/2/1 description cascade-to-building1-fpc105
    user@ad1-ex9208# set et-0/2/2 cascade-port
    user@ad1-ex9208# set et-0/2/2 description cascade-to-building2-fpc112
    user@ad1-ex9208# set xe-0/0/0 cascade-port
    user@ad1-ex9208# set xe-0/0/0 description cascade-to-branch-office1-fpc121

    Aggregation device 2:

    [edit interfaces]

    user@ad2-ex9208# set et-0/2/0 cascade-port
    user@ad2-ex9208# set et-0/2/0 description cascade-to-building1-fpc102
    user@ad2-ex9208# set et-0/2/1 cascade-port
    user@ad2-ex9208# set et-0/2/1 description cascade-to-building1-fpc105
    user@ad2-ex9208# set et-0/2/2 cascade-port
    user@ad2-ex9208# set et-0/2/2 description cascade-to-building2-fpc112
    user@ad2-ex9208# set xe-0/0/0 cascade-port
    user@ad2-ex9208# set xe-0/0/0 description cascade-to-branch-office1-fpc121

Results

From configuration mode, confirm your configuration by entering the show interfaces command individually on each aggregation device.

Output for aggregation device 1 only is provided below. The show interfaces output on aggregation device 2 should match this output.

Managing the Satellite Software Upgrade Groups

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them in a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level:

Aggregation device 1:

Aggregation device 2:

To complete this quick configuration, copy the following commands, paste them in a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the operational mode prompt (>):

Aggregation device 1:

Aggregation device 2:
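The configuration-mode portion of this section reduces to a single statement, entered at the [edit] hierarchy level on both aggregation devices:

```
set chassis satellite-management upgrade-groups standalone-satellite-devices satellite 121
```

After the configuration is committed to both Routing Engines, the request system software add commands shown in step 3 of the procedure are entered at the operational mode prompt (>) on each aggregation device.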

Step-by-Step Procedure

Satellite software upgrade groups that inherit the name of the satellite device cluster are automatically created for satellite device clusters. Satellite software upgrade groups must be created for standalone satellite devices.

This procedure associates satellite software with the automatically created software upgrade groups for the satellite device clusters, and creates a new satellite software upgrade group for all standalone satellite devices in the topology.

Before you begin:

  • Copy the satellite software 3.1R1 image onto the EX9208 switches. You can navigate to the satellite software starting from the Junos Fusion Hardware and Software Compatibility Matrices.

    File copying options are beyond the scope of this network configuration example. These instructions assume that the satellite software image has been copied to the /var/tmp directory on each aggregation device.

  1. Create a satellite software upgrade group named standalone-satellite-devices and associate FPC ID 121—the only standalone satellite device in the topology—with the satellite software upgrade group:

    Aggregation device 1:

    [edit chassis satellite-management]

    user@ad1-ex9208# set upgrade-groups standalone-satellite-devices satellite 121

    Aggregation device 2:

    [edit chassis satellite-management]

    user@ad2-ex9208# set upgrade-groups standalone-satellite-devices satellite 121

  2. Commit the configuration to both Routing Engines:

    Aggregation device 1:

    [edit]

    user@ad1-ex9208# commit and-quit synchronize

    Aggregation device 2:

    [edit]

    user@ad2-ex9208# commit and-quit synchronize
  3. Start the upgrade by associating all satellite software upgrade groups with the satellite software version 3.1R1 image:

    Note

    These instructions assume the satellite software has already been downloaded to the /var/tmp directory on each EX9208 switch.

    Aggregation device 1:

    user@ad1-ex9208> request system software add /var/tmp/satellite-3.1R1.4-signed.tgz upgrade-group building1
    user@ad1-ex9208> request system software add /var/tmp/satellite-3.1R1.4-signed.tgz upgrade-group building2
    user@ad1-ex9208> request system software add /var/tmp/satellite-3.1R1.4-signed.tgz upgrade-group standalone-satellite-devices

    Aggregation device 2:

    user@ad2-ex9208> request system software add /var/tmp/satellite-3.1R1.4-signed.tgz upgrade-group building1
    user@ad2-ex9208> request system software add /var/tmp/satellite-3.1R1.4-signed.tgz upgrade-group building2
    user@ad2-ex9208> request system software add /var/tmp/satellite-3.1R1.4-signed.tgz upgrade-group standalone-satellite-devices

Results

From configuration mode, confirm the user-configured satellite software upgrade group is configured by entering the show chassis satellite-management upgrade-groups command individually on each aggregation device.

Verify that the satellite software installation was successful for each software upgrade group by entering the show chassis satellite software command to confirm the running satellite software versions:

Note

The show chassis satellite software command generates output only after the satellite software upgrades are complete. If the command generates no output, wait a few minutes and enter it again.

Configuring Dual Aggregation Device Support

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them in a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Aggregation device 1:

Aggregation device 2:
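Collecting the statements from the step-by-step procedure that follows, the configuration for aggregation device 1 can be sketched as the following block at the [edit] hierarchy level; aggregation device 2 uses the same statements with chassis-id 2.

```
set chassis satellite-management redundancy-groups chassis-id 1
set chassis satellite-management redundancy-groups enterprise-campus-network redundancy-group-id 1
set chassis satellite-management redundancy-groups enterprise-campus-network cluster building1
set chassis satellite-management redundancy-groups enterprise-campus-network cluster building2
set chassis satellite-management redundancy-groups enterprise-campus-network satellite 121
set interfaces ae100 unit 0 family ethernet-switching interface-mode trunk
set interfaces ae100 unit 0 family ethernet-switching vlan members v10
```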

Step-by-Step Procedure

To configure dual aggregation device support:

  1. (Required only if single-home mode was previously configured) Delete the single-home configuration on each EX9208 switch to ensure that single-home mode is disabled:

    Aggregation device 1:

    [edit chassis satellite-management]

    user@ad1-ex9208# delete single-home

    Aggregation device 2:

    [edit chassis satellite-management]

    user@ad2-ex9208# delete single-home
  2. Define the chassis ID number of each aggregation device:

    Aggregation device 1:

    [edit chassis satellite-management redundancy-groups]

    user@ad1-ex9208# set chassis-id 1

    Aggregation device 2:

    [edit chassis satellite-management redundancy-groups]

    user@ad2-ex9208# set chassis-id 2

  3. Create the satellite management redundancy group:

    Aggregation device 1:

    [edit chassis satellite-management redundancy-groups]

    user@ad1-ex9208# set enterprise-campus-network redundancy-group-id 1

    Aggregation device 2:

    [edit chassis satellite-management redundancy-groups]

    user@ad2-ex9208# set enterprise-campus-network redundancy-group-id 1

  4. Define the satellite devices and satellite device clusters that are part of the redundancy group.

    Aggregation device 1:

    [edit chassis satellite-management redundancy-groups]

    user@ad1-ex9208# set enterprise-campus-network cluster building1

    user@ad1-ex9208# set enterprise-campus-network cluster building2

    user@ad1-ex9208# set enterprise-campus-network satellite 121

    Aggregation device 2:

    [edit chassis satellite-management redundancy-groups]

    user@ad2-ex9208# set enterprise-campus-network cluster building1

    user@ad2-ex9208# set enterprise-campus-network cluster building2

    user@ad2-ex9208# set enterprise-campus-network satellite 121

  5. Configure the interface on each side of the ICL as a trunk interface, and make each interface a member of at least one VLAN.

    Aggregation device 1:

    [edit]

    user@ad1-ex9208# set interfaces ae100 unit 0 family ethernet-switching interface-mode trunk

    user@ad1-ex9208# set interfaces ae100 unit 0 family ethernet-switching vlan members v10

    Aggregation device 2:

    [edit]

    user@ad2-ex9208# set interfaces ae100 unit 0 family ethernet-switching interface-mode trunk

    user@ad2-ex9208# set interfaces ae100 unit 0 family ethernet-switching vlan members v10

Results

From configuration mode, confirm your configuration by entering the show chassis satellite-management redundancy-groups command on each aggregation device.

Aggregation device 1:

Aggregation device 2:

Confirm the trunk mode interface configuration by entering the show interfaces ae100 command on each aggregation device.

Aggregation device 1:

Aggregation device 2:

Configuring the Interchassis Link (ICL)

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them in a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Aggregation device 1:

Aggregation device 2:

Step-by-Step Procedure

An interchassis link (ICL), also known as the interchassis link-protection link (ICL-PL), is used to forward data traffic across MC-LAG peers. In a Junos Fusion Enterprise, the MC-LAG peers are the aggregation devices.

An ICL provides redundancy when a link failure occurs. We recommend that the ICL be an aggregated Ethernet interface in most environments deploying a Junos Fusion Enterprise, but the ICL can be a single physical Ethernet interface in some smaller-scale setups.

This section illustrates how to configure an aggregated Ethernet interface composed of two 40-Gbps interfaces as the ICL. The configuration uses the same interfaces on both aggregation devices and is therefore performed using configuration groups that are synchronized between the aggregation devices.

To configure the ICL:

  1. Create the configuration groups for the ICL:

    Aggregation device 1:

    Aggregation device 2:

  2. Configure the two 40-Gbps interfaces that will be used as the ICL into an aggregated Ethernet interface:
    • Configure the number of aggregated Ethernet interfaces on the device:

      Note

      The device count is a global parameter that applies to all aggregated Ethernet interfaces on the EX9200 switch acting as the aggregation device, including aggregated Ethernet interfaces that are not part of the Junos Fusion Enterprise topology. Configure the aggregated Ethernet device count that is appropriate for your EX9200 switch.

    • Create and name the aggregated Ethernet interface, and add an optional description for the interface:

    • Add the member links to the aggregated Ethernet interface.

      Because both aggregation devices use the same member interfaces to form the same aggregated Ethernet interface, this configuration can be performed within the configuration group.

    • Enable LACP:

  3. Configure the aggregated ethernet interface as the ICL.

    The peer chassis ID variable is a unique value on each aggregation device, so this step is performed outside the configuration group on each aggregation device.

    This step assumes that the redundancy groups used to create dual aggregation device support have already been configured. See Configuring Dual Aggregation Device Support.

    Aggregation device 1:

    Aggregation device 2:

  4. Commit the configuration on each aggregation device, starting with aggregation device 1.

    Aggregation device 1:

    Aggregation device 2:
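The command listings for this section are not reproduced above. As an illustrative sketch only—the group name fusion-icl, the device count, and the member-link syntax are assumptions to adapt to your platform—an ICL configuration along the lines described might look like this on aggregation device 1:

```
set groups fusion-icl chassis aggregated-devices ethernet device-count 101
set groups fusion-icl interfaces ae100 description icl-to-peer-aggregation-device
set groups fusion-icl interfaces ae100 aggregated-ether-options lacp active
set groups fusion-icl interfaces et-0/3/1 ether-options 802.3ad ae100
set groups fusion-icl interfaces et-0/3/2 ether-options 802.3ad ae100
set apply-groups fusion-icl
set chassis satellite-management redundancy-groups enterprise-campus-network peer-chassis-id 2 inter-chassis-link ae100
```

On aggregation device 2 the same configuration group applies, and the final statement uses peer-chassis-id 1. On some platforms the member-link statement is gigether-options 802.3ad rather than ether-options 802.3ad.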

Configuring the Inter-Chassis Control Protocol (ICCP) Link

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them in a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Aggregation device 1:

Aggregation device 2:

Step-by-Step Procedure

Inter-Chassis Control Protocol (ICCP) is used in MC-LAG topologies to exchange control information between the devices in the topology. A Junos Fusion Enterprise with two aggregation devices is an MC-LAG topology, and is therefore always running ICCP. See Multichassis Link Aggregation Features, Terms, and Best Practices for additional information on ICCP.

A dedicated ICCP link is highly recommended in a Junos Fusion Enterprise deployment, but is not required. ICCP traffic is transmitted across the ICL when a dedicated ICCP link is not configured.

The MC-LAG configuration used in this network configuration example includes a dedicated ICCP link between the aggregation devices. The instructions for configuring the ICCP link are provided in this procedure.

An ICCP link can be a single link or an aggregated Ethernet interface. In most Junos Fusion Enterprise deployments, we recommend using a 40-Gbps link—as in this procedure—or an aggregated Ethernet interface as the ICCP link.

To manually configure a dedicated ICCP link:

  1. (Optional. Recommended) Create a description for the ICCP link interface.

    Aggregation device 1:

    Aggregation device 2:

  2. Configure the IP address of the interface at each end of the ICCP link.

    Aggregation device 1:

    Aggregation device 2:
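As a sketch only—the description string and IP addresses are placeholders—the ICCP link interface (et-0/3/0, per Table 1) might be configured as follows on aggregation device 1:

```
set interfaces et-0/3/0 description iccp-link-to-ad2
set interfaces et-0/3/0 unit 0 family inet address 10.0.0.1/30
```

On aggregation device 2, use a matching address on the same subnet, for example 10.0.0.2/30.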

Configuring the Inter-Chassis Control Protocol (ICCP)

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them in a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Aggregation device 1:

Aggregation device 2:

Step-by-Step Procedure

Inter-Chassis Control Protocol (ICCP) is used in MC-LAG topologies to exchange control information between the devices in the topology. A Junos Fusion Enterprise with two aggregation devices is an MC-LAG topology, and is therefore always running ICCP. See Multichassis Link Aggregation Features, Terms, and Best Practices for additional information on ICCP.

Junos Fusion Enterprise supports automatic ICCP provisioning, which automatically configures ICCP in a dual aggregation device setup without any user action. Automatic ICCP provisioning is enabled by default and is often the preferred method of enabling ICCP for a Junos Fusion in greenfield deployments that are not being integrated into an existing network. If you are installing your Junos Fusion Enterprise in an environment that doesn’t have to integrate into an existing campus network, you can usually ignore the instructions in this section. Automatic ICCP provisioning is described in more detail in Understanding Automatic ICCP Provisioning and Automatic VLAN Provisioning of an Interchassis Link.

Many Junos Fusion Enterprise installations are brownfield deployments in which the Junos Fusion Enterprise must be integrated into an existing campus network. Brownfield deployments often need to maintain existing ICCP settings, in particular when a Junos Fusion Enterprise is replacing an MC-LAG topology or is supporting a campus network that includes other MC-LAG topologies.

In this network configuration example, some ICCP parameters—the session establishment hold time, backup peer IP, and some BFD intervals—are modified to ensure ICCP can function properly in a Junos Fusion Enterprise that is being installed into an existing campus network.

To manually configure ICCP in this example:

  1. Configure the local IP address on each end of the ICCP link.

    Aggregation device 1:

    Aggregation device 2:

  2. Configure the backup peer IP address on each aggregation device. The backup peer IP address is the management IP address of the other aggregation device. These instructions assume the management IP addresses of each aggregation device are reachable from one another over a Layer 3 management network.

    Aggregation device 1:

    Aggregation device 2:

  3. Define the redundancy group ID number for the redundancy group. The redundancy group is both aggregation devices.

    Aggregation device 1:

    Aggregation device 2:

  4. Configure the session establishment hold timer. The session establishment hold timer defines the maximum amount of time, in seconds, that can be taken for an Inter-Chassis Control Protocol (ICCP) connection to establish between peers.

    Aggregation device 1:

    Aggregation device 2:

  5. Configure the Bidirectional Forwarding Detection (BFD) parameters. The BFD minimum interval is the interval at which the peer transmits liveness detection requests and the minimum interval at which the peer expects to receive a reply from a peer. The multiplier is the number of liveness detection requests not received by the peer before Bidirectional Forwarding Detection (BFD) declares the peer as down.

    Aggregation device 1:

    Aggregation device 2:
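The statements below sketch the ICCP configuration described in steps 1 through 5 for aggregation device 1. The addresses and timer values are placeholders, not values from this example: 10.0.0.1 and 10.0.0.2 assume the ICCP link addressing shown earlier, 192.0.2.2 stands in for the peer's management address, and the hold time and BFD values are illustrative.

```
set protocols iccp local-ip-addr 10.0.0.1
set protocols iccp peer 10.0.0.2 redundancy-group-id-list 1
set protocols iccp peer 10.0.0.2 backup-liveness-detection backup-peer-ip 192.0.2.2
set protocols iccp peer 10.0.0.2 session-establishment-hold-time 50
set protocols iccp peer 10.0.0.2 liveness-detection minimum-interval 1000
set protocols iccp peer 10.0.0.2 liveness-detection multiplier 3
```

On aggregation device 2, swap the local and peer addresses and use aggregation device 1's management address as the backup peer IP.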

Preparing the Satellite Devices

CLI Quick Configuration

To complete this quick configuration, copy the following commands, paste them in a text file, remove any line breaks, change any details necessary to match your network configuration, and copy and paste the commands into the CLI at the operational mode prompt (>) of each individual satellite device:

After the switch reboots:

Step-by-Step Procedure

To prepare the EX4300-48T and EX4300-48P switches to become satellite devices, perform the following steps:

Note

These instructions assume each EX4300-48T and EX4300-48P switch is already running Junos OS Release 14.1X53-D35. See Installing Software on an EX Series Switch with a Single Routing Engine (CLI Procedure) for instructions on upgrading Junos OS software.

  1. If you plan on using the satellite device interfaces to provide PoE, check the satellite device’s PoE firmware version:
    • Enter the show chassis firmware detail command to learn the PoE firmware version running on the device.

    • The satellite device must be running at least the following minimum PoE firmware versions to support PoE in a Junos Fusion Enterprise.

      Table 4: Minimum PoE Firmware Versions

      Satellite Device Platform    Minimum PoE Firmware Version

      EX2300                       1.6.1.1.9

      EX3400                       1.6.1.1.9

      EX4300                       2.6.3.92.1

      QFX5100                      No minimum version requirement

      See the Minimum Satellite Device Firmware Version Requirements table for additional information on firmware version requirements for devices in a Junos Fusion Enterprise.

    • If your device meets the minimum PoE firmware requirement, proceed to the next step.

      If a PoE firmware update is required, upgrade the PoE firmware. See Upgrading the PoE Controller Software.
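      On an EX4300 switch, for example, the PoE firmware upgrade is started with a command of the following form (the FPC slot number 0 is an assumed value for a standalone switch):

      user@sd101-con> request system firmware upgrade poe fpc-slot 0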

  2. Log in to each switch through its console port, and zeroize it.

    Note

    Perform this procedure from the console port connection. A management connection will be lost when the switch is rebooted to complete the zeroizing procedure.

    The following sample output shows how to perform this procedure on the satellite device using FPC ID 101. Repeat this procedure for each satellite device.

    user@sd101-con> request system zeroize
    Note

    The devices reboot to complete the zeroizing procedure.

  3. (EX3400 and EX4300 satellite devices only) After the switches reboot, convert the built-in 40-Gbps interfaces with QSFP+ transceivers from Virtual Chassis ports (VCPs) into network ports on each switch:

    The following sample output shows how to perform this procedure on the satellite device using FPC ID 101. Repeat this procedure for each satellite device.

    Note

    This step is required for EX3400 and EX4300 switch uplink ports only because uplink ports on these switches are VCPs by default.

    You can skip this step if you are converting other switches into satellite devices.

    user@sd101-con> request virtual-chassis vc-port delete pic-slot 1 port 0

    user@sd101-con> request virtual-chassis vc-port delete pic-slot 1 port 1

    user@sd101-con> request virtual-chassis vc-port delete pic-slot 1 port 2

    user@sd101-con> request virtual-chassis vc-port delete pic-slot 1 port 3
  4. Cable each switch into the Junos Fusion Enterprise, if you haven’t already done so.

    Because automatic satellite conversion is enabled and the satellite software upgrade groups have been configured, the satellite software installation process starts for each satellite device when it is cabled into the Junos Fusion Enterprise.

    Note

    If the satellite software installation does not begin, log in to the aggregation devices and ensure the configurations added in previous steps have been committed.

Connecting the EX9200 Switch to the EX3300 Access Switch That is Not Participating in the Junos Fusion Enterprise

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them in a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Aggregation device 2:

Branch office switch 2:

Step-by-Step Procedure

To connect the EX3300 access switch in branch office 2 that is not participating in the Junos Fusion Enterprise to the EX9208 switch acting as aggregation device 2:

  1. Configure the aggregated Ethernet interface on the EX9208 switch side of the connection:
    [edit interfaces]

    user@ad2-ex9208# set ae0 aggregated-ether-options minimum-links 1
    user@ad2-ex9208# set ae0 aggregated-ether-options link-speed 10g
  2. Specify the aggregated Ethernet bundle member interfaces on the EX9208 switch side of the connection:
    [edit interfaces]

    user@ad2-ex9208# set xe-0/0/1 ether-options 802.3ad ae0

    user@ad2-ex9208# set xe-0/0/2 ether-options 802.3ad ae0
  3. Place the bundle into the ethernet-switching family, set the interface mode to trunk, and configure it as a member of all VLANs:
    [edit interfaces]

    user@ad2-ex9208# set ae0 unit 0 family ethernet-switching interface-mode trunk vlan members all
  4. Enable LACP:
    [edit interfaces]

    user@ad2-ex9208# set ae0 aggregated-ether-options lacp active
  5. Specify the maximum number of aggregated Ethernet interfaces that can be created on the switch:
    [edit chassis]

    user@ad2-ex9208# set aggregated-devices ethernet device-count 10
  6. Perform the same procedure for the EX3300 switch side of the connection:
    [edit]

    user@branch-office2-ex3300# set interfaces ae0 aggregated-ether-options minimum-links 1

    user@branch-office2-ex3300# set interfaces ae0 aggregated-ether-options link-speed 10g

    user@branch-office2-ex3300# set interfaces xe-0/1/0 ether-options 802.3ad ae0

    user@branch-office2-ex3300# set interfaces xe-0/1/1 ether-options 802.3ad ae0

    user@branch-office2-ex3300# set interfaces ae0 unit 0 family ethernet-switching interface-mode trunk vlan members all

    user@branch-office2-ex3300# set interfaces ae0 aggregated-ether-options lacp active

    user@branch-office2-ex3300# set chassis aggregated-devices ethernet device-count 3

Results

From configuration mode, confirm your aggregated Ethernet interface settings by entering the show interfaces and show chassis aggregated-devices commands on aggregation device 2 and the EX3300 switch.

Aggregation device 2:

Branch office 2:

Configuring Features on the Junos Fusion Enterprise

This section provides the steps for configuring some commonly-used features on the Junos Fusion Enterprise.

It includes the following sections:

Configuring Commit Synchronization Between Aggregation Devices

Step-by-Step Procedure

A Junos Fusion Enterprise using dual aggregation devices often requires matching configuration of a feature on both aggregation devices. Configuration synchronization can be used to ensure that configuration done in a configuration group is applied on both aggregation devices when committed. Configuration synchronization simplifies administration of a Junos Fusion Enterprise by allowing users to enter commands once in a configuration group and apply the configuration group to both aggregation devices rather than repeating a configuration procedure manually on each aggregation device.

The available group configuration options are beyond the scope of this document; see Understanding MC-LAG Configuration Synchronization and Synchronizing and Committing MC-LAG Configurations for additional information on using group configurations in an MC-LAG topology, and Network Configuration Example: Configuring MC-LAG on EX9200 Switches in the Core for Campus Networks for a detailed example of an MC-LAG topology that uses group configurations.

This network configuration example provides one method of using groups to synchronize configuration between aggregation devices. See Configuring Aggregation Devices as Peers for Configuration Synchronization.

Many features in this document—including VLANs, 802.1X, and manual ICCP configuration—assume that commit synchronization is enabled and are mostly or completely configured in groups to ensure expedient and consistent configuration. See those configuration procedures for illustrations of configurations done using commit synchronization.

To enable commit synchronization:

  1. Ensure the aggregation devices are reachable from one another:

    Aggregation device 1:

    Aggregation device 2:

    If the devices cannot ping one another, try statically mapping the hostnames of each device’s management IP address and retry the ping.

    Aggregation device 1:

    Aggregation device 2:

    If the devices cannot ping one another after the hostnames are statically mapped, see Connecting and Configuring an EX9200 Switch (CLI Procedure) or the Installation and Upgrade Guide for EX9200 Switches.
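    As a sketch of this step (the hostname ad2-ex9208 and the management IP address 10.94.47.2 are assumed values), the reachability check and the static hostname mapping on aggregation device 1 might look like this:

    user@ad1-ex9208> ping ad2-ex9208 count 5

    [edit]
    user@ad1-ex9208# set system static-host-mapping ad2-ex9208 inet 10.94.47.2

    Repeat on aggregation device 2, mapping aggregation device 1's hostname to its management IP address.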

  2. Enable commit synchronization:

    Aggregation device 1:

    Aggregation device 2:
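    The commands for this step are not shown in this excerpt. Commit synchronization is enabled with the following statement, entered on both aggregation devices:

    [edit]
    user@ad1-ex9208# set system commit peers-synchronize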

  3. Configure each aggregation device so that the other aggregation device is identified as a commit peer. Enter the authentication credentials of each peer aggregation device to ensure group configurations on one aggregation device are committed to the other aggregation device.

    Warning

    The password "password" is used in this configuration step for illustrative purposes only. Use a more secure password in your device configuration.

    Note

    This step assumes a user with an authentication password has already been created on each EX9208 switch acting as an aggregation device. For instructions on configuring username and password combinations, see Connecting and Configuring an EX9200 Switch (CLI Procedure).

    Aggregation device 1:

    Aggregation device 2:
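    As a sketch of this step (the hostname ad2-ex9208 and the username netadmin are assumed values; the password "password" is for illustration only, per the warning above), the commit peer might be configured on aggregation device 1 as follows:

    [edit]
    user@ad1-ex9208# set system commit peers ad2-ex9208 user netadmin authentication password

    On aggregation device 2, enter the equivalent statement naming aggregation device 1 as the peer.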

  4. Enable the Network Configuration (NETCONF) protocol over SSH:

    Aggregation device 1:

    Aggregation device 2:
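    NETCONF over SSH is enabled with the following statement, entered on both aggregation devices:

    [edit]
    user@ad1-ex9208# set system services netconf ssh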

  5. Commit the configuration:

    Aggregation device 1:

    Aggregation device 2:

  6. (Optional) Create a configuration group for testing to ensure configuration synchronization is working:

    Aggregation Device 1:

    Aggregation Device 2:
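    As a sketch of this step (the group name test-sync and the hostnames are assumed values), a test configuration group might be created on both aggregation devices as follows:

    [edit]
    user@ad1-ex9208# set groups test-sync when peers [ ad1-ex9208 ad2-ex9208 ]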

  7. (Optional) Configure and commit a group on aggregation device 1, and confirm it is implemented on aggregation device 2:

    Note

    This step shows how to change one interface configuration using groups. Interface ranges cannot be specified within groups that are synchronized between commit peers in a Junos Fusion Enterprise, so a group cannot be used to configure multiple interfaces simultaneously in this way.

    Aggregation device 1:

    Aggregation device 2:

    Perform the same procedure to verify configuration synchronization from aggregation device 2 to aggregation device 1, if desired.

    Delete the test configuration group on each aggregation device.

    Aggregation device 1:

    Aggregation device 2:

    Note

    All subsequent procedures in this network configuration example assume that commit synchronization is enabled on both EX9208 switches acting as aggregation devices, and that the aggregation devices are configured as peers in each configuration group.

Configuring VLANs

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them in a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Aggregation device 1:

Step-by-Step Procedure

To configure VLANs on both aggregation devices on the Junos Fusion topology using configuration groups:

  1. Create and name the configuration group, and configure the group to apply to the peer aggregation devices:
    [edit]

    user@ad1-ex9208# set groups vlans-config when peers [ad1-ex9208 ad2-ex9208]

  2. Create a VLAN—VLAN ID 10—by assigning it a name and a VLAN ID:
    [edit]

    user@ad1-ex9208# set groups vlans-config vlans v10 vlan-id 10

  3. Configure three extended port interfaces—ge-101/0/0, ge-111/0/0, and ge-121/0/0—into vlan v10.
    [edit]

    user@ad1-ex9208# set groups vlans-config interfaces ge-101/0/0 unit 0 family ethernet-switching interface-mode access vlan members v10

    user@ad1-ex9208# set groups vlans-config interfaces ge-111/0/0 unit 0 family ethernet-switching interface-mode access vlan members v10

    user@ad1-ex9208# set groups vlans-config interfaces ge-121/0/0 unit 0 family ethernet-switching interface-mode access vlan members v10

  4. Apply the configuration group:
    [edit]

    user@ad1-ex9208# set apply-groups vlans-config
  5. Commit the configuration:
    [edit]

    user@ad1-ex9208# commit

    The configuration commits to both aggregation devices, because automatic peer synchronization is enabled.

Results

Confirm the VLANs are configured by entering the show groups vlans-config command in configuration mode, and confirm they are operational by entering the show vlans command in operational mode. Enter the commands on both aggregation devices to confirm that the group configuration applied the VLAN configuration to both devices.

Aggregation device 1:

Aggregation device 2:

Adding Layer 3 Support to a Junos Fusion Enterprise

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them in a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Aggregation device 1:

Aggregation device 2:

Step-by-Step Procedure

In most campus networking environments, endpoint devices must have a path to send and receive Layer 3 traffic.

In a Junos Fusion Enterprise, integrated routing and bridging (IRB) interfaces are configured on aggregation devices to move traffic between Layer 2 and Layer 3.

DHCP relay is configured in this procedure to move DHCP packets through the Junos Fusion Enterprise. A VRRP group that includes both aggregation devices is also established.

This section provides the configuration instructions for configuring an IRB interface that moves the Layer 2 traffic in VLAN 10 into and out of Layer 3.

To configure IRB interfaces on the aggregation devices to move traffic between Layer 2 and Layer 3:

  1. Enable the IRB interface to respond to ARP requests between aggregation devices. The IRB interface number and IP address are configured as part of this process.

    In this procedure, the IRB interface on aggregation device 1 is assigned 192.168.42.2/24 and the IRB interface on aggregation device 2 is assigned 192.168.42.3/24. MAC addresses are assigned to each IRB interface. ARP requests are sent between the aggregation devices to share the MAC address to IP address bindings of the IRB interfaces on each aggregation device.

    Aggregation device 1:

    Aggregation device 2:
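    As a sketch of this step (the IRB unit number 10 and the MAC addresses, which are drawn from the 00:00:5e:00:53:xx documentation range, are assumed values; the IP addresses come from this example), aggregation device 1 might be configured as follows:

    [edit]
    user@ad1-ex9208# set interfaces irb unit 10 mac 00:00:5e:00:53:01
    user@ad1-ex9208# set interfaces irb unit 10 family inet address 192.168.42.2/24 arp 192.168.42.3 mac 00:00:5e:00:53:02

    On aggregation device 2, assign 192.168.42.3/24 and the mirror-image static ARP entry for 192.168.42.2.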

  2. Configure Virtual Router Redundancy Protocol (VRRP) to group both aggregation devices into one virtual device.

    The aggregation devices in this configuration are logically grouped into virtual address 192.168.42.1 using VRRP. Aggregation device 1 is the primary device in the VRRP group because it has the higher priority setting.

    Aggregation device 1:

    Aggregation device 2:
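    As a sketch of this step (the VRRP group number 1 and the priority value 200 are assumed values; the virtual address 192.168.42.1 comes from this example), aggregation device 1 might be configured as follows:

    [edit]
    user@ad1-ex9208# set interfaces irb unit 10 family inet address 192.168.42.2/24 vrrp-group 1 virtual-address 192.168.42.1
    user@ad1-ex9208# set interfaces irb unit 10 family inet address 192.168.42.2/24 vrrp-group 1 priority 200
    user@ad1-ex9208# set interfaces irb unit 10 family inet address 192.168.42.2/24 vrrp-group 1 accept-data

    Configure aggregation device 2 with the same virtual address but a lower priority (for example, 100) so that aggregation device 1 is elected primary.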

  3. Bind the IRB interface to VLAN 10:

    Note

    This configuration assumes that VLAN 10 is already configured. See the Configuring VLANs section of this guide for information on configuring VLANs.

    Aggregation device 1:

    Aggregation device 2:
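    As a sketch of this step (the IRB unit number 10 is an assumed value; the VLAN name v10 comes from the Configuring VLANs section), the IRB interface might be bound to the VLAN on both aggregation devices as follows:

    [edit]
    user@ad1-ex9208# set vlans v10 l3-interface irb.10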

  4. Enable DHCP Relay on the IRB interfaces.

    The DHCP Relay configurations must match on both aggregation devices to ensure consistent handling of DHCP packets throughout the Junos Fusion Enterprise. The IRB interfaces are grouped into the same DHCP relay group.

    Aggregation device 1:

    Aggregation device 2:
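    As a sketch of this step (the server-group and group names and the DHCP server address 192.0.2.10 are assumed values), DHCP relay might be configured identically on both aggregation devices as follows:

    [edit]
    user@ad1-ex9208# set forwarding-options dhcp-relay server-group dhcp-servers 192.0.2.10
    user@ad1-ex9208# set forwarding-options dhcp-relay active-server-group dhcp-servers
    user@ad1-ex9208# set forwarding-options dhcp-relay group relay-clients interface irb.10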

Enabling 802.1X

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them in a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Aggregation device 1:

Aggregation device 2:

Step-by-Step Procedure

802.1X is an IEEE standard for port-based network access control (PNAC) that provides an authentication mechanism for devices seeking to access a LAN. The 802.1X authentication feature is based on the IEEE 802.1X standard, Port-Based Network Access Control.

A simple 802.1X configuration is enabled in this configuration to illustrate how 802.1X can be enabled on one extended port in a Junos Fusion Enterprise.

The range of 802.1X configuration options are beyond the scope of this document. For additional information on 802.1X, see 802.1X for Switches Overview and the Access Control Feature Guide for EX9200 Switches.

The following requirements should be understood when configuring 802.1X for a Junos Fusion Enterprise:

  • The authentication server cannot connect to the Junos Fusion Enterprise through an extended port.

  • 802.1X configuration must match on both aggregation devices in a Junos Fusion Enterprise. In this example, 802.1X is therefore configured using configuration groups that are applied to both aggregation devices using commit synchronization.

  • 802.1X control is handled by either aggregation device on a per-session basis. Either aggregation device can act as the primary device for 802.1X control for any 802.1X session. If traffic flow through one aggregation device is disrupted during an 802.1X session, the 802.1X session may be interrupted and control could be transferred to the other aggregation device.

  • A captive portal cannot be configured on an extended port.

To enable 802.1X:

  1. Create the configuration groups for 802.1X configuration:

    Aggregation device 1:

    Aggregation device 2:

  2. Specify the name of the access profile to use for 802.1X authentication. The access profile contains the RADIUS server IP address and other information used for authentication.

    Note

    This configuration procedure does not cover access profile configuration. For information on configuring an access profile, see Connecting a RADIUS Server for 802.1X to an EX Series Switch.

    Aggregation device 1:

  3. Disable MAC table binding. By default, an 802.1X session is removed from the authentication session table when a MAC address is aged out of the Ethernet switching table. When MAC table binding is disabled, the 802.1X session remains active in the authentication table after a MAC address is aged out of the Ethernet switching table.

    Aggregation device 1:

  4. Enable the 802.1X supplicant mode for the interface. In this example, multiple supplicant mode—which authenticates all 802.1X clients individually while also allowing multiple simultaneous 802.1X sessions—is enabled on interface ge-101/0/0.

    Aggregation device 1:

  5. Configure the number of times the switch attempts to authenticate the port after an initial failure:

    Aggregation device 1:

  6. Configure the transmit period. The transmit period is the amount of time, in seconds, that the port waits before retransmitting the initial EAPOL PDUs to the supplicant:

    Aggregation device 1:

  7. Configure the server timeout interval. The server timeout interval is the amount of time, in seconds, that the interface will wait for a reply from the authentication server. If a reply is not received within the server timeout interval, the server fail action is invoked.

    The server timeout interval is set to 5 seconds in this step.

    Aggregation device 1:

  8. Specify the guest VLAN. The guest VLAN is used in this example to provide limited network access for non-authenticated 802.1X supplicants.

    This example assumes that a guest VLAN named DEFAULT_USERS has already been configured. For additional information on guest VLANs, see Understanding Guest VLANs for 802.1X on Switches.

    Aggregation device 1:

  9. Configure the server reject VLAN. The server reject VLAN is used to provide limited network access for supplicants that fail 802.1X authentication.

    This example assumes that a server reject VLAN named FAIL_AUT has already been configured. For additional information on server reject VLANs, see Understanding Server Fail Fallback and Authentication on Switches.

    Aggregation device 1:
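    The preceding steps can be sketched as a single configuration group. The group name dot1x-config, the access profile name radius-profile, the retry count of 3, and the transmit period of 30 seconds are assumed values; the interface, supplicant mode, server timeout, and VLAN names come from this example:

    [edit]
    user@ad1-ex9208# set groups dot1x-config when peers [ ad1-ex9208 ad2-ex9208 ]
    user@ad1-ex9208# set groups dot1x-config protocols dot1x authenticator authentication-profile-name radius-profile
    user@ad1-ex9208# set groups dot1x-config protocols dot1x authenticator no-mac-table-binding
    user@ad1-ex9208# set groups dot1x-config protocols dot1x authenticator interface ge-101/0/0 supplicant multiple
    user@ad1-ex9208# set groups dot1x-config protocols dot1x authenticator interface ge-101/0/0 retries 3
    user@ad1-ex9208# set groups dot1x-config protocols dot1x authenticator interface ge-101/0/0 transmit-period 30
    user@ad1-ex9208# set groups dot1x-config protocols dot1x authenticator interface ge-101/0/0 server-timeout 5
    user@ad1-ex9208# set groups dot1x-config protocols dot1x authenticator interface ge-101/0/0 guest-vlan DEFAULT_USERS
    user@ad1-ex9208# set groups dot1x-config protocols dot1x authenticator interface ge-101/0/0 server-reject-vlan FAIL_AUT
    user@ad1-ex9208# set apply-groups dot1x-config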

  10. Commit the configuration. Commit synchronization commits the configuration on both aggregation devices.

Enabling Loop Detection and Prevention

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them in a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Aggregation device 1:

Aggregation device 2:

Step-by-Step Procedure

There are many technologies that can help detect and prevent loops in a Junos Fusion Enterprise. See Understanding Loop Detection and Prevention on a Junos Fusion.

In this example, loop detection is enabled on all extended ports. You can configure loop detection in a Junos Fusion to detect accidental loops caused by faulty wiring or by VLAN configuration errors. When loop detection is enabled on an extended port, the port periodically transmits a Layer 2 multicast packet—in this example, the packet is sent using the default interval of 30 seconds—with a user-defined MAC address. If a loop is detected on an extended port interface in the Junos Fusion topology, the ingress interface is logically shut down and a loop-detect error is flagged.

RSTP is used in this topology to prevent loops on the network ports—the non-cascade ports that send and receive network traffic—on the aggregation devices.

To enable loop prevention and RSTP:

  1. Create the configuration groups for loop prevention:

    Aggregation device 1:

    Aggregation device 2:

  2. Enable loop detection on all extended ports in the Junos Fusion Enterprise:

    Aggregation device 1:

    Because no loop detection transmit interval timer is set, a loop detection packet is sent at the default interval of every 30 seconds.

  3. Specify the MAC address to use in the loop detection packet:

    Aggregation device 1:

    Any unique MAC address can be specified as the MAC address in this step.

  4. Specify the network ports—the non-cascade ports that send and receive network traffic—on the aggregation devices that will enable RSTP.

    This step assumes ae2 and ae3 are configured on both aggregation devices, and that matching RSTP configuration is desired on the interfaces on both aggregation devices. See Configuring Aggregated Ethernet Links (CLI Procedure).

    Aggregation device 1:

  5. Configure the RSTP system identifier. The RSTP system identifier is used to identify the RSTP instance.

    Aggregation device 1:

  6. Set the RSTP bridge priority. The bridge priority in RSTP is used by the spanning tree algorithm to determine the root bridge in the spanning tree instance.

    Setting the bridge priority to 0 ensures the aggregation devices assume the root bridge role in the spanning tree instance.

    Aggregation device 1:
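    Steps 4 through 6 can be sketched as follows (the group name loop-config and the system identifier MAC address, drawn from the 00:00:5e:00:53:xx documentation range, are assumed values; the ae2 and ae3 interfaces and the bridge priority of 0 come from this example):

    [edit]
    user@ad1-ex9208# set groups loop-config protocols rstp interface ae2
    user@ad1-ex9208# set groups loop-config protocols rstp interface ae3
    user@ad1-ex9208# set groups loop-config protocols rstp system-identifier 00:00:5e:00:53:10
    user@ad1-ex9208# set groups loop-config protocols rstp bridge-priority 0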

  7. Commit the configuration. Commit synchronization commits the configuration on both aggregation devices.

Enabling Power over Ethernet, LLDP, and LLDP-MED

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them in a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode.

Aggregation device 1:

Aggregation device 2:

Step-by-Step Procedure

This procedure shows how to enable Power over Ethernet (PoE), LLDP, and LLDP-MED for this topology, without using configuration groups.

Note

These instructions assume the PoE firmware versions were checked and updated as described in Preparing the Satellite Devices.

  1. Enable PoE on all extended ports that support PoE:

    Aggregation device 1:

    [edit]

    user@ad1-ex9208# set poe interface all-extended

    Aggregation device 2:

    [edit]

    user@ad2-ex9208# set poe interface all-extended

    The default PoE values (class power management mode, a maximum power of 30 W per PoE interface, and 0 W reserved for the guard band) are unchanged in this configuration.

  2. Enable LLDP on all extended ports:

    Aggregation device 1:

    [edit]

    user@ad1-ex9208# set protocols lldp interface all

    Aggregation device 2:

    [edit]

    user@ad2-ex9208# set protocols lldp interface all

    Note

    LLDP is enabled by default. This configuration step is added to ensure LLDP is enabled on all extended ports.

  3. Enable LLDP-MED on all extended ports:

    Aggregation device 1:

    [edit]

    user@ad1-ex9208# set protocols lldp-med interface all

    Aggregation device 2:

    [edit]

    user@ad2-ex9208# set protocols lldp-med interface all

    Note

    LLDP-MED is enabled by default. This configuration step is added to ensure LLDP-MED is enabled on all extended ports.

Results

From configuration mode, confirm your PoE configuration by entering the show poe command on each aggregation device.

Aggregation device 1:

Aggregation device 2:

Confirm your LLDP and LLDP-MED configuration by entering the show protocols command on each aggregation device.

Aggregation device 1:

Aggregation device 2:

Verification

Confirm that the configuration is working properly.

Verifying that the Satellite Devices are Online

Purpose

Verify that the satellite devices in the Junos Fusion are active.

Action

Enter the show chassis satellite command from either aggregation device:

Meaning

The Junos Fusion Enterprise topology is properly configured. The device state for each satellite device in the Junos Fusion Enterprise topology is online, as is the port state for each cascade port.

Verifying that PoE is Enabled

Purpose

Verify that PoE is enabled on the satellite devices' extended ports.

Action

Confirm PoE is enabled on individual interfaces by entering the show poe interface command from either aggregation device.

Meaning

The show poe interface output confirms that the admin status is Enabled for all interfaces on the satellite devices. The satellite device interfaces can be identified by the FPC ID number; in this output, the PoE status for interfaces on FPC 101 and 102 is shown.

Verifying that LLDP and LLDP-MED are Enabled

Purpose

Verify that LLDP and LLDP-MED are enabled.

Action

Confirm LLDP and LLDP-MED are enabled by entering the show lldp command on either aggregation device:

Meaning

The output confirms that LLDP and LLDP-MED are enabled.

Verifying that VLANs are Operational

Purpose

Verify the VLANs exist and are associated with the correct interfaces.

Action

Enter the show vlans command on either aggregation device:

Meaning

The output confirms that VLAN v10 is present on the aggregation device, and the correct interfaces are associated with the VLAN.

Verifying That the Aggregated Ethernet Interface Connecting Aggregation Device 2 to Branch Office 2 is Online

Purpose

Verify that the aggregated Ethernet interface connecting aggregation device 2 to branch office 2 is up and operational.

Action

Confirm the aggregated Ethernet interface is up by entering the show interfaces ae0 brief command:

Meaning

The output confirms that the aggregated Ethernet interface is enabled and that the physical link is up.