
Example: Configuring Layer 2 and Layer 3 Network Services for the Midsize Enterprise Campus

 

This example details the configuration of BGP and OSPF routing, as well as multicast and DHCP relay, for campus networks. It is based on a validated design architecture.

Requirements

Table 1 shows the hardware and software requirements for this example. Table 2 shows the scaling and performance targets used for this example.

Table 1: Hardware and Software Requirements

Hardware      Device Name                    Software
MX240         cs-edge-r01, cs-edge-r02       13.2R2.4
SRX650        cs-edge-fw01, cs-edge-fw02     12.1X44-D39.4
EX9200/9250   cs-core-sw01, cs-core-sw02     13.2R3.7
EX4600        cs-agg-01                      12.3R3.4
EX2300        cs-2300-ab5                    12.3R3.4
EX3400        cs-3400-ab4                    12.3R3.4
EX4300        cs-4300-ab1                    12.3R3.4
EX4300        cs-4300-ab2, cs-4300-ab3       13.2X51-D21.1

Table 2: Node Features and Performance/Scalability

Edge (MX240, SRX650)
  Features: MC-LAG, OSPF, BGP, IRB
  Targets: 3k IPv4 routes

Core (EX9200/EX9250)
  Features: VLANs, MC-LAG, LAG, IGMP snooping, OSPF, PIM-SM, IGMP, DHCP relay, IRB
  Targets: 3k IPv4 routes; 128k MAC table entries; 16k ARP entries

Aggregation (EX4600)
  Features: VLANs, LAG, IGMP snooping, OSPF, PIM-SM, IGMP, DHCP relay, RVI
  Targets: 3k IPv4 routes; 5 IGMP groups

Access (EX3400, EX4300)
  Features: VLANs, LAG, 802.1X, IGMP snooping, DHCP snooping, ARP inspection, IP source guard
  Targets: 55k MAC table entries; 13k 802.1X users; 5 IGMP groups

The configuration details that follow assume that:

  • All physical cabling necessary has been completed.

  • All basic logical interfaces have been configured.

  • All devices have loopback interfaces configured.

Overview

This configuration example details advanced Layer 2 and Layer 3 connectivity that has been validated to support a modern enterprise campus. The campus is designed to scale and to provide network connectivity for an assortment of wired devices.

Topology

The midsize enterprise campus design consists of separate modules: edge, core, aggregation, and access. After the Layer 3 interfaces have been configured, the dynamic routing protocols can be provisioned. BGP is used at the edge, and OSPF is used inside the campus network. Figure 1 shows the routing topology used.

Figure 1: Routing Topology Diagram

Configuration

To configure Layer 2 and Layer 3 network services for the midsize enterprise campus, perform these tasks:

Configuring Layer 3 Interfaces for the Midsize Enterprise Campus

Step-by-Step Procedure

To configure Layer 3 interfaces for the midsize enterprise campus, follow these steps:

  1. Configure bridge-domains on edge devices.

    This configuration was used for cs-edge-r01 and cs-edge-r02.

    In this example, the SRX cluster sends traffic to either cs-edge-r01 or cs-edge-r02 using the IRB 601 MAC address to route the packet. (The IRB 601 MAC address on cs-edge-r01 is different from the IRB 601 MAC address on cs-edge-r02.) The reth1 interface on the SRX cluster is a single LAG, so the LAG address hashing can result in a packet destined to the cs-edge-r01 MAC address being sent to cs-edge-r02. In an MC-LAG configuration, MAC address learning does not occur on the ICL link. As a result, cs-edge-r02 floods the packet on VLAN 601.

    To avoid flooding on VLAN 601, we specified the MAC address for cs-edge-r01 in the static-mac option on cs-edge-r02, and vice versa. Now when a packet destined to cs-edge-r01 arrives at cs-edge-r02, cs-edge-r02 sends the packet to cs-edge-r01 using the configured static MAC address.
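    As a sketch of this approach, the bridge domain and static MAC entry on cs-edge-r02 might look like the following. The bridge-domain name, aggregated Ethernet interface, and MAC address are hypothetical placeholders, not values from the validated design:

    ```
    # Sketch for cs-edge-r02; names and interfaces are hypothetical.
    # 00:11:22:33:44:01 stands in for the cs-edge-r01 IRB 601 MAC address.
    set bridge-domains bd-601 vlan-id 601
    set bridge-domains bd-601 routing-interface irb.601
    set bridge-domains bd-601 bridge-options interface ae1 static-mac 00:11:22:33:44:01
    ```

    A mirror-image static-mac entry, pointing at the cs-edge-r02 IRB 601 MAC address, goes on cs-edge-r01.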

  2. Configure IRB interfaces on edge devices for dynamic routing.

    This configuration was used for cs-edge-r01 and cs-edge-r02.
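    A minimal sketch of one such IRB interface follows; the unit number, description, and addressing are hypothetical:

    ```
    # Hypothetical unit number and addressing
    set interfaces irb unit 601 description "edge routing interface"
    set interfaces irb unit 601 family inet address 10.0.61.2/24
    ```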

  3. Configure VLAN interfaces on the core.

    This configuration was used for cs-core-sw01 and cs-core-sw02.

    Note

    The static-mac option on firewall-trust VLAN 600 prevents traffic arriving from the SRX cluster from flooding the VLAN.

    The SRX cluster sends traffic to both core switches using the IRB 600 MAC address to route the packet. The IRB 600 MAC address on cs-core-sw01 is different from the IRB 600 MAC address on cs-core-sw02. Because the reth0 interface on the chassis cluster is a single LAG, the reth0 LAG address hashing can result in a packet destined to the cs-core-sw01 MAC address being sent to cs-core-sw02. In an MC-LAG configuration, MAC address learning does not occur on the ICL link. As a result, cs-core-sw02 floods the packet on VLAN 600.

    To avoid flooding on VLAN 600, we specified the MAC address for cs-core-sw01 in the static-mac option on cs-core-sw02, and the MAC address for cs-core-sw02 on cs-core-sw01. Now when a packet destined to cs-core-sw01 arrives at cs-core-sw02, cs-core-sw02 sends the packet to cs-core-sw01 using the static MAC address.
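    On the core switches, a sketch of the firewall-trust VLAN with its static MAC entry might look like the following. The VLAN name, AE interface, and MAC address are hypothetical, and the exact hierarchy can vary by platform and release:

    ```
    # Sketch for cs-core-sw02; the static MAC stands in for the cs-core-sw01
    # IRB 600 MAC address. Names and interfaces are hypothetical.
    set vlans fw-trust vlan-id 600
    set vlans fw-trust l3-interface irb.600
    set vlans fw-trust switch-options interface ae0.0 static-mac 00:11:22:33:44:02
    ```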

  4. Configure IRB interfaces on the core for dynamic routing.

    This configuration was used for cs-core-sw01 and cs-core-sw02.
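    A core IRB interface prepared for routing can be sketched as follows. A VRRP group is shown because the design later advertises VRRP addresses into OSPF; the group number and all addresses are hypothetical placeholders:

    ```
    # Hypothetical addressing; the VRRP group and virtual address are placeholders
    set interfaces irb unit 600 family inet address 10.0.60.2/24 vrrp-group 60 virtual-address 10.0.60.1
    set interfaces irb unit 600 family inet address 10.0.60.2/24 vrrp-group 60 priority 200
    set interfaces irb unit 600 family inet address 10.0.60.2/24 vrrp-group 60 accept-data
    ```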

  5. On the core devices, create client VLANs that map to the access.

    This example configuration is shown for one client VLAN. Configure this for all relevant client VLANs in your campus.

  6. Create the IRB interface in the VLAN.
  7. Configure the IRB routing interface for client VLANs.
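    Steps 5 through 7 for one client VLAN can be sketched as follows; the VLAN name, ID, and addressing are hypothetical:

    ```
    # Hypothetical client VLAN; repeat for each client VLAN in the campus
    set vlans client-v101 vlan-id 101
    set vlans client-v101 l3-interface irb.101
    set interfaces irb unit 101 family inet address 10.1.1.1/24
    ```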
  8. Add voice VLAN ports to access switches where needed.
    • On access devices that are EX4300, EX3400, and EX2300 switches, the configuration is as follows:

    • On access devices that are EX4300 switches, the configuration is as follows:
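    On ELS-based access switches, voice VLAN ports are typically added through the switch-options voip hierarchy; the VLAN name, port, and forwarding class below are hypothetical:

    ```
    # Hypothetical voice VLAN and access port
    set vlans voice vlan-id 700
    set switch-options voip interface ge-0/0/10.0 vlan voice
    set switch-options voip interface ge-0/0/10.0 forwarding-class assured-forwarding
    ```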

Configuring DHCP Relay in the Midsize Enterprise Campus

Step-by-Step Procedure

DHCP relay is configured to support DHCP clients downstream.

To configure DHCP relay:

  1. Configure DHCP relay in the aggregation.
  2. Configure DHCP relay in the core.

    The same configuration is placed on cs-core-sw02.

    Note

    Configure dhcp-relay group all interface on all IRB interfaces in the core on both devices.
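    The two relay styles can be sketched as follows; the DHCP server address, interface names, and group names are hypothetical. The aggregation switch is shown with the legacy BOOTP helper (which matches the show helper statistics verification used later), while the core uses extended DHCP relay:

    ```
    # Aggregation (legacy BOOTP helper); server address is hypothetical
    set forwarding-options helpers bootp server 10.10.10.5
    set forwarding-options helpers bootp interface vlan.101

    # Core (extended DHCP relay); apply group all to every IRB interface
    set forwarding-options dhcp-relay server-group dhcp-servers 10.10.10.5
    set forwarding-options dhcp-relay active-server-group dhcp-servers
    set forwarding-options dhcp-relay group all interface irb.101
    ```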

Configuring Multicast on Core Devices

Step-by-Step Procedure

To enable multicast in the campus, multicast must be configured on the core, aggregation, and access.

To configure multicast on core devices:

  1. Enable tunnel services on campus core device cs-core-sw01.
  2. Configure core device cs-core-sw01 as the primary rendezvous point (RP).
    Note

    A higher priority setting designates the device as the primary RP in the bootstrap configuration.

  3. Configure PIM on all Layer 3 and IRB interfaces on cs-core-sw01.
  4. Configure IGMP on cs-core-sw01.

    Configure IGMP query settings.

    Configure IGMP snooping on all VLAN interfaces.

    Enable IGMP on IRB interfaces.

    Note

    At the global level, IGMP join and leave messages are replicated from the active link to the standby link of an MC-LAG interface, which enables faster recovery of membership information after failover. This command synchronizes multicast state across MC-LAG neighbors when bridge domains are configured.
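    Steps 1 through 4 on cs-core-sw01 can be sketched as follows; the RP address, priority value, and interface names are hypothetical, and exact hierarchies vary by release:

    ```
    # Hypothetical RP address, bootstrap priority, and interfaces
    set chassis fpc 0 pic 0 tunnel-services bandwidth 1g
    set protocols pim rp local address 10.0.100.1
    set protocols pim rp bootstrap-priority 200
    set protocols pim interface all mode sparse
    set protocols igmp interface irb.101
    set protocols igmp-snooping vlan all
    # Replicate IGMP state across the MC-LAG pair, as described in the note
    set multicast-snooping-options multichassis-lag-replicate-state
    ```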

  5. Enable tunnel services on campus core device cs-core-sw02.

  6. Configure core device cs-core-sw02 as the secondary RP.
    Note

    A lower priority setting designates this device as the secondary RP in the bootstrap configuration.

  7. Configure PIM on all Layer 3 and IRB interfaces on cs-core-sw02.
  8. Configure IGMP on cs-core-sw02.

    Configure IGMP query settings.

    Enable IGMP on all VLAN interfaces.

    Enable IGMP on IRB interfaces.

Configuring Multicast on Aggregation and Access Devices

Step-by-Step Procedure

To enable multicast in the campus, multicast must be configured on the core, aggregation, and access.

To configure multicast on the aggregation and access devices (cs-agg-01, cs-4300-ab1, cs-4300-ab2, cs-4300-ab3, cs-3400-ab4, and cs-2300-ab5):

  1. Enable PIM.
  2. Enable IGMP on all relevant VLANs connected to multicast clients.

    The following example is for one RVI:

    Note

    Enable IGMP on all RVIs.

  3. Enable IGMP snooping on all VLAN interfaces.
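    A sketch for one aggregation or access device follows; the interface name is hypothetical, and on switches that use RVIs the IGMP interface is a vlan.x unit rather than an irb unit:

    ```
    # Hypothetical RVI; enable IGMP on every RVI connected to multicast clients
    set protocols pim interface all mode sparse
    set protocols igmp interface vlan.101
    set protocols igmp-snooping vlan all
    ```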

Configuring BGP Routing

Step-by-Step Procedure

The edge layer of the campus is where the ISP handoff occurs. Here, standards-based EBGP is configured with two different ISP connections, ISP1 and ISP2, which are connected to cs-edge-r01 and cs-edge-r02, respectively. In this example, cs-edge-r01 and cs-edge-r02 peer with each other using IBGP with an export policy that enables next-hop self. The BGP local preference is configured to prefer the ISP1 gateway connected to cs-edge-r01.

Client device Internet access is provided using source NAT on the edge firewall and forwarded to the edge routers for Internet access to service provider networks. Remote access users connecting from the Internet will use the public IP address of the VPN gateway, so the appliance hosting the gateway IP subnet is advertised to the Internet using the export policy from the edge routers. To support redundancy, each edge router is advertising the same prefix into the Internet.

To configure BGP routing:

  1. Configure cs-edge-r01 interface to connect to ISP1.
    Note

    The hold-time setting has been tuned on this interface for faster convergence. Without it, convergence might take longer because this interface could be waiting to receive traffic while the underlying MC-LAG has not yet converged.

  2. Configure BGP on cs-edge-r01.
  3. Configure IBGP peering on cs-edge-r01 to cs-edge-r02.
  4. Configure next-hop self.
  5. Configure the routing policy on cs-edge-r01 for remote access.
  6. Configure the next-hop self policy.
  7. Configure BGP on cs-edge-r02.
  8. Configure IBGP peering on cs-edge-r02 to cs-edge-r01.
  9. Configure the remote access policy on cs-edge-r02.
  10. Configure the next-hop self policy on cs-edge-r02.
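The BGP steps on cs-edge-r01 can be sketched as follows. The ISP1 neighbor address comes from the verification section; the AS numbers, IBGP peering addresses, and policy names are hypothetical:

```
# AS numbers, IBGP loopback addresses, and policy names are hypothetical
set protocols bgp group ISP1 type external
set protocols bgp group ISP1 peer-as 64500
set protocols bgp group ISP1 local-preference 200
set protocols bgp group ISP1 neighbor 192.168.168.6
set protocols bgp group IBGP type internal
set protocols bgp group IBGP local-address 10.0.255.1
set protocols bgp group IBGP export next-hop-self
set protocols bgp group IBGP neighbor 10.0.255.2
set policy-options policy-statement next-hop-self term bgp-routes from protocol bgp
set policy-options policy-statement next-hop-self term bgp-routes then next-hop self
```

cs-edge-r02 mirrors this configuration with its own ISP2 peering and the lower default local preference.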

Configuring OSPF Routing for the Midsize Enterprise Campus

Step-by-Step Procedure

This solution uses OSPF as the IGP because of the protocol's widespread familiarity.

Key configuration parameters:

  • Two OSPF areas (area 0 and area 1) are configured to localize failures within an area boundary.

  • Edge routers and firewalls are configured with MC-LAG and IRB (Layer 3) interfaces in area 1.

  • The links between the core devices, the aggregation device, and the WAN are in area 0.

  • Each core switch is configured with an OSPF priority of 255, and each edge router with a priority of 254, to strictly enforce that the core and edge devices always become the designated router and backup designated router for that bridge domain.

  • All IRB and VRRP addresses are advertised into OSPF as passive interfaces so that OSPF adjacencies are not established over them.

  • Conditional default aggregate routes from the edge routers are redistributed toward the core and other devices to provide connectivity to the Internet.

  • LFA is configured on all OSPF links to improve convergence.

To configure OSPF routing:

  1. Enable LFA on OSPF links.

    The following command should be configured on all devices that will participate in OSPF.

    The IRB interfaces participating in OSPF should also have LFA enabled.

  2. Configure per-packet load balancing so that the Packet Forwarding Engine retains the LFA backup next hops.
  3. Configure OSPF on edge devices, cs-edge-r01 and cs-edge-r02.
  4. Enable BFD protection on the IRB routing interfaces on cs-edge-r01 and cs-edge-r02.
  5. Configure the conditional policy for the OSPF default route, based on the BGP route, on cs-edge-r01 and cs-edge-r02.
  6. Configure OSPF on the edge firewall devices.
  7. Export the subnet used for source NAT to the edge firewall.
  8. Configure OSPF on the aggregation device.
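The OSPF building blocks above can be sketched in one place. Interface names, area assignments, and policy names are hypothetical, not the exact validated configuration:

```
# LFA on OSPF links and IRBs (hypothetical interfaces)
set protocols ospf area 0.0.0.0 interface ae0.0 link-protection
set protocols ospf area 0.0.0.0 interface irb.600 link-protection
# DR/BDR enforcement and passive client IRBs
set protocols ospf area 0.0.0.0 interface irb.600 priority 255
set protocols ospf area 0.0.0.0 interface irb.101 passive
# BFD on an edge IRB routing interface
set protocols ospf area 0.0.0.1 interface irb.601 bfd-liveness-detection minimum-interval 300
# Per-packet load balancing so the PFE retains LFA backup next hops
set policy-options policy-statement pplb then load-balance per-packet
set routing-options forwarding-table export pplb
# Conditional default route into OSPF, tied to the BGP-learned route
set policy-options condition bgp-default-up if-route-exists 0.0.0.0/0 table inet.0
set policy-options policy-statement ospf-default term 1 from condition bgp-default-up
set policy-options policy-statement ospf-default term 1 from route-filter 0.0.0.0/0 exact
set policy-options policy-statement ospf-default term 1 then accept
set protocols ospf export ospf-default
```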

Verification

Confirm that the configuration is working properly.

Verifying BGP Routing on Edge Devices

Purpose

Verify that BGP routing is configured properly and running on the edge devices.

Action

  • Check the BGP summary table on edge devices.

    root@cs-edge-r01# run show bgp summary
    root@cs-edge-r02# run show bgp summary
  • Verify the routing table on cs-edge-r01. Check that the ISP1 route advertisement is received in the route table. Check that the ISP1 route is advertised as well.

    root@cs-edge-r01# run show route receive-protocol bgp 192.168.168.6
    root@cs-edge-r01# run show route advertising-protocol bgp 192.168.168.6
  • Verify the routing table on cs-edge-r02. Check that the ISP2 route advertisement received is in the route table. Check that the ISP2 route is advertised as well.

    root@cs-edge-r02# run show route receive-protocol bgp 192.168.168.10
    root@cs-edge-r02# run show route advertising-protocol bgp 192.168.168.10

Meaning

Confirm that dynamic routing protocols are running and that static and dynamic routes are properly learned and advertised.

Verifying OSPF Routing

Purpose

Verify that OSPF routing and LFA have been properly configured on the devices.

Action

  • Verify that all OSPF sessions are up.

    root@cs-core-sw01# run show ospf neighbor
    root@cs-core-sw02# run show ospf neighbor
    root@cs-agg-01# run show ospf neighbor
    root@cs-edge-fw01-node0# run show ospf neighbor
    root@cs-edge-r01# run show ospf neighbor
    root@cs-edge-r02# run show ospf neighbor
  • Verify the OSPF conditional-based default route advertisement into OSPF.

    root@cs-core-sw01# run show route 0.0.0.0
    root@cs-edge-fw01-node0# run show route 0.0.0.0
    root@cs-edge-r01# run show route 0.0.0.0
  • Verify the OSPF LFA routes.

    root@cs-core-sw01# run show ospf backup coverage
    root@cs-agg-01# run show ospf backup coverage

Meaning

Confirm that OSPF is configured properly and advertising routes learned from IBGP, and that LFA is enabled and working properly.

Verifying DHCP Relay

Purpose

Verify that DHCP relay has been properly configured and enabled on devices.

Action

  • Verify DHCP relay information on aggregation device.

    root@cs-agg-01# run show helper statistics
  • Verify DHCP relay on the core devices.

    root@cs-core-sw01# run show dhcp relay binding summary

Meaning

Confirm that DHCP relay is configured properly and has been enabled.

Verifying Multicast in the Midsize Enterprise Campus

Purpose

Verify that multicast has been properly configured on devices.

Action

  • Verify multicast routing on the aggregation device.

    root@cs-agg-01# run show multicast route
  • Verify PIM neighbors and PIM interfaces on the aggregation device.

    root@cs-agg-01# run show pim neighbors
    root@cs-agg-01# run show pim interfaces
  • Verify PIM rendezvous points on the aggregation device.

    root@cs-agg-01# run show pim rps
  • Verify IGMP interfaces and IGMP snooping on the aggregation device.

    root@cs-agg-01# run show igmp interface
    root@cs-agg-01# run show igmp-snooping membership
  • Verify multicast routing on the core devices.

    root@cs-core-sw01# run show multicast route
  • Verify PIM neighbors, PIM interfaces, and PIM joins on the core devices.

    root@cs-core-sw01# run show pim neighbors
    root@cs-core-sw01# run show pim interfaces
    root@cs-core-sw01# run show pim join
  • Verify PIM rendezvous points on the core devices.

    root@cs-core-sw01# run show pim rps
  • Verify IGMP interfaces, IGMP groups, and IGMP snooping on the core devices.

    root@cs-core-sw01# run show igmp interface
    root@cs-core-sw01# run show igmp group
    root@cs-core-sw01# run show igmp snooping membership

Meaning

Confirm that multicast has been properly configured and is now enabled on all devices.