  • Software Redundancy for MC-LAG

    For Inter-Chassis Control Protocol (ICCP), peering with the ICCP peer's loopback IP address is recommended so that the ICCP session is not brought down by a failure of the direct link between multichassis link aggregation group (MC-LAG) peers.

    Learn More
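
    For example, a minimal ICCP sketch peering on loopback addresses (the addresses 10.255.0.1 and 10.255.0.2 and the redundancy group ID are assumed placeholders):

      set protocols iccp local-ip-addr 10.255.0.1
      set protocols iccp peer 10.255.0.2 redundancy-group-id-list 1
      set protocols iccp peer 10.255.0.2 liveness-detection minimum-interval 1000

    Because the session is anchored to loopback addresses, it stays up as long as any routed path between the peers remains available.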

  • Software Redundancy for MC-LAG

    Inter-Chassis Control Protocol (ICCP) is used to signal multichassis aggregated Ethernet (MC-AE) link state between the multichassis link aggregation group (MC-LAG) peers.

    Learn More

  • Between the DC Edge and DC Core

    For better convergence, configure the hold-down timer higher than the Bidirectional Forwarding Detection (BFD) timer (one second).

    Learn More
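
    A minimal sketch of this tuning, assuming BFD runs under BGP toward the DC core and that the group name, interface name, and timer values are placeholders:

      set protocols bgp group dc-core bfd-liveness-detection minimum-interval 333
      set protocols bgp group dc-core bfd-liveness-detection multiplier 3
      set interfaces xe-0/0/0 hold-time up 2000 down 0

    Here BFD detects a failure in roughly one second (333 ms x 3), and the 2000 ms interface hold-time exceeds that detection time, as the tip recommends.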

  • Between the DC Edge and DC Core

    When a failover occurs, the secondary node must use gratuitous ARP to announce to the peer device that it is now the owner of the MAC address associated with the redundant Ethernet interface (the redundant Ethernet MAC address is shared between nodes).

    Learn More

  • Between the DC Core and DC PODs

    When configuring multichassis aggregated Ethernet (MC-AE) interface parameters, status-control must be active on one provider edge router and standby on the other node.

    Learn More
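
    A sketch of the corresponding MC-AE parameters (interface name and ID values are assumed placeholders); note that status-control and chassis-id differ between the peers while mc-ae-id and mode match:

      On the first provider edge router:
        set interfaces ae0 aggregated-ether-options mc-ae mc-ae-id 1
        set interfaces ae0 aggregated-ether-options mc-ae chassis-id 0
        set interfaces ae0 aggregated-ether-options mc-ae mode active-active
        set interfaces ae0 aggregated-ether-options mc-ae status-control active

      On the second provider edge router:
        set interfaces ae0 aggregated-ether-options mc-ae mc-ae-id 1
        set interfaces ae0 aggregated-ether-options mc-ae chassis-id 1
        set interfaces ae0 aggregated-ether-options mc-ae mode active-active
        set interfaces ae0 aggregated-ether-options mc-ae status-control standby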

  • Between the DC Core and DC PODs

    When configuring MC-AE interface parameters, both nodes can be configured with prefer-status-control-active for better convergence in node reboot scenarios; with this option, you must ensure that the ICCP session can go down only as a result of physical link failures.

    Learn More

  • Between the DC Core and DC PODs

    You must configure the same network-wide unique service ID on each provider edge router that provides a given service. This service ID is required if the MC-AE interfaces are part of a bridge domain (see the sketch below).

    Learn More
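
    On an MX-based core, for example, the shared service ID is set globally on both MC-LAG peers (the value 10 is an assumed placeholder):

      set switch-options service-id 10

    The same value must be configured on both provider edge routers that serve the MC-AE interfaces in the bridge domain.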

  • Software Redundancy for QFabric-M Configuration

    NSB operates by synchronizing all protocol information for NSB-supported Layer 2 protocols between the master and backup Routing Engines. If the switch has a Routing Engine switchover, the NSB-supported Layer 2 protocol sessions remain active.

    Learn More

  • Configuring Chassis Clustering Data Interfaces

    Redundant Ethernet interface link aggregation groups (LAGs) are configured toward the edge firewall and core switch.

    Learn More

  • Configuring Network Address Translation

    When destination Network Address Translation (NAT) is performed, the destination IP address is translated according to configured destination NAT rules and then security policies are applied.

    Learn More
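
    A minimal destination NAT sketch on the SRX (the zone, pool, and address values are assumed placeholders); because translation happens before the policy lookup, the security policy must reference the translated (real) server address:

      set security nat destination pool web-pool address 10.1.1.10/32
      set security nat destination rule-set from-untrust from zone untrust
      set security nat destination rule-set from-untrust rule dnat-web match destination-address 203.0.113.10/32
      set security nat destination rule-set from-untrust rule dnat-web then destination-nat pool web-pool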

  • Configuring Intrusion Detection and Prevention

    There can be only one active intrusion detection and prevention (IDP) policy. The active IDP policy can be applied to multiple rules.

    Learn More
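
    A brief sketch of selecting and applying an IDP policy (the policy, zone, and rule names are assumed placeholders, and a release that uses the active-policy statement is assumed):

      set security idp active-policy Recommended
      set security policies from-zone untrust to-zone trust policy allow-web then permit application-services idp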

  • Configuring Compute Hardware

    When configuring the IBM System x3750 M4 in the OOB role, each server has four 10-Gigabit Ethernet NIC ports connected to the QFX3000-M QFabric system as data ports for all VM traffic. A LAG connecting each system to each POD provides switching redundancy in case of a POD failure.

    Learn More

  • Virtualization

    Link Aggregation Control Protocol (LACP) can only be configured via the vSphere Web Client.

    Learn More

  • Mounting Storage Using the iSCSI Protocol

    When mounting storage using the Internet Small Computer System Interface (iSCSI) protocol, the ESXi host must have permission to access the storage array.

    Learn More

  • Configuring EMC Storage

    When configuring EMC storage, once Storage Groups are enabled for a storage system, any host connected to the storage system can no longer access storage data. For a host to access the storage data, add LUNs to a Storage Group and then connect the host to that Storage Group.

    Learn More

  • Load Balancing and the Link and Network

    When configuring the link and network for external connections, static routes for the VIPs configured on the F5 are advertised from the core switch so that clients on the Internet can send requests to the VIP for specific services such as Exchange, Wikimedia, and SharePoint.

    Learn More

  • Load Balancing and the VIP and Server Pool

    When configuring virtual server IP address (VIP) and server pools, it is recommended to use the nPath template in the F5 configuration GUI (Template and wizards window) in Direct Server Return (DSR) mode.

    Learn More

  • Converting Devices to Participate in VCF

    The spine device that has been powered on the longest of all spine devices assumes the master Routing Engine role; the spine device that has been powered on the second-longest assumes the backup Routing Engine role. The remaining spine devices assume the line card role.

    Learn More

  • Connecting VCF Devices

    When you interconnect the et-fpc/pic/port interfaces between devices configured to participate in a Virtual Chassis Fabric (VCF), the ports automatically convert to Virtual Chassis ports (VCP).

    Learn More

  • OSPF and LLDP for VCF

    When you enable LLDP on VCF members, interfaces between connected members may automatically become VCP ports. Network Director requires that LLDP be enabled on network ports for orchestration services, and that no other devices be connected to the preprovisioned VCF after initial setup.

    Learn More

  • Net. Director 1.6 for Orchestrating VCF

    For Network Director 1.6, if Link Layer Discovery Protocol (LLDP) is not running, use vCenter WebGUI 5.1 to configure LLDP.

    Learn More

  • Net. Director 1.6 for Orchestrating VCF

    For Network Director orchestration services, the process to create the group configuration and push it to the VCF can take some time; the completion time varies depending on how many servers and VMs exist in the network.

    Learn More

  • VMware NSX

    When configuring VMware NSX for MAC address learning, if your environment is 100 percent virtualized, you should use either unicast or hybrid mode.

    Learn More

  • Configuring VXLAN Transport

    A transport zone is an abstract zone that defines how VMware NSX handles MAC address learning. A single transport zone is sufficient for a small or medium enterprise private cloud. However, to build a scale-out architecture, it is a good idea to create one transport zone per POD.

    Learn More

  • Configuring a VXLAN Segment ID

    If you plan to implement Segment ID pools in an NSX manager production environment, you need to create a large range of multicast address pools.

    Learn More

  • Configuring the VMware NSX Edge Gateway

    For New NSX Edge deployment options, check the VMware NSX documentation to see which appliance size suits your production data center, depending on scale and performance requirements.

    Learn More

  • Virtual Chassis Fabric Overview

    Only two devices can simultaneously operate in the Routing Engine role within a Virtual Chassis Fabric (VCF). A VCF supports up to four spine devices. When a VCF has three or more spine devices, the devices that are not operating in the Routing Engine role operate in the line card role.

    Learn More

  • Virtual Chassis Fabric Overview

    A spine device that is configured into the Routing Engine role but is operating in the line card role assumes the Routing Engine role when an active Routing Engine fails.

    Learn More

  • Master Routing Engine Election Process

    Configure the mastership priority of the QFX5100 devices in your Virtual Chassis Fabric (VCF) to ensure that the correct devices assume their intended roles when you configure your VCF using a nonprovisioned configuration.

    Learn More
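
    For a nonprovisioned VCF, this amounts to mastership-priority settings along these lines (member numbers and values are assumed placeholders; 255 is the highest priority):

      set virtual-chassis member 0 mastership-priority 255
      set virtual-chassis member 1 mastership-priority 255
      set virtual-chassis member 2 mastership-priority 128

    Members 0 and 1, the intended spine Routing Engines, receive the highest priority; leaf members keep a lower value.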

  • Fabric Mode

    In autoprovisioned configurations, a spine device is not participating as a VCF member until it is configured into fabric mode. A spine device that is not configured into fabric mode is configured into fabric mode when it is interconnected into the VCF.

    Learn More

  • Hardware Requirements for VCF

    You can configure any combination of QFX5100, QFX3600, QFX3500, or EX4300 devices as leaf devices within your Virtual Chassis Fabric (VCF).

    Learn More

  • VCF License Requirements

    For a Virtual Chassis Fabric (VCF) deployment, two license keys are recommended for redundancy—one for the device in the master Routing Engine role and the other for the device in the backup Routing Engine role.

    Learn More

  • Logging into a VCF

    The recommended method of logging in to a Virtual Chassis Fabric (VCF) is through the use of a Virtual Management Ethernet (VME) interface. The VME interface is a logical interface representing all of the out-of-band management ports on the member devices.

    Learn More

  • QFX Series VCFs

    The optimal VCF topology is to use QFX5100 devices only. A VCF composed entirely of QFX5100 devices supports the largest breadth of features at the highest scalability while also supporting the highest number of high-speed interfaces.

    Learn More

  • Autoprovisioning a Virtual Chassis Fabric

    A spine device whose fabric or mixed mode setting is improperly set cannot join a Virtual Chassis Fabric (VCF). You can check the mode settings by using the show virtual-chassis mode command.

    Learn More

  • Autoprovisioning a VCF

    You can use the request virtual-chassis mode fabric local or request virtual-chassis mode mixed local commands to set a spine device into fabric or mixed mode after interconnecting your Virtual Chassis Fabric (VCF).

    Learn More

  • Autoprovisioning a VCF

    Fabric and mixed mode settings are automatically updated for a leaf device when it is interconnected into an autoprovisioned VCF. If fabric or mixed mode settings are changed when a device is interconnected into a VCF, the leaf device reboots before joining the VCF.

    Learn More

  • Autoprovisioning a VCF

    Mixed mode and fabric mode are checked and set automatically on the device. If mixed or fabric mode has to be changed to become part of the VCF, the device reboots. The device participates in the VCF with no further user intervention after this reboot is complete.

    Learn More

  • Preprovisioning a VCF

    The automatic Virtual Chassis port (VCP) conversion feature is enabled and automatically configures SFP+ and QSFP+ interfaces into VCPs when the VCF configuration mode is set to preprovisioned. You do not need to manually configure VCPs.

    Learn More

  • Preprovisioning a VCF

    If you want to use an SFP+ or QSFP+ interface as a network interface, disable LLDP on that interface, as shown below.

    Learn More
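
    For example (the interface name is an assumed placeholder):

      set protocols lldp interface xe-0/0/10 disable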

  • Configuring a Nonprovisioned VCF

    A spine device that is not selected as the master or backup Routing Engine assumes the line card role. You should configure the spine devices with a higher mastership priority value than the leaf devices to ensure that a spine device assumes the Routing Engine role.

    Learn More

  • Traceoptions (Virtual Chassis) Configuration Statement

    The all flag displays a subset of logs that are useful in debugging most issues. For more detailed information, use all detail.

    Learn More

  • Virtual Chassis Physical Connections

    Virtual Chassis technology does not require cable connections to be in the form of a ring. However, it is highly recommended that you close the loop with a ring configuration to provide resiliency.

    Learn More

  • Virtual Chassis Physical Connections

    You can bundle extended Virtual Chassis connections into a single logical group to provide more Virtual Chassis bandwidth and resiliency on supported Junos releases.

    Learn More

  • Virtual Chassis Implementation

    A preprovisioned Virtual Chassis offers the additional benefit of allowing nonstop software upgrade (NSSU) on supported models.

    Learn More

  • Virtual Chassis Physical Connections

    By default, the actor and partner send Link Aggregation Control Protocol (LACP) packets every second (fast mode). The interval can be fast (every second) or slow (every 30 seconds).

    Learn More
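
    A sketch of the relevant LACP settings on an aggregated Ethernet interface (the interface name is an assumed placeholder):

      set interfaces ae0 aggregated-ether-options lacp active
      set interfaces ae0 aggregated-ether-options lacp periodic fast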

  • Network Topology (Logical Topology)

    For additional Routed VLAN Interfaces (RVIs), just increase the unit number. The unit number can be arbitrary and does not have to be sequential. However, it is recommended that the RVI unit number match the VLAN-ID.

    Learn More
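
    For example, an RVI for VLAN 100 whose unit number matches the VLAN ID (the VLAN name and addresses are assumed placeholders):

      set vlans v100 vlan-id 100
      set vlans v100 l3-interface vlan.100
      set interfaces vlan unit 100 family inet address 10.0.100.1/24
      set interfaces vlan unit 100 family inet6 address 2001:db8:100::1/64

    The last line also illustrates the family inet6 statement described in the next tip.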

  • Network Topology (Logical Topology)

    To configure an IPv6 address, use family inet6.

    Learn More

  • Network Topology (Logical Topology)

    An interface cannot be configured for both root protection and loop protection at the same time.

    Learn More
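
    A sketch under RSTP, with root protection on one interface and loop protection on another (interface names are assumed placeholders):

      set protocols rstp interface ge-0/0/10 no-root-port
      set protocols rstp interface ge-0/0/20 bpdu-timeout-action block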

  • Ethernet Switching

    An asterisk (*) next to a port in the show vlans operational command output denotes that the port is active (link up).

    Learn More

  • Ethernet Switching

    When enabled, an optional command allows LLDP-MED to advertise the QoS code point associated with the configured forwarding class. To advertise the proper QoS code point, a behavior aggregate (BA) classifier must be bound to the interface.

    Learn More

  • Ethernet Switching

    Multiple member ranges, members, or a combination of both can be configured under the same interface-range group.

    Learn More
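
    For example, combining a member range and an individual member in one group (the group name and port numbers are assumed placeholders):

      set interfaces interface-range access-ports member-range ge-0/0/0 to ge-0/0/23
      set interfaces interface-range access-ports member ge-0/0/47
      set interfaces interface-range access-ports unit 0 family ethernet-switching port-mode access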

  • EX Series Features

    To manually configure a port to be part of an MVRP-learned VLAN, the corresponding VLAN ID must be manually configured on the switch.

    Learn More

  • EX Series Features

    The concept of untrusted and trusted ports on dynamic ARP inspection (DAI) and IP source guard is the same as with the Dynamic Host Configuration Protocol (DHCP) snooping feature.

    Learn More

  • EX Series Features

    A typical form of IP spoofing is a Denial of Service (DoS) attack, where the attacker floods a target with TCP SYN packets in an attempt to overwhelm the device while hiding the actual source of the attack.

    Learn More

  • EX Series Features

    “P” models in the EX3200 and EX4200 lines provide support for enhanced Power over Ethernet (PoE), up to 18.6 watts at the power sourcing equipment (PSE), with supported Junos releases. “PX” models in the EX3200 and EX4200 lines support PoE+.

    Learn More

  • EX Series Features

    Power over Ethernet (PoE) is enabled by default on the fixed-configuration EX Series switches that support PoE. You can activate PoE simply by connecting powered devices (PDs) to the powered ports.

    Learn More

  • EX Series Features

    The default Power over Ethernet (PoE) management mode is static. For the EX2200, it is recommended that the mode be changed from static to class.

    Learn More
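
    On the EX2200 this is a single statement (a minimal sketch):

      set poe management class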

  • EX Series Features

    The sFlow sample limit of 300 packets/second is defined by the switch and is not user-configurable.

    Learn More

  • Configuring EVPN

    The multi-homing mode of all-active is configured, indicating that both multi-homed links between the CE and the PEs are always active. Single-active mode, in which only one multi-homed link is active at any given time, is also supported.

    Learn More
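
    A sketch of the multi-homed access interface on each PE (the ESI value and interface name are assumed placeholders; the same ESI must be configured on both PEs):

      set interfaces ae0 esi 00:11:11:11:11:11:11:11:11:11
      set interfaces ae0 esi all-active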

  • Configuring EVPN

    In real deployment scenarios, the use of route reflectors in a redundant configuration is recommended.

    Learn More

  • Configuring EVPN

    Set the Route Distinguisher (RD) to a network-wide unique value to prevent overlapping routes between different EVIs. We recommend using a Type 1 RD, in which the value field consists of an IP address of the PE (typically the loopback address) followed by a number unique to the PE.

    Learn More
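
    For example, a Type 1 RD built from an assumed PE loopback address of 10.255.0.1:

      set routing-instances EVPN-100 route-distinguisher 10.255.0.1:100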

  • Configuring EVPN

    The access interface is configured as a link aggregation group (LAG) with a single link member. The reason is that it is desirable to enable LACP at the access layer to control initialization of the interface.

    Learn More

  • Configuring EVPN

    Configure the same IP and MAC address on all PEs for a given EVPN VLAN to simplify the configuration, reduce control plane overhead, and minimize the recovery time in the event a PE node fails.

    Learn More

  • Configuring EVPN

    Configure the VLAN-aware bundle service even if the EVPN Instance (EVI) is mapped to a single VLAN. This service provides the most flexibility and allows for an easier transition if changes to the service, such as adding more VLANs to the EVI, are made in the future.

    Learn More
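
    A sketch of a VLAN-aware bundle EVI using the virtual-switch instance type (the instance name, RD, route target, VLAN, and interface are assumed placeholders):

      set routing-instances EVPN-VS instance-type virtual-switch
      set routing-instances EVPN-VS route-distinguisher 10.255.0.1:200
      set routing-instances EVPN-VS vrf-target target:65000:200
      set routing-instances EVPN-VS protocols evpn extended-vlan-list 100
      set routing-instances EVPN-VS bridge-domains bd100 vlan-id 100
      set routing-instances EVPN-VS bridge-domains bd100 interface ae0.100

    Adding another VLAN later is then a matter of extending the extended-vlan-list and adding a bridge domain, without changing the service type.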

  • Verification

    The equivalent command for an EVPN Instance (EVI) configured as a Virtual Switch is show bridge mac-table bridge-domain instance.

    Learn More

  • Learn About: Data Center Bridging

    Data Center Bridging (DCB) relieves the effects of network congestion by using queue management techniques to prevent queue overflow (and thus frame drops) and bandwidth allocation enhancements to use port bandwidth as efficiently as possible.

    Learn More

  • Learn About: Data Center Bridging

    FCoE is native Fibre Channel frames encapsulated in Ethernet. The Ethernet network uses the Ethernet frame headers to forward and handle traffic appropriately.

    Learn More

  • Learn About: Data Center Bridging

    The Data Center Bridging (DCB) extensions to Ethernet standards support not only the transport of storage traffic such as FCoE and iSCSI, but also the transport of any traffic that requires lossless handling.

    Learn More

  • Learn About: Data Center Bridging

    Priority-based flow control (PFC), enhanced transmission selection (ETS), and Data Center Bridging Capability Exchange protocol (DCBX) are mandatory to support lossless transport over Ethernet. Quantized Congestion Notification (QCN) is optional and is rarely implemented.

    Learn More

  • Learn About: Data Center Bridging

    The code-point values identify traffic by priority, and all traffic on a link that requires the same treatment should use the same priority.

    Learn More

  • Learn About: Data Center Bridging

    Traffic that is not paused behaves as normal best-effort Ethernet traffic.

    Learn More

  • Learn About: Data Center Bridging

    Devices that support priority-based flow control (PFC) must have port buffers that are deep enough to store frames while the flow is paused.

    Learn More

  • Learn About: Data Center Bridging

    Priority-based flow control (PFC) must be configured on all of the device interfaces in the path of the flows that you want to be lossless.

    Learn More
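
    A sketch of enabling PFC for one priority on a QFX interface (the profile name, code point, and interface are assumed placeholders); the same priority must be enabled on every interface along the lossless path:

      set class-of-service congestion-notification-profile fcoe-cnp input ieee-802.1 code-point 011 pfc
      set class-of-service interfaces xe-0/0/20 congestion-notification-profile fcoe-cnp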

  • Learn About: Data Center Bridging

    Priorities in a priority group should have similar traffic handling requirements with respect to latency and frame loss.

    Learn More

  • Learn About: Data Center Bridging

    Quantized Congestion Notification (QCN) works best in situations when congestion is sustained for relatively long periods of time.

    Learn More

  • Learn About: Data Center Bridging

    Because Data Center Bridging Capability Exchange protocol (DCBX) is an extension of Link Layer Discovery Protocol (LLDP), if you disable LLDP on an interface, DCBX cannot run on that interface.

    Learn More

  • Learn About: Data Center Bridging

    Not all Data Center Bridging (DCB) feature configurations must match to ensure lossless transport.

    Learn More

  • Learn About: Data Center Bridging

    Without proper buffer management, priority-based flow control (PFC) does not work, because if buffers overflow, frames drop, and transport is not lossless.

    Learn More

  • Learn About: Data Center Bridging

    Interfaces on which Data Center Bridging Capability Exchange protocol (DCBX) is enabled automatically negotiate the priority-based flow control (PFC) and enhanced transmission selection (ETS) administrative state and configuration with the directly connected peer.

    Learn More

  • Learn About: Data Center Bridging

    Juniper Networks data center switches support two lossless priorities (classes of traffic) by default, and can support up to six lossless priorities.

    Learn More

  • Learn About: Data Center Fabric Fundamentals

    In data center fabrics, all connections are always active, providing multiple paths for all traffic and thereby minimizing latency and maximizing available bandwidth.

    Learn More

  • Learn About: Data Center Fabric Fundamentals

    Data center fabric architectures typically use only one or two tiers of switches. This is key to a fabric’s efficiency.

    Learn More

  • Learn About: Data Center Fabric Fundamentals

    The data center fabric architecture model unites all data center resources (servers, storage, the network, and peripherals), from processor cores to memory, in a flattened plane.

    Learn More

  • Learn About: Data Center Fabric Fundamentals

    Using high-bandwidth switches to interconnect the devices allows faster I/O speeds that support the convergence of data and storage traffic that were previously isolated.

    Learn More

  • Learn About: Data Center Fabric Fundamentals

    A fabric fundamentally supports virtualization, and its shared resources can adjust to the dynamic requirements of the applications that use them.

    Learn More

  • Virtual Chassis (VC) vs. Virtual Chassis Fabric (VCF) poster

    What are Virtual Chassis and Virtual Chassis Fabric? A Virtual Chassis scales to ten devices that operate as a single device, whether installed in one rack or in different locations connected by Ethernet. Virtual Chassis Fabric is optimized for small data centers, which can be as large as 30 racks, and provides a “single point of management” for the entire fabric.

    Learn More