
Tunnel Interfaces on MX Series Routers with MPC1 and MPC2

 

MPC1 and MPC2 on MX Series routers support the following tunnel interfaces:

  • gr-x/y/z—for GRE tunnels over IP

  • ip-x/y/z—for IP over IP tunnels

  • mt-x/y/z—for multicast tunnels

  • pe-x/y/z—for PIM encapsulator interface

  • pd-x/y/z—for PIM decapsulator interface

  • lt-x/y/z—for logical tunnel interface connecting logical routers

  • vt-x/y/z—for the virtual loopback tunnel interface used in VPNs to loop packets from the core back into the PE router for an additional route lookup

Note

x maps to the FPC slot, y maps to the PIC slot, and z maps to the port.
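Once a gr- interface exists, it is configured like any other Junos interface. The following sketch shows a minimal GRE tunnel on gr-0/0/0; all addresses are illustrative, and the exact statements may vary by Junos release:

```
[edit interfaces]
gr-0/0/0 {
    unit 0 {
        tunnel {
            source 192.0.2.1;         /* local tunnel endpoint (example address) */
            destination 198.51.100.1; /* remote tunnel endpoint (example address) */
        }
        family inet {
            address 10.0.0.1/30;      /* address on the tunnel interface itself */
        }
    }
}
```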

The following MPCs provide tunnel support parity with traditional Tunnel Services PICs, replacing them with tunnels supported on a virtual port of the MX Series Packet Forwarding Engine:

  • MX-MPC1-3D (MPC1)

  • MX-MPC1-3D-Q (MPC1Q)

  • MX-MPC2-3D (MPC2)

  • MX-MPC2-3D-Q (MPC2Q)

  • MX-MPC2-3D-EQ (MPC2EQ)

MX Series routers support a virtual PIC and a virtual port that are visible for tunnel configuration, eliminating the need for a tunnel PIC. Traditional tunnel PIC features are supported, including:

  • GRE keys

  • GRE clear-dont-fragment-bit
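As a sketch, these two features correspond to statements on the GRE logical interface; the hierarchy shown here follows common Junos usage and the key value is illustrative:

```
[edit interfaces gr-0/0/0 unit 0]
clear-dont-fragment-bit;          /* clear the DF bit so oversized packets can be fragmented */
tunnel {
    source 192.0.2.1;
    destination 198.51.100.1;
    key 1234;                     /* GRE key (illustrative value) */
}
```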

MPC1 and MPC2 have no tunnel PICs. Instead, the MX router reserves some of its bandwidth for tunneling. When tunnel traffic is present and the card is oversubscribed, all WAN ports are affected.

You create tunnel interfaces on MX Series routers by including the tunnel-services statement at the [edit chassis] hierarchy level for a particular FPC and PIC.
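A sketch of such a configuration, using the standard Junos tunnel-services syntax (adjust slot numbers to your chassis):

```
[edit chassis]
fpc 0 {
    pic 0 {
        tunnel-services {
            bandwidth 1g;    /* reserve 1 Gbps of PFE bandwidth for tunnels on PIC 0 */
        }
    }
    pic 1 {
        tunnel-services {
            bandwidth 1g;    /* likewise for PIC 1 */
        }
    }
}
```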

This configuration enables tunnel services with a bandwidth of 1 Gbps on PIC 0 and PIC 1 of FPC 0. With this configuration, the tunnel interfaces created are:

  • vt-0/0/0, ip-0/0/0, and so on for PIC 0

  • vt-0/1/0, ip-0/1/0, and so on for PIC 1

MPC1 and MPC2 support tunnel bandwidths of 1 Gbps and 10 Gbps. Tunnel interfaces and their associated configurations continue to work when an MX-DPC is replaced by an MPC. The router creates tunnel devices based on the tunnel services configuration. Although the same Packet Forwarding Engine supports vt-0/0/0 and vt-0/1/0, two devices must be created to remain compatible with the above configuration. The MPCs allow you to configure four tunnel MICs per MPC (to support vt-0/0/0, vt-0/1/0, vt-0/2/0, and vt-0/3/0), although there are only two physical MICs. This is achieved by creating logical MICs on the MPCs.
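For instance, to take advantage of the four logical tunnel PICs described above, tunnel services could be enabled on all four PIC slots of one MPC (a sketch; bandwidth values are illustrative):

```
[edit chassis]
fpc 1 {
    pic 0 { tunnel-services { bandwidth 10g; } }
    pic 1 { tunnel-services { bandwidth 10g; } }
    pic 2 { tunnel-services { bandwidth 10g; } }
    pic 3 { tunnel-services { bandwidth 10g; } }
}
```

This creates vt-1/0/0 through vt-1/3/0, even though the card has only two physical MICs.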

Packet Forwarding Engine Mapping and Tunnel Bandwidth for MPC1E and MPC1E-Q

The tunnel bandwidth for MPC1E and MPC1E-Q is 1 Gbps or 10 Gbps. However, if you do not specify the bandwidth in the configuration, it is set to 16 Gbps.

Table 1 shows the mapping between the tunnel bandwidth and the Packet Forwarding Engines for MPC1E and MPC1E-Q.

Table 1: Packet Forwarding Engine Mapping and Tunnel Bandwidth for MPC1E and MPC1E-Q

Tunnel PIC | Maximum Bandwidth per Tunnel PIC | PFE Mapping | Maximum Tunnel Bandwidth per PFE | Maximum PFE Bandwidth
PIC0       | 16 Gbps                          | PFE0        | 16 Gbps                          | 40 Gbps
PIC1       | 16 Gbps                          | PFE0        | 16 Gbps                          | 40 Gbps
PIC2       | 16 Gbps                          | PFE0        | 16 Gbps                          | 40 Gbps
PIC3       | 16 Gbps                          | PFE0        | 16 Gbps                          | 40 Gbps

Packet Forwarding Engine Mapping and Tunnel Bandwidth for MPC2E, MPC2E-Q, and MPC2E-EQ

The tunnel bandwidth for MPC2E, MPC2E-Q, and MPC2E-EQ is 1 Gbps or 10 Gbps. However, if you do not specify the bandwidth in the configuration, it is set to 16 Gbps.

Table 2 shows the mapping between the tunnel bandwidth and the Packet Forwarding Engines for MPC2E, MPC2E-Q, and MPC2E-EQ.

Table 2: Packet Forwarding Engine Mapping and Tunnel Bandwidth for MPC2E, MPC2E-Q, and MPC2E-EQ

Tunnel PIC | Maximum Bandwidth per Tunnel PIC | PFE Mapping | Maximum Tunnel Bandwidth per PFE | Maximum PFE Bandwidth
PIC0       | 16 Gbps                          | PFE0        | 16 Gbps                          | 40 Gbps
PIC1       | 16 Gbps                          | PFE0        | 16 Gbps                          | 40 Gbps
PIC2       | 16 Gbps                          | PFE1        | 16 Gbps                          | 40 Gbps
PIC3       | 16 Gbps                          | PFE1        | 16 Gbps                          | 40 Gbps
