Example: Performing Output Scheduling and Shaping in Hierarchical CoS Queues for Traffic Routed to GRE Tunnels

This example shows how to configure a generic routing encapsulation (GRE) tunnel device to perform CoS output scheduling and shaping of IPv4 traffic routed to GRE tunnels. This feature is supported on MX Series routers running Junos OS Release 12.3R4 or a later 12.3 revision, 13.2R2 or a later 13.2 revision, or 13.3R1 or later, with GRE tunnel interfaces configured on MPC1 Q, MPC2 Q, or MPC2 EQ modules.

Requirements

This example uses the following Juniper Networks hardware and Junos OS software:

  • Transport network—An IPv4 network running Junos OS Release 13.3.

  • GRE tunnel device—One MX80 router installed as an ingress provider edge (PE) router.

  • Input and output logical interfaces configurable on two ports of the built-in 10-Gigabit Ethernet Modular Interface Card (MIC).

Overview

In this example, you configure the router with input and output logical interfaces for IPv4 traffic, and then you convert the output logical interface to four GRE tunnel source interfaces. You also install static routes in the routing table so that input traffic is routed to the four GRE tunnels.

Note:

Before you apply a traffic control profile with a scheduler-map and shaping rate to a GRE tunnel interface, you must configure and commit a hierarchical scheduler on the GRE tunnel physical interface, specifying a maximum of two hierarchical scheduling levels for node scaling.
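
For reference, a minimal sketch of that prerequisite, using the gr-1/1/10 tunnel physical interface from this example (the user@host prompt is a placeholder):

    [edit]
    user@host# set interfaces gr-1/1/10 hierarchical-scheduler maximum-hierarchy-levels 2
    user@host# commit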

Configuration

To configure scheduling and shaping in hierarchical CoS queues for traffic routed to GRE tunnel interfaces configured on MPC1 Q, MPC2 Q, or MPC2 EQ modules on an MX Series router, perform these tasks:

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, and then copy and paste the commands into the CLI at the [edit] hierarchy level.
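
For example, a single statement pasted and committed at the [edit] hierarchy level looks like this (the user@host prompt and the 1g bandwidth value are placeholders; the complete command sets are sketched in the sections that follow):

    [edit]
    user@host# set chassis fpc 1 pic 1 tunnel-services bandwidth 1g
    user@host# commit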

Configuring Interfaces, Hierarchical Scheduling on the GRE Tunnel Physical Interface, and Static Routes

Configuring Output Scheduling and Shaping at GRE Tunnel Physical and Logical Interfaces

Configuring Interfaces, Hierarchical Scheduling on the GRE Tunnel Physical Interface, and Static Routes

Step-by-Step Procedure

To configure GRE tunnel interfaces (including enabling hierarchical scheduling) and static routes (a combined configuration sketch follows these steps):

  1. Configure the amount of bandwidth for tunnel services on the physical interface.

  2. Configure the GRE tunnel device input logical interface.

  3. Configure the GRE tunnel device output logical interface.

  4. Convert the output logical interface to four GRE tunnel interfaces.

  5. Enable the GRE tunnel interfaces to use hierarchical scheduling.

  6. Install static routes in the routing table so that the device routes IPv4 traffic to the GRE tunnel source interfaces.

    Traffic destined to the subnets 10.2.2.0/24, 10.3.3.0/24, 10.4.4.0/24, and 10.5.5.0/24 is routed to the tunnel interfaces at IP addresses 10.70.1.1, 10.80.1.1, 10.90.1.1, and 10.100.1.1, respectively.

  7. If you are done configuring the device, commit the configuration.
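
A hedged sketch of the combined configuration these steps produce is shown below. The interface names, tunnel sources, route prefixes, and the 10.70.1.3 tunnel destination are taken from this example; the remaining tunnel destinations, the ge- unit addresses, and the 1g tunnel-services bandwidth are illustrative assumptions:

    set chassis fpc 1 pic 1 tunnel-services bandwidth 1g
    set interfaces ge-1/1/0 unit 0 family inet address 192.0.2.1/24
    set interfaces ge-1/1/1 unit 0 family inet address 192.0.2.5/24
    set interfaces gr-1/1/10 hierarchical-scheduler maximum-hierarchy-levels 2
    set interfaces gr-1/1/10 unit 1 tunnel source 10.70.1.1
    set interfaces gr-1/1/10 unit 1 tunnel destination 10.70.1.3
    set interfaces gr-1/1/10 unit 1 family inet
    set interfaces gr-1/1/10 unit 2 tunnel source 10.80.1.1
    set interfaces gr-1/1/10 unit 2 tunnel destination 10.80.1.3
    set interfaces gr-1/1/10 unit 2 family inet
    set interfaces gr-1/1/10 unit 3 tunnel source 10.90.1.1
    set interfaces gr-1/1/10 unit 3 tunnel destination 10.90.1.3
    set interfaces gr-1/1/10 unit 3 family inet
    set interfaces gr-1/1/10 unit 4 tunnel source 10.100.1.1
    set interfaces gr-1/1/10 unit 4 tunnel destination 10.100.1.3
    set interfaces gr-1/1/10 unit 4 family inet
    set routing-options static route 10.2.2.0/24 next-hop gr-1/1/10.1
    set routing-options static route 10.3.3.0/24 next-hop gr-1/1/10.2
    set routing-options static route 10.4.4.0/24 next-hop gr-1/1/10.3
    set routing-options static route 10.5.5.0/24 next-hop gr-1/1/10.4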

Results

From configuration mode, confirm your configuration by entering the show chassis fpc 1 pic 1, show interfaces ge-1/1/0, show interfaces ge-1/1/1, show interfaces gr-1/1/10, and show routing-options commands. If the output does not display the intended configuration, repeat the instructions in this example to correct the configuration.

Confirm the configuration of interfaces, hierarchical scheduling on the GRE tunnel physical interface, and static routes.

Measuring GRE Tunnel Transmission Rates Without Shaping Applied

Step-by-Step Procedure

To establish a baseline measurement, note the transmission rates at each GRE tunnel source.

  1. Pass traffic through the GRE tunnels at logical interfaces gr-1/1/10.1, gr-1/1/10.2, and gr-1/1/10.3.

  2. To display the traffic rates at each GRE tunnel source, use the show interfaces queue operational mode command.

    The following example command output shows detailed CoS queue statistics for logical interface gr-1/1/10.1 (the GRE tunnel from source IP address 10.70.1.1 to destination IP address 10.70.1.3).

    Note:

    This step shows command output for queue 0 (forwarding class be) only.

    The command output shows that the GRE tunnel device transmits traffic from queue 0 at a rate of 4879 pps. Allowing for 182 bytes per Layer 3 packet, preceded by 24 bytes of GRE overhead (a 20-byte delivery header consisting of the IPv4 packet header followed by 4 bytes for GRE flags plus encapsulated protocol type), the traffic rate received at the tunnel destination device is 8,040,592 bps.
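
    The arithmetic behind that figure, using only the packet sizes given above:

        (182 bytes + 24 bytes) × 8 bits/byte = 1,648 bits per packet
        1,648 bits/packet × 4,879 pps = 8,040,592 bps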

Configuring Output Scheduling and Shaping at GRE Tunnel Physical and Logical Interfaces

Step-by-Step Procedure

To configure the GRE tunnel device with scheduling and shaping at GRE tunnel physical and logical interfaces (a combined configuration sketch follows these steps):

  1. Define eight transmission queues.

    Note:

    To configure up to eight forwarding classes with one-to-one mapping to output queues for interfaces on M120, M320, MX Series, and T Series routers and EX Series switches, use the queue statement at the [edit class-of-service forwarding-classes] hierarchy level.

    If you need to configure up to 16 forwarding classes with multiple forwarding classes mapped to single queues for those interface types, use the class statement instead.

  2. Configure behavior aggregate (BA) classifier gr-inet, which sets the forwarding class and loss-priority value of an incoming packet based on the IPv4 precedence bits set in the packet.

  3. Apply BA classifier gr-inet to the GRE tunnel device input at logical interface ge-1/1/0.0.

  4. Define a scheduler for each forwarding class.

  5. Define a scheduler map for each of three GRE tunnels.

  6. Define traffic control profiles for three GRE tunnel interfaces.

  7. Apply CoS scheduling and shaping to the output traffic at the physical interface and logical interfaces.

  8. If you are done configuring the device, commit the configuration.
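
A hedged sketch of the shaping-related statements follows, abridged to two of the eight forwarding classes and to the tunnel at gr-1/1/10.1. The classifier name gr-inet, the input interface ge-1/1/0.0, and the shaping-rate 8m and guaranteed-rate 3m values are from this example; the scheduler and profile names, transmit rates, code points, and the physical-interface shaping rate are illustrative assumptions:

    set class-of-service forwarding-classes queue 0 be
    set class-of-service forwarding-classes queue 1 ef
    set class-of-service classifiers inet-precedence gr-inet forwarding-class be loss-priority low code-points 000
    set class-of-service classifiers inet-precedence gr-inet forwarding-class ef loss-priority low code-points 101
    set class-of-service interfaces ge-1/1/0 unit 0 classifiers inet-precedence gr-inet
    set class-of-service schedulers sched-be transmit-rate percent 60
    set class-of-service schedulers sched-ef transmit-rate percent 40
    set class-of-service scheduler-maps map-gr-1 forwarding-class be scheduler sched-be
    set class-of-service scheduler-maps map-gr-1 forwarding-class ef scheduler sched-ef
    set class-of-service traffic-control-profiles tcp-gr-1 scheduler-map map-gr-1
    set class-of-service traffic-control-profiles tcp-gr-1 shaping-rate 8m
    set class-of-service traffic-control-profiles tcp-gr-1 guaranteed-rate 3m
    set class-of-service traffic-control-profiles tcp-gr-port shaping-rate 100m
    set class-of-service interfaces gr-1/1/10 output-traffic-control-profile tcp-gr-port
    set class-of-service interfaces gr-1/1/10 unit 1 output-traffic-control-profile tcp-gr-1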

Results

From configuration mode, confirm your configuration by entering the show class-of-service forwarding-classes, show class-of-service classifiers, show class-of-service interfaces ge-1/1/0, show class-of-service schedulers, show class-of-service scheduler-maps, show class-of-service traffic-control-profiles, and show class-of-service interfaces gr-1/1/10 commands. If the output does not display the intended configuration, repeat the instructions in this example to correct the configuration.

Confirm the configuration of output scheduling and shaping at the GRE tunnel physical and logical interfaces.

Verification

Confirm that the configurations are working properly.

Verifying That Scheduling and Shaping Are Attached to the GRE Tunnel Interfaces

Purpose

Verify the association of traffic control profiles with GRE tunnel interfaces.

Action

Verify the traffic control profile attached to the GRE tunnel physical interface by using the show class-of-service interface gr-1/1/10 detail operational mode command.

Meaning

Ingress IPv4 traffic routed to GRE tunnels on the device is subject to CoS output scheduling and shaping.

Verifying That Scheduling and Shaping Are Functioning at the GRE Tunnel Interfaces

Purpose

Verify the traffic rate shaping at the GRE tunnel interfaces.

Action

  1. Pass traffic through the GRE tunnels at logical interfaces gr-1/1/10.1, gr-1/1/10.2, and gr-1/1/10.3.

  2. To verify the rate shaping at each GRE tunnel source, use the show interfaces queue operational mode command.

    The following example command output shows detailed CoS queue statistics for logical interface gr-1/1/10.1 (the GRE tunnel from source IP address 10.70.1.1 to destination IP address 10.70.1.3):

    Note:

    This step shows command output for queue 0 (forwarding class be) and queue 1 (forwarding class ef) only.

Meaning

Now that traffic shaping is attached to the GRE tunnel interfaces, the command output shows that traffic shaping specified for the tunnel at logical interface gr-1/1/10.1 (shaping-rate 8m and guaranteed-rate 3m) is honored.

  • For queue 0, the GRE tunnel device transmits traffic at a rate of 3039 pps. The traffic rate received at the tunnel destination device is 5,008,272 bps.

  • For queue 1, the GRE tunnel device transmits traffic at a rate of 1218 pps. The traffic rate received at the tunnel destination device is 2,007,264 bps.
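
Worked out with the same 206-byte on-the-wire framing as the baseline measurement, the two queues together stay below the 8-Mbps shaping rate while exceeding the 3-Mbps guaranteed rate:

    (182 bytes + 24 bytes) × 8 bits/byte = 1,648 bits per packet
    queue 0: 1,648 × 3,039 pps = 5,008,272 bps
    queue 1: 1,648 × 1,218 pps = 2,007,264 bps
    total:   5,008,272 + 2,007,264 = 7,015,536 bps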

Compare these statistics to the baseline measurements taken without traffic shaping, as described in Measuring GRE Tunnel Transmission Rates Without Shaping Applied.