Example: Performing Output Scheduling and Shaping in Hierarchical CoS Queues for Traffic Routed to GRE Tunnels
This example shows how to configure a generic routing encapsulation (GRE) tunnel device to perform CoS output scheduling and shaping of IPv4 traffic routed to GRE tunnels. This feature is supported on MX Series routers running Junos OS Release 12.3R4 or later or 13.2R2 or later, with the GRE tunnel interfaces configured on MPC1 Q, MPC2 Q, or MPC2 EQ modules.
Requirements
This example uses the following Juniper Networks hardware and Junos OS software:
Transport network: An IPv4 network running Junos OS Release 13.3.
GRE tunnel device: One MX80 router installed as the ingress provider edge (PE) router.
The input and output logical interfaces are configured on two ports of the built-in 10-Gigabit Ethernet Modular Interface Card (MIC):
Input logical interface ge-1/1/0.0, which receives the traffic to be transported across the network.
Output logical interfaces ge-1/1/1.0, ge-1/1/1.1, and ge-1/1/1.2, which are converted to GRE tunnel source interfaces gr-1/1/10.1, gr-1/1/10.2, and gr-1/1/10.3.
Overview
In this example, you configure the router with input and output logical interfaces for IPv4 traffic and then convert the output logical interfaces to four GRE tunnel source interfaces. You also install static routes in the routing table so that input traffic is routed to the four GRE tunnels.
Before you apply a traffic control profile with a scheduler map and a shaping rate to a GRE tunnel interface, you must configure and commit hierarchical scheduling on the GRE tunnel physical interface, specifying at most two levels of scheduling hierarchy for node scaling.
Configuration
To configure scheduling and shaping in hierarchical CoS queues for traffic routed to GRE tunnel interfaces configured on MPC1 Q, MPC2 Q, or MPC2 EQ modules on MX Series routers, perform these tasks:
CLI Quick Configuration
To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, and then copy and paste the commands into the CLI at the [edit] hierarchy level.
Configuring the Interfaces, Hierarchical Scheduling on the GRE Tunnel Physical Interface, and Static Routes
set chassis fpc 1 pic 1 tunnel-services bandwidth 1g
set interfaces ge-1/1/0 unit 0 family inet address 10.6.6.1/24
set interfaces ge-1/1/1 unit 0 family inet address 10.70.1.1/24 arp 10.70.1.3 mac 00:00:03:00:04:00
set interfaces ge-1/1/1 unit 0 family inet address 10.80.1.1/24 arp 10.80.1.3 mac 00:00:03:00:04:01
set interfaces ge-1/1/1 unit 0 family inet address 10.90.1.1/24 arp 10.90.1.3 mac 00:00:03:00:04:02
set interfaces ge-1/1/1 unit 0 family inet address 10.100.1.1/24 arp 10.100.1.3 mac 00:00:03:00:04:04
set interfaces gr-1/1/10 unit 1 family inet address 10.100.1.1/24
set interfaces gr-1/1/10 unit 1 tunnel source 10.70.1.1 destination 10.70.1.3
set interfaces gr-1/1/10 unit 2 family inet address 10.200.1.1/24
set interfaces gr-1/1/10 unit 2 tunnel source 10.80.1.1 destination 10.80.1.3
set interfaces gr-1/1/10 unit 3 family inet address 10.201.1.1/24
set interfaces gr-1/1/10 unit 3 tunnel source 10.90.1.1 destination 10.90.1.3
set interfaces gr-1/1/10 unit 4 family inet address 10.202.1.1/24
set interfaces gr-1/1/10 unit 4 tunnel source 10.100.1.1 destination 10.100.1.3
set interfaces gr-1/1/10 hierarchical-scheduler
set routing-options static route 10.2.2.0/24 next-hop gr-1/1/10.1
set routing-options static route 10.3.3.0/24 next-hop gr-1/1/10.2
set routing-options static route 10.4.4.0/24 next-hop gr-1/1/10.3
set routing-options static route 10.5.5.0/24 next-hop gr-1/1/10.4
Configuring Output Scheduling and Shaping on the GRE Tunnel Physical and Logical Interfaces
set class-of-service forwarding-classes queue 0 be
set class-of-service forwarding-classes queue 1 ef
set class-of-service forwarding-classes queue 2 af
set class-of-service forwarding-classes queue 3 nc
set class-of-service forwarding-classes queue 4 be1
set class-of-service forwarding-classes queue 5 ef1
set class-of-service forwarding-classes queue 6 af1
set class-of-service forwarding-classes queue 7 nc1
set class-of-service classifiers inet-precedence gr-inet forwarding-class be loss-priority low code-points 000
set class-of-service classifiers inet-precedence gr-inet forwarding-class ef loss-priority low code-points 001
set class-of-service classifiers inet-precedence gr-inet forwarding-class af loss-priority low code-points 010
set class-of-service classifiers inet-precedence gr-inet forwarding-class nc loss-priority low code-points 011
set class-of-service classifiers inet-precedence gr-inet forwarding-class be1 loss-priority low code-points 100
set class-of-service classifiers inet-precedence gr-inet forwarding-class ef1 loss-priority low code-points 101
set class-of-service classifiers inet-precedence gr-inet forwarding-class af1 loss-priority low code-points 110
set class-of-service classifiers inet-precedence gr-inet forwarding-class nc1 loss-priority low code-points 111
set class-of-service interfaces ge-1/1/0 unit 0 classifiers inet-precedence gr-inet
set class-of-service schedulers be_sch transmit-rate percent 30
set class-of-service schedulers ef_sch transmit-rate percent 40
set class-of-service schedulers af_sch transmit-rate percent 25
set class-of-service schedulers nc_sch transmit-rate percent 5
set class-of-service schedulers be1_sch transmit-rate percent 60
set class-of-service schedulers be1_sch priority low
set class-of-service schedulers ef1_sch transmit-rate percent 40
set class-of-service schedulers ef1_sch priority medium-low
set class-of-service schedulers af1_sch transmit-rate percent 10
set class-of-service schedulers af1_sch priority strict-high
set class-of-service schedulers nc1_sch shaping-rate percent 10
set class-of-service schedulers nc1_sch priority high
set class-of-service scheduler-maps sch_map_1 forwarding-class be scheduler be_sch
set class-of-service scheduler-maps sch_map_1 forwarding-class ef scheduler ef_sch
set class-of-service scheduler-maps sch_map_1 forwarding-class af scheduler af_sch
set class-of-service scheduler-maps sch_map_1 forwarding-class nc scheduler nc_sch
set class-of-service scheduler-maps sch_map_2 forwarding-class be scheduler be1_sch
set class-of-service scheduler-maps sch_map_2 forwarding-class ef scheduler ef1_sch
set class-of-service scheduler-maps sch_map_3 forwarding-class af scheduler af_sch
set class-of-service scheduler-maps sch_map_3 forwarding-class nc scheduler nc_sch
set class-of-service traffic-control-profiles gr-ifl-tcp1 scheduler-map sch_map_1
set class-of-service traffic-control-profiles gr-ifl-tcp1 shaping-rate 8m
set class-of-service traffic-control-profiles gr-ifl-tcp1 guaranteed-rate 3m
set class-of-service traffic-control-profiles gr-ifl-tcp2 scheduler-map sch_map_2
set class-of-service traffic-control-profiles gr-ifl-tcp2 guaranteed-rate 2m
set class-of-service traffic-control-profiles gr-ifl-tcp3 scheduler-map sch_map_3
set class-of-service traffic-control-profiles gr-ifl-tcp3 guaranteed-rate 5m
set class-of-service traffic-control-profiles gr-ifd-tcp shaping-rate 10m
set class-of-service traffic-control-profiles gr-ifd-remain shaping-rate 7m
set class-of-service traffic-control-profiles gr-ifd-remain guaranteed-rate 4m
set class-of-service interfaces gr-1/1/10 output-traffic-control-profile gr-ifd-tcp
set class-of-service interfaces gr-1/1/10 output-traffic-control-profile-remaining gr-ifd-remain
set class-of-service interfaces gr-1/1/10 unit 1 output-traffic-control-profile gr-ifl-tcp1
set class-of-service interfaces gr-1/1/10 unit 2 output-traffic-control-profile gr-ifl-tcp2
set class-of-service interfaces gr-1/1/10 unit 3 output-traffic-control-profile gr-ifl-tcp3
Configuring the Interfaces, Hierarchical Scheduling on the GRE Tunnel Physical Interface, and Static Routes
Step-by-Step Procedure
To configure the GRE tunnel interfaces, including enabling hierarchical scheduling, and the static routes:
Configure the amount of bandwidth for tunnel services on the physical interface.
[edit]
user@host# set chassis fpc 1 pic 1 tunnel-services bandwidth 1g
Configure the GRE tunnel device input logical interface.
[edit]
user@host# set interfaces ge-1/1/0 unit 0 family inet address 10.6.6.1/24
Configure the GRE tunnel device output logical interface.
[edit]
user@host# set interfaces ge-1/1/1 unit 0 family inet address 10.70.1.1/24 arp 10.70.1.3 mac 00:00:03:00:04:00
user@host# set interfaces ge-1/1/1 unit 0 family inet address 10.80.1.1/24 arp 10.80.1.3 mac 00:00:03:00:04:01
user@host# set interfaces ge-1/1/1 unit 0 family inet address 10.90.1.1/24 arp 10.90.1.3 mac 00:00:03:00:04:02
user@host# set interfaces ge-1/1/1 unit 0 family inet address 10.100.1.1/24 arp 10.100.1.3 mac 00:00:03:00:04:04
Convert the output logical interfaces to four GRE tunnel interfaces.
[edit]
user@host# set interfaces gr-1/1/10 unit 1 family inet address 10.100.1.1/24
user@host# set interfaces gr-1/1/10 unit 1 tunnel source 10.70.1.1 destination 10.70.1.3
user@host# set interfaces gr-1/1/10 unit 2 family inet address 10.200.1.1/24
user@host# set interfaces gr-1/1/10 unit 2 tunnel source 10.80.1.1 destination 10.80.1.3
user@host# set interfaces gr-1/1/10 unit 3 family inet address 10.201.1.1/24
user@host# set interfaces gr-1/1/10 unit 3 tunnel source 10.90.1.1 destination 10.90.1.3
user@host# set interfaces gr-1/1/10 unit 4 family inet address 10.202.1.1/24
user@host# set interfaces gr-1/1/10 unit 4 tunnel source 10.100.1.1 destination 10.100.1.3
Enable the GRE tunnel interfaces to use hierarchical scheduling.
[edit]
user@host# set interfaces gr-1/1/10 hierarchical-scheduler
Install static routes in the routing table so that the device routes IPv4 traffic to the GRE tunnel source interfaces.
Traffic destined for the subnets 10.2.2.0/24, 10.3.3.0/24, 10.4.4.0/24, and 10.5.5.0/24 is routed to the tunnel interfaces sourced at 10.70.1.1, 10.80.1.1, 10.90.1.1, and 10.100.1.1, respectively.
[edit]
user@host# set routing-options static route 10.2.2.0/24 next-hop gr-1/1/10.1
user@host# set routing-options static route 10.3.3.0/24 next-hop gr-1/1/10.2
user@host# set routing-options static route 10.4.4.0/24 next-hop gr-1/1/10.3
user@host# set routing-options static route 10.5.5.0/24 next-hop gr-1/1/10.4
When you are done configuring the device, commit the configuration.
[edit]
user@host# commit
Results
From configuration mode, confirm your configuration by entering the show chassis fpc 1 pic 1, show interfaces ge-1/1/0, show interfaces ge-1/1/1, show interfaces gr-1/1/10, and show routing-options commands. If the output does not display the intended configuration, repeat the instructions in this example to correct the configuration.
Confirm the configuration of the interfaces, hierarchical scheduling on the GRE tunnel physical interface, and the static routes.
user@host# show chassis fpc 1 pic 1
tunnel-services {
bandwidth 1g;
}
user@host# show interfaces ge-1/1/0
unit 0 {
family inet {
address 10.6.6.1/24;
}
}
user@host# show interfaces ge-1/1/1
unit 0 {
family inet {
address 10.70.1.1/24 {
arp 10.70.1.3 mac 00:00:03:00:04:00;
}
address 10.80.1.1/24 {
arp 10.80.1.3 mac 00:00:03:00:04:01;
}
address 10.90.1.1/24 {
arp 10.90.1.3 mac 00:00:03:00:04:02;
}
address 10.100.1.1/24 {
arp 10.100.1.3 mac 00:00:03:00:04:04;
}
}
}
user@host# show interfaces gr-1/1/10
hierarchical-scheduler;
unit 1 {
tunnel {
destination 10.70.1.3;
source 10.70.1.1;
}
family inet {
address 10.100.1.1/24;
}
}
unit 2 {
tunnel {
destination 10.80.1.3;
source 10.80.1.1;
}
family inet {
address 10.200.1.1/24;
}
}
unit 3 {
tunnel {
destination 10.90.1.3;
source 10.90.1.1;
}
family inet {
address 10.201.1.1/24;
}
}
unit 4 {
tunnel {
destination 10.100.1.3;
source 10.100.1.1;
}
family inet {
address 10.202.1.1/24;
}
}
user@host# show routing-options
static {
route 10.2.2.0/24 next-hop gr-1/1/10.1;
route 10.3.3.0/24 next-hop gr-1/1/10.2;
route 10.4.4.0/24 next-hop gr-1/1/10.3;
route 10.5.5.0/24 next-hop gr-1/1/10.4;
}
Measuring GRE Tunnel Transmission Rates Without Shaping Applied
Step-by-Step Procedure
To establish baseline measurements, note the transmission rate for each GRE tunnel source:
Transmit traffic through the GRE tunnels on logical interfaces gr-1/1/10.1, gr-1/1/10.2, and gr-1/1/10.3.
To display the traffic rate for each GRE tunnel source, use the show interfaces queue operational mode command. The following sample command output displays detailed CoS queue statistics for logical interface gr-1/1/10.1, the GRE tunnel from source IP address 10.70.1.1 to destination IP address 10.70.1.3.
user@host> show interfaces queue gr-1/1/10.1
Logical interface gr-1/1/10.1 (Index 331) (SNMP ifIndex 4045)
Forwarding classes: 16 supported, 8 in use
Egress queues: 8 supported, 8 in use
Burst size: 0
Queue: 0, Forwarding classes: be
  Queued:
    Packets              :             31818312                102494 pps
    Bytes                :           6522753960             168091936 bps
  Transmitted:
    Packets              :              1515307                  4879 pps
    Bytes                :            310637935               8001632 bps
    Tail-dropped packets :             21013826                 68228 pps
    RED-dropped packets  :              9289179                 29387 pps
     Low                 :              9289179                 29387 pps
     Medium-low          :                    0                     0 pps
     Medium-high         :                    0                     0 pps
     High                :                    0                     0 pps
    RED-dropped bytes    :           1904281695              48194816 bps
     Low                 :           1904281695              48194816 bps
     Medium-low          :                    0                     0 bps
     Medium-high         :                    0                     0 bps
     High                :                    0                     0 bps
...
NOTE: This step shows the command output for queue 0 (forwarding class be) only.
The command output shows that the GRE tunnel device transmits traffic from queue 0 at 4879 pps. With each Layer 3 packet at 182 bytes, preceded by 24 bytes of GRE overhead (a 20-byte delivery header consisting of an IPv4 packet header, followed by 4 bytes for the GRE flags plus the encapsulated protocol type), the tunnel destination device receives traffic at 8,040,592 bps:
4879 packets/second x 206 bytes/packet x 8 bits/byte = 8,040,592 bits/second
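If you want to script this check, the following Python sketch simply reproduces the arithmetic above; the packet size, GRE overhead, and packet rate are the values from this example and would differ in your network.

# Compute the rate seen at the GRE tunnel destination from the transmitted
# packet rate reported by "show interfaces queue".

L3_PACKET_BYTES = 182    # Layer 3 packet size used in this example
GRE_OVERHEAD_BYTES = 24  # 20-byte IPv4 delivery header + 4 bytes of GRE flags/protocol type

def tunnel_rate_bps(pps: int) -> int:
    """Bit rate received by the tunnel destination for a given packet rate."""
    return pps * (L3_PACKET_BYTES + GRE_OVERHEAD_BYTES) * 8

print(tunnel_rate_bps(4879))  # 8040592 bits/second for queue 0 (be)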
Configuring Output Scheduling and Shaping on the GRE Tunnel Physical and Logical Interfaces
Step-by-Step Procedure
To configure the GRE tunnel device with scheduling and shaping on the GRE tunnel physical and logical interfaces:
Define eight transmission queues.
[edit]
user@host# set class-of-service forwarding-classes queue 0 be
user@host# set class-of-service forwarding-classes queue 1 ef
user@host# set class-of-service forwarding-classes queue 2 af
user@host# set class-of-service forwarding-classes queue 3 nc
user@host# set class-of-service forwarding-classes queue 4 be1
user@host# set class-of-service forwarding-classes queue 5 ef1
user@host# set class-of-service forwarding-classes queue 6 af1
user@host# set class-of-service forwarding-classes queue 7 nc1
Configure the behavior aggregate (BA) classifier gr-inet, which sets the forwarding class and packet loss priority of incoming packets based on the IPv4 precedence bits set in the packets.
[edit]
user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class be loss-priority low code-points 000
user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class ef loss-priority low code-points 001
user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class af loss-priority low code-points 010
user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class nc loss-priority low code-points 011
user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class be1 loss-priority low code-points 100
user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class ef1 loss-priority low code-points 101
user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class af1 loss-priority low code-points 110
user@host# set class-of-service classifiers inet-precedence gr-inet forwarding-class nc1 loss-priority low code-points 111
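To make the classification logic concrete, here is a minimal Python sketch (an illustration only, not how Junos OS implements BA classification) that maps the three IPv4 precedence bits of a packet's ToS byte to the forwarding classes defined by gr-inet.

# Illustration of the gr-inet BA classifier: the three high-order bits of the
# IPv4 ToS byte (the precedence bits) select one of the eight forwarding classes.

PRECEDENCE_TO_FORWARDING_CLASS = {
    0b000: "be",  0b001: "ef",  0b010: "af",  0b011: "nc",
    0b100: "be1", 0b101: "ef1", 0b110: "af1", 0b111: "nc1",
}

def classify(tos_byte: int) -> str:
    """Return the forwarding class for a packet with the given ToS byte."""
    precedence = (tos_byte >> 5) & 0b111
    return PRECEDENCE_TO_FORWARDING_CLASS[precedence]

print(classify(0x00))  # precedence 000 -> be
print(classify(0xE0))  # precedence 111 -> nc1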
Apply the BA classifier gr-inet to the GRE tunnel device input on logical interface ge-1/1/0.0.
[edit]
user@host# set class-of-service interfaces ge-1/1/0 unit 0 classifiers inet-precedence gr-inet
Define a scheduler for each forwarding class.
[edit]
user@host# set class-of-service schedulers be_sch transmit-rate percent 30
user@host# set class-of-service schedulers ef_sch transmit-rate percent 40
user@host# set class-of-service schedulers af_sch transmit-rate percent 25
user@host# set class-of-service schedulers nc_sch transmit-rate percent 5
user@host# set class-of-service schedulers be1_sch transmit-rate percent 60
user@host# set class-of-service schedulers be1_sch priority low
user@host# set class-of-service schedulers ef1_sch transmit-rate percent 40
user@host# set class-of-service schedulers ef1_sch priority medium-low
user@host# set class-of-service schedulers af1_sch transmit-rate percent 10
user@host# set class-of-service schedulers af1_sch priority strict-high
user@host# set class-of-service schedulers nc1_sch shaping-rate percent 10
user@host# set class-of-service schedulers nc1_sch priority high
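As a rough illustration of how the transmit-rate percentages translate into bandwidth, the following Python sketch computes the per-queue rates implied by sch_map_1 against an assumed reference rate of 8 Mbps (the shaping rate configured for gr-ifl-tcp1 later in this procedure). The reference rate that Junos OS actually uses (shaping rate or guaranteed rate) depends on the platform and configuration, so treat this only as a planning aid.

# Rough illustration only: per-queue bandwidth implied by the sch_map_1
# transmit-rate percentages. The 8-Mbps reference rate is an assumption
# (the shaping rate of gr-ifl-tcp1); Junos OS may base the percentages on
# the guaranteed rate instead, depending on platform and configuration.

REFERENCE_RATE_BPS = 8_000_000

TRANSMIT_PERCENT = {
    "be_sch": 30,   # forwarding class be
    "ef_sch": 40,   # forwarding class ef
    "af_sch": 25,   # forwarding class af
    "nc_sch": 5,    # forwarding class nc
}

for scheduler, percent in TRANSMIT_PERCENT.items():
    rate = REFERENCE_RATE_BPS * percent // 100
    print(f"{scheduler}: {percent}% -> {rate} bps")
# be_sch: 30% -> 2400000 bps
# ef_sch: 40% -> 3200000 bps
# af_sch: 25% -> 2000000 bps
# nc_sch: 5% -> 400000 bps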
Define a scheduler map for each of the three GRE tunnels.
[edit]
user@host# set class-of-service scheduler-maps sch_map_1 forwarding-class be scheduler be_sch
user@host# set class-of-service scheduler-maps sch_map_1 forwarding-class ef scheduler ef_sch
user@host# set class-of-service scheduler-maps sch_map_1 forwarding-class af scheduler af_sch
user@host# set class-of-service scheduler-maps sch_map_1 forwarding-class nc scheduler nc_sch
user@host# set class-of-service scheduler-maps sch_map_2 forwarding-class be scheduler be1_sch
user@host# set class-of-service scheduler-maps sch_map_2 forwarding-class ef scheduler ef1_sch
user@host# set class-of-service scheduler-maps sch_map_3 forwarding-class af scheduler af_sch
user@host# set class-of-service scheduler-maps sch_map_3 forwarding-class nc scheduler nc_sch
Define the traffic control profiles for the three GRE tunnel logical interfaces and for the GRE tunnel physical interface.
[edit]
user@host# set class-of-service traffic-control-profiles gr-ifl-tcp1 scheduler-map sch_map_1
user@host# set class-of-service traffic-control-profiles gr-ifl-tcp1 shaping-rate 8m
user@host# set class-of-service traffic-control-profiles gr-ifl-tcp1 guaranteed-rate 3m
user@host# set class-of-service traffic-control-profiles gr-ifl-tcp2 scheduler-map sch_map_2
user@host# set class-of-service traffic-control-profiles gr-ifl-tcp2 guaranteed-rate 2m
user@host# set class-of-service traffic-control-profiles gr-ifl-tcp3 scheduler-map sch_map_3
user@host# set class-of-service traffic-control-profiles gr-ifl-tcp3 guaranteed-rate 5m
user@host# set class-of-service traffic-control-profiles gr-ifd-tcp shaping-rate 10m
user@host# set class-of-service traffic-control-profiles gr-ifd-remain shaping-rate 7m
user@host# set class-of-service traffic-control-profiles gr-ifd-remain guaranteed-rate 4m
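Before applying the profiles, it can help to tally the configured rates. The following Python sketch is only a rough planning check under the assumption that the sum of the logical-interface guaranteed rates should stay within the shaping rate applied to the GRE physical interface; it is not a Junos OS validation rule, and the values are simply the ones configured above.

# Rough planning check (assumption, not a Junos OS rule): compare the sum of
# the per-unit guaranteed rates with the 10-Mbps shaping rate on gr-1/1/10.

IFD_SHAPING_BPS = 10_000_000          # gr-ifd-tcp shaping-rate 10m

GUARANTEED_BPS = {
    "gr-ifl-tcp1": 3_000_000,         # guaranteed-rate 3m (gr-1/1/10.1)
    "gr-ifl-tcp2": 2_000_000,         # guaranteed-rate 2m (gr-1/1/10.2)
    "gr-ifl-tcp3": 5_000_000,         # guaranteed-rate 5m (gr-1/1/10.3)
}

total_guaranteed = sum(GUARANTEED_BPS.values())
print(f"sum of guaranteed rates: {total_guaranteed} bps")
print(f"physical shaping rate  : {IFD_SHAPING_BPS} bps")
print("fits" if total_guaranteed <= IFD_SHAPING_BPS else "oversubscribed")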
Apply CoS scheduling and shaping to the output traffic on the physical interface and the logical interfaces.
[edit]
user@host# set class-of-service interfaces gr-1/1/10 output-traffic-control-profile gr-ifd-tcp
user@host# set class-of-service interfaces gr-1/1/10 output-traffic-control-profile-remaining gr-ifd-remain
user@host# set class-of-service interfaces gr-1/1/10 unit 1 output-traffic-control-profile gr-ifl-tcp1
user@host# set class-of-service interfaces gr-1/1/10 unit 2 output-traffic-control-profile gr-ifl-tcp2
user@host# set class-of-service interfaces gr-1/1/10 unit 3 output-traffic-control-profile gr-ifl-tcp3
When you are done configuring the device, commit the configuration.
[edit]
user@host# commit
Results
From configuration mode, confirm your configuration by entering the show class-of-service forwarding-classes, show class-of-service classifiers, show class-of-service interfaces ge-1/1/0, show class-of-service schedulers, show class-of-service scheduler-maps, show class-of-service traffic-control-profiles, and show class-of-service interfaces gr-1/1/10 commands. If the output does not display the intended configuration, repeat the instructions in this example to correct the configuration.
Confirm the configuration of output scheduling and shaping on the GRE tunnel physical and logical interfaces.
user@host# show class-of-service forwarding-classes
queue 0 be;
queue 1 ef;
queue 2 af;
queue 3 nc;
queue 4 be1;
queue 5 ef1;
queue 6 af1;
queue 7 nc1;
user@host# show class-of-service classifiers
inet-precedence gr-inet {
forwarding-class be {
loss-priority low code-points 000;
}
forwarding-class ef {
loss-priority low code-points 001;
}
forwarding-class af {
loss-priority low code-points 010;
}
forwarding-class nc {
loss-priority low code-points 011;
}
forwarding-class be1 {
loss-priority low code-points 100;
}
forwarding-class ef1 {
loss-priority low code-points 101;
}
forwarding-class af1 {
loss-priority low code-points 110;
}
forwarding-class nc1 {
loss-priority low code-points 111;
}
}
user@host# show class-of-service interfaces ge-1/1/0
unit 0 {
classifiers {
inet-precedence gr-inet;
}
}
user@host# show class-of-service schedulers
be_sch {
transmit-rate percent 30;
}
ef_sch {
transmit-rate percent 40;
}
af_sch {
transmit-rate percent 25;
}
nc_sch {
transmit-rate percent 5;
}
be1_sch {
transmit-rate percent 60;
priority low;
}
ef1_sch {
transmit-rate percent 40;
priority medium-low;
}
af1_sch {
transmit-rate percent 10;
priority strict-high;
}
nc1_sch {
shaping-rate percent 10;
priority high;
}
user@host# show class-of-service scheduler-maps
sch_map_1 {
forwarding-class be scheduler be_sch;
forwarding-class ef scheduler ef_sch;
forwarding-class af scheduler af_sch;
forwarding-class nc scheduler nc_sch;
}
sch_map_2 {
forwarding-class be scheduler be1_sch;
forwarding-class ef scheduler ef1_sch;
}
sch_map_3 {
forwarding-class af scheduler af_sch;
forwarding-class nc scheduler nc_sch;
}
user@host# show class-of-service traffic-control-profiles
gr-ifl-tcp1 {
scheduler-map sch_map_1;
shaping-rate 8m;
guaranteed-rate 3m;
}
gr-ifl-tcp2 {
scheduler-map sch_map_2;
guaranteed-rate 2m;
}
gr-ifl-tcp3 {
scheduler-map sch_map_3;
guaranteed-rate 5m;
}
gr-ifd-remain {
shaping-rate 7m;
guaranteed-rate 4m;
}
gr-ifd-tcp {
shaping-rate 10m;
}
user@host# show class-of-service interfaces gr-1/1/10
gr-1/1/10 {
output-traffic-control-profile gr-ifd-tcp;
output-traffic-control-profile-remaining gr-ifd-remain;
unit 1 {
output-traffic-control-profile gr-ifl-tcp1;
}
unit 2 {
output-traffic-control-profile gr-ifl-tcp2;
}
unit 3 {
output-traffic-control-profile gr-ifl-tcp3;
}
}
Verification
Confirm that the configuration is working properly.
Verifying That Scheduling and Shaping Are Attached to the GRE Tunnel Interfaces
Purpose
Verify the association of the traffic control profiles with the GRE tunnel interfaces.
Action
Use the show class-of-service interface gr-1/1/10 detail operational mode command to verify the traffic control profiles attached to the GRE tunnel physical interface.
user@host> show class-of-service interface gr-1/1/10 detail
Physical interface: gr-1/1/10, Enabled, Physical link is Up
  Type: GRE, Link-level type: GRE, MTU: Unlimited, Speed: 1000mbps
  Device flags   : Present Running
  Interface flags: Point-To-Point SNMP-Traps
Physical interface: gr-1/1/10, Index: 220
Queues supported: 8, Queues in use: 8
  Output traffic control profile: gr-ifd-tcp, Index: 17721
  Output traffic control profile remaining: gr-ifd-remain, Index: 58414
  Congestion-notification: Disabled

  Logical interface gr-1/1/10.1
    Flags: Point-To-Point SNMP-Traps 0x4000 IP-Header 10.70.1.3:10.70.1.1:47:df:64:0000000000000000 Encapsulation: GRE-NULL
    Gre keepalives configured: Off, Gre keepalives adjacency state: down
    inet 10.100.1.1/24
  Logical interface: gr-1/1/10.1, Index: 331
    Object                   Name                   Type           Index
    Traffic-control-profile  gr-ifl-tcp1            Output         17849
    Classifier               ipprec-compatibility   ip                13

  Logical interface gr-1/1/10.2
    Flags: Point-To-Point SNMP-Traps 0x4000 IP-Header 10.80.1.3:10.80.1.1:47:df:64:0000000000000000 Encapsulation: GRE-NULL
    Gre keepalives configured: Off, Gre keepalives adjacency state: down
    inet 10.200.1.1/24
  Logical interface: gr-1/1/10.2, Index: 332
    Object                   Name                   Type           Index
    Traffic-control-profile  gr-ifl-tcp2            Output         17856
    Classifier               ipprec-compatibility   ip                13

  Logical interface gr-1/1/10.3
    Flags: Point-To-Point SNMP-Traps 0x4000 IP-Header 10.90.1.3:10.90.1.1:47:df:64:0000000000000000 Encapsulation: GRE-NULL
    Gre keepalives configured: Off, Gre keepalives adjacency state: down
    inet 10.201.1.1/24
  Logical interface: gr-1/1/10.3, Index: 333
    Object                   Name                   Type           Index
    Traffic-control-profile  gr-ifl-tcp3            Output         17863
    Classifier               ipprec-compatibility   ip                13
Meaning
Ingress IPv4 traffic routed to the GRE tunnels on the device is subject to CoS output scheduling and shaping.
Verifying That Scheduling and Shaping Are Operating on the GRE Tunnel Interfaces
Purpose
Verify the shaping of traffic rates on the GRE tunnel interfaces.
Action
Transmit traffic through the GRE tunnels on logical interfaces gr-1/1/10.1, gr-1/1/10.2, and gr-1/1/10.3.
To verify the rate shaping for each GRE tunnel source, use the show interfaces queue operational mode command. The following sample command output displays detailed CoS queue statistics for logical interface gr-1/1/10.1, the GRE tunnel from source IP address 10.70.1.1 to destination IP address 10.70.1.3:
user@host> show interfaces queue gr-1/1/10.1
Logical interface gr-1/1/10.1 (Index 331) (SNMP ifIndex 4045)
Forwarding classes: 16 supported, 8 in use
Egress queues: 8 supported, 8 in use
Burst size: 0
Queue: 0, Forwarding classes: be
  Queued:
    Packets              :             59613061                 51294 pps
    Bytes                :          12220677505              84125792 bps
  Transmitted:
    Packets              :              2230632                  3039 pps
    Bytes                :            457279560               4985440 bps
    Tail-dropped packets :              4471146                  2202 pps
    RED-dropped packets  :             52911283                 46053 pps
     Low                 :             49602496                 46053 pps
     Medium-low          :                    0                     0 pps
     Medium-high         :                    0                     0 pps
     High                :              3308787                     0 pps
    RED-dropped bytes    :          10846813015              75528000 bps
     Low                 :          10168511680              75528000 bps
     Medium-low          :                    0                     0 bps
     Medium-high         :                    0                     0 bps
     High                :            678301335                     0 bps
Queue: 1, Forwarding classes: ef
  Queued:
    Packets              :             15344874                 51295 pps
    Bytes                :           3145699170              84125760 bps
  Transmitted:
    Packets              :               366115                  1218 pps
    Bytes                :             75053575               1997792 bps
    Tail-dropped packets :               364489                  1132 pps
    RED-dropped packets  :             14614270                 48945 pps
     Low                 :             14614270                 48945 pps
     Medium-low          :                    0                     0 pps
     Medium-high         :                    0                     0 pps
     High                :                    0                     0 pps
    RED-dropped bytes    :           2995925350              80270528 bps
     Low                 :           2995925350              80270528 bps
     Medium-low          :                    0                     0 bps
     Medium-high         :                    0                     0 bps
     High                :                    0                     0 bps
...
NOTE: This step shows the command output for queue 0 (forwarding class be) and queue 1 (forwarding class ef) only.
Meaning
Now that traffic shaping is attached to the GRE tunnel interfaces, the command output shows that the traffic conforms to the shaping specified for the tunnel on logical interface gr-1/1/10.1 (shaping-rate 8m and guaranteed-rate 3m).
For queue 0, the GRE tunnel device transmits traffic at 3039 pps, and the tunnel destination device receives traffic at 5,008,272 bps:
3039 packets/second x 206 bytes/packet x 8 bits/byte = 5,008,272 bits/second
For queue 1, the GRE tunnel device transmits traffic at 1218 pps, and the tunnel destination device receives traffic at 2,007,264 bps:
1218 packets/second x 206 bytes/packet x 8 bits/byte = 2,007,264 bits/second
Compare these statistics with the baseline measurements taken without traffic shaping, as described in Measuring GRE Tunnel Transmission Rates Without Shaping Applied.
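The same arithmetic can also be scripted for the comparison. The following Python sketch recomputes the queue rates from the sample outputs in this example and checks their sum against the 8-Mbps shaping rate configured in gr-ifl-tcp1.

# Compare the shaped transmit rates on gr-1/1/10.1 with the configured
# shaping rate and with the unshaped baseline measured earlier.

BYTES_ON_WIRE = 206           # 182-byte Layer 3 packet + 24 bytes of GRE overhead
SHAPING_RATE_BPS = 8_000_000  # gr-ifl-tcp1 shaping-rate 8m

def rate_bps(pps: int) -> int:
    return pps * BYTES_ON_WIRE * 8

shaped_total = rate_bps(3039) + rate_bps(1218)  # queue 0 (be) + queue 1 (ef)
baseline_q0 = rate_bps(4879)                    # queue 0 before shaping was applied

print(f"shaped total    : {shaped_total} bps")  # 7015536 bps, under the 8-Mbps shaping rate
print(f"baseline queue 0: {baseline_q0} bps")   # 8040592 bps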