Scaling Subscribers on the TFA ASIC with QoS

The TFA ASIC on the ES2 10G LM supports a total of 32,000 nodes. However, it requires that each queue be stacked above a node at both level 1 and level 2, and it cannot skip a level in the scheduler hierarchy. The FFA ASIC imposes the same stacking requirement at levels 1 and 2, but because it offers more nodes, the scheduler hierarchy requirement is less visible. The EFA ASIC does not require queues to be stacked above a node at any level.

Because the TFA ASIC cannot skip a level in the hierarchy and provides fewer nodes, scaling subscribers for triple-play configurations can exhaust node resources. For example, the ethernet-default QoS profile specifies both an IP node and a VLAN node, so each IP over VLAN subinterface consumes two nodes. Configuring 16,000 IP over VLAN subinterfaces therefore consumes all 32,000 nodes, leaving no node resources for other traffic-class groups. By carefully configuring queues on the TFA ASIC, you can scale up to 16,000 subscribers across multiple traffic-class groups in a triple-play configuration.

To conserve nodes on the TFA ASIC, apply one of the following configurations:

  • If the configuration includes IP and VLANs, you can configure shapers on the queues to control service throughput. For example, in a triple-play environment with voice, video, and data services, you might want to limit the overall rate of traffic by using a shared shaper.

    At the same time, you might want to individually restrict the maximum rate of each service. To conserve node usage, attach a shaper to the queue for each service, and attach the shared shaper to the best-effort queue. These queues must be at level 3 in the scheduler hierarchy. Typically, aggregation nodes such as S-VLAN nodes are placed at level 2. The VLAN queues feed into the S-VLAN nodes, which in turn feed the level 1 nodes below.

    If you do not create a QoS hierarchy with queues at level 3, the system adds phantom nodes to enforce this requirement. To display the hierarchy that is created for the subscriber on the line module, issue the show qos scheduler-hierarchy command.

  • If the configuration includes S-VLANs, you could configure S-VLAN nodes in the default traffic-class group. Combining S-VLAN and VLAN nodes uses fewer resources than when you combine IP and VLAN nodes. You can also configure additional S-VLAN nodes in other traffic-class groups.
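
    For illustration only, a profile that follows the shaper approach above might be sketched as follows. All profile and scheduler names here are hypothetical, the rates are arbitrary, and the exact rule syntax varies by JunosE release; verify each command against the JunosE Command Reference before use:

    ! Per-service shapers, attached directly to the level 3 queues
    host1(config)#scheduler-profile voice-shaper
    host1(config-scheduler-profile)#shaping-rate 512000
    host1(config-scheduler-profile)#exit
    host1(config)#scheduler-profile video-shaper
    host1(config-scheduler-profile)#shaping-rate 4000000
    host1(config-scheduler-profile)#exit
    ! Shared shaper on the best-effort queue caps the subscriber's overall rate
    host1(config)#scheduler-profile subscriber-cap
    host1(config-scheduler-profile)#shared-shaping-rate 10000000
    host1(config-scheduler-profile)#exit
    host1(config)#qos-profile triple-play
    host1(config-qos-profile)#vlan queue traffic-class voice scheduler-profile voice-shaper
    host1(config-qos-profile)#vlan queue traffic-class video scheduler-profile video-shaper
    host1(config-qos-profile)#vlan queue traffic-class best-effort scheduler-profile subscriber-cap

    Because every shaper here is attached at level 3, no additional nodes are consumed above the queues beyond what the hierarchy already requires.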

In non-default traffic-class groups, you can configure a group node and VLAN queues. Although this configuration appears not to consume nodes, it consumes a hidden phantom node for each queue to satisfy the level requirement of the TFA ASIC.

Alternatively, use group nodes and shadow nodes.

We recommend that you configure an Ethernet shadow node in the group with the following QoS profile rule:

host1(config-qos-profile)#ethernet shadow-node group groupname

This rule stacks another node over the group node, so all VLAN queues are stacked above the single shadow node. No nodes are consumed in the traffic-class group.
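
To see the approach end to end, the sketch below combines the shadow-node rule above with the show qos scheduler-hierarchy verification command mentioned earlier. The group, profile, and interface names are hypothetical, and the exact rule syntax may differ by release; check the JunosE Command Reference for your software version:

! Stack a shadow node over the group node so VLAN queues share it
host1(config)#qos-profile conserve-tcg
host1(config-qos-profile)#ethernet shadow-node group video-tcg
host1(config-qos-profile)#exit
! Attach the profile to the subscriber-facing Ethernet interface
host1(config)#interface gigabitEthernet 2/0/0
host1(config-if)#qos-profile conserve-tcg
host1(config-if)#exit
! Verify that the VLAN queues stack above the single shadow node
host1#show qos scheduler-hierarchy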

 

Related Documentation

  • For more information about system resource requirements for shadow nodes, see Managing System Resources for Shadow Nodes
  • For QoS system maximums, see JunosE Release Notes, Appendix A, System Maximums
  • Monitoring the QoS Profiles Attached to an Interface
 

Published: 2011-03-21

 