
Layer 2 Forwarding Tables

 

Layer 2 Learning and Forwarding for VLANs Overview

Understanding Layer 2 Forwarding Tables on Switches, Routers and NFX Series Devices

You can configure Layer 2 MAC address and VLAN learning and forwarding properties in support of Layer 2 bridging. Unicast media access control (MAC) addresses are learned to avoid flooding packets to all the ports in a VLAN. For each source MAC address learned from packets received on ports that belong to the VLAN, an entry is created in the VLAN's source and destination MAC tables.

When you configure a VLAN, Layer 2 address learning is enabled by default. The VLAN learns unicast media access control (MAC) addresses to avoid flooding the packets to all the ports in the VLAN. Each VLAN creates a source MAC entry in its source and destination MAC tables for each source MAC address learned from packets received on the ports that belong to the VLAN.

Note

Traffic is not flooded back onto the interface on which it was received. However, because this “split horizon” occurs at a late stage, the packet statistics displayed by commands such as show interfaces queue will include flood traffic.

You can optionally disable MAC learning either for the entire device or for a specific VLAN or logical interface. You can also configure the following Layer 2 learning and forwarding properties:

  • Timeout interval for MAC entries

  • Static MAC entries for logical interfaces only

  • Limit to the number of MAC addresses learned from a specific logical interface or from all the logical interfaces in a VLAN

  • Size of the MAC address table for the VLAN

  • MAC accounting for a VLAN
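As a rough sketch, the properties above map onto configuration statements similar to the following. The VLAN name, interface, and values here are hypothetical, and the exact statement names and hierarchy vary by platform and Junos OS release, so verify them against the documentation for your device:

```
[edit]
# Disable MAC learning for one VLAN (hypothetical VLAN name v100)
set vlans v100 switch-options no-mac-learning
# Limit the number of MAC addresses learned on one logical interface
set vlans v100 switch-options interface xe-0/0/1.0 interface-mac-limit 100
# Add a static MAC entry for a logical interface
set vlans v100 switch-options interface xe-0/0/1.0 static-mac 00:11:22:33:44:55
# Set the size of the VLAN's MAC address table
set vlans v100 switch-options mac-table-size 2048
# Enable MAC accounting (statistics) for the VLAN
set vlans v100 switch-options mac-statistics
```

After committing, per-VLAN learning state can typically be inspected with show ethernet-switching table.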

Understanding Layer 2 Forwarding Tables on Security Devices

The SRX Series device maintains forwarding tables that contain MAC addresses and associated interfaces for each Layer 2 VLAN. When a packet arrives with a new source MAC address in its frame header, the device adds the MAC address to its forwarding table and tracks the interface at which the packet arrived. The table also contains the corresponding interface through which the device can forward traffic for a particular MAC address.

If the destination MAC address of a packet is unknown to the device (that is, the destination MAC address in the packet does not have an entry in the forwarding table), the device duplicates the packet and floods it on all interfaces in the VLAN other than the interface on which the packet arrived. This is known as packet flooding and is the default behavior for the device to determine the outgoing interface for an unknown destination MAC address. Packet flooding is performed at two levels: packets are flooded to different zones as permitted by configured Layer 2 security policies, and packets are also flooded to different interfaces with the same VLAN identifier within the same zone. The device learns the forwarding interface for the MAC address when a reply with that MAC address arrives at one of its interfaces.

You can specify that the SRX Series device use ARP queries and traceroute requests (which are ICMP echo requests with the time-to-live values set to 1) instead of packet flooding to locate an unknown destination MAC address. This method is considered more secure than packet flooding because the device floods ARP queries and traceroute packets—not the initial packet—on all interfaces. When ARP or traceroute flooding is used, the original packet is dropped. The device broadcasts an ARP or ICMP query to all other devices on the same subnetwork, requesting the device at the specified destination IP address to send back a reply. Only the device with the specified IP address replies, which provides the requestor with the MAC address of the responder.

ARP allows the device to discover the destination MAC address for a unicast packet if the destination IP address is in the same subnetwork as the ingress IP address. (The ingress IP address refers to the IP address of the last device to send the packet to the device. The device might be the source that sent the packet or a router forwarding the packet.) Traceroute allows the device to discover the destination MAC address even if the destination IP address belongs to a device in a subnetwork beyond that of the ingress IP address.

When you enable ARP queries to locate an unknown destination MAC address, traceroute requests are also enabled. You can also optionally specify that traceroute requests not be used; however, the device can then discover destination MAC addresses for unicast packets only if the destination IP address is in the same subnetwork as the ingress IP address.

Whether you enable ARP queries and traceroute requests or ARP-only queries to locate unknown destination MAC addresses, the SRX Series device performs the following series of actions:

  1. The device notes the destination MAC address in the initial packet. The device adds the source MAC address and its corresponding interface to its forwarding table, if they are not already there.
  2. The device drops the initial packet.
  3. The device generates an ARP query packet and optionally a traceroute packet and floods those packets out all interfaces except the interface on which the initial packet arrived.

    ARP packets are sent out with the following field values:

    • Source IP address set to the IP address of the IRB

    • Destination IP address set to the destination IP address of the original packet

    • Source MAC address set to the MAC address of the IRB

    • Destination MAC address set to the broadcast MAC address (ff:ff:ff:ff:ff:ff, all ones)

    Traceroute (ICMP echo request or ping) packets are sent out with the following field values:

    • Source IP address set to the IP address of the original packet

    • Destination IP address set to the destination IP address of the original packet

    • Source MAC address set to the source MAC address of the original packet

    • Destination MAC address set to the destination MAC address of the original packet

    • Time-to-live (TTL) set to 1

  4. Combining the destination MAC address from the initial packet with the interface leading to that MAC address, the device adds a new entry to its forwarding table.
  5. The device forwards all subsequent packets it receives for the destination MAC address out the correct interface to the destination.
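On SRX Series devices in transparent (Layer 2) mode, this behavior is typically selected with statements along the following lines. This is a sketch based on the legacy flow bridge hierarchy; confirm the exact statements for your release:

```
[edit]
# Use ARP queries and traceroute requests, instead of flooding the
# initial packet, to locate unknown destination MAC addresses:
set security flow bridge no-packet-flooding
# Or use ARP queries only (disable traceroute requests), which limits
# discovery to destinations in the same subnetwork:
set security flow bridge no-packet-flooding no-trace-route
```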

Layer 2 Learning and Forwarding for VLANs Acting as a Switch for a Layer 2 Trunk Port

Layer 2 learning is enabled by default. A set of VLANs, configured to function as a switch with a Layer 2 trunk port, learns unicast media access control (MAC) addresses to avoid flooding packets to the trunk port.

Note

Traffic is not flooded back onto the interface on which it was received. However, because this “split horizon” occurs at a late stage, the packet statistics displayed by commands such as show interfaces queue will include flood traffic.

You can optionally disable Layer 2 learning for the entire set of VLANs as well as modify the following Layer 2 learning and forwarding properties:

  • Limit the number of MAC addresses learned from the Layer 2 trunk port associated with the set of VLANs

  • Modify the size of the MAC address table for the set of VLANs

  • Enable MAC accounting for the set of VLANs

Understanding the Unified Forwarding Table

Benefits of Unified Forwarding Tables

Traditionally, forwarding tables have been statically defined and have supported only a fixed number of entries for each type of address. The unified forwarding table provides the following benefits:

  • Enables you to allocate forwarding table resources to optimize the memory available for different address types based on the needs of your network.

  • Enables you to allocate a higher percentage of memory for one type of address or another.

Using the Unified Forwarding Table to Optimize Address Storage

On the QFX5100, EX4600, EX4650, QFX5110, QFX5200, and QFX5120 switches, you can control the allocation of forwarding table memory available to store the following:

  • MAC addresses—In a Layer 2 environment, the switch learns new MAC addresses and stores them in a MAC address table.

  • Layer 3 host entries—In a Layer 2 and Layer 3 environment, the switch learns which IP addresses are mapped to which MAC addresses; these key-value pairs are stored in the Layer 3 host table.

  • Longest prefix match (LPM) table entries—In a Layer 3 environment, the switch has a routing table and the most specific route has an entry in the forwarding table to associate a prefix or netmask to a next hop. Note, however, that all IPv4 /32 prefixes and IPv6 /128 prefixes are stored in the Layer 3 host table.

UFT essentially combines the three distinct forwarding tables to create one table with flexible resource allocation. You can select one of five forwarding table profiles that best meets your network needs. Each profile is configured with different maximum values for each type of address. For example, for a switch that handles a great deal of Layer 2 traffic, such as a virtualized network with many servers and virtual machines, you would likely choose a profile that allocates a higher percentage of memory to MAC addresses. For a switch that operates in the core of a network or participates in an IP fabric, you probably want to maximize the number of routing table entries it can store. In this case, you would choose a profile that allocates a higher percentage of memory to longest match prefixes. The QFX5200 switch supports a custom profile that allows you to partition the four available shared memory banks with a total of 128,000 entries among MAC addresses, Layer 3 host addresses, and LPM prefixes.
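A profile is selected with a single statement under the chassis hierarchy. The two profile choices below are illustrative alternatives, not a sequence (only one profile can be active); verify the statement names against your release:

```
[edit]
# For a Layer 2-heavy (virtualized) switch, favor MAC address entries:
set chassis forwarding-options l2-profile-one
# Or, for a core/IP-fabric switch, favor longest-prefix-match routes:
set chassis forwarding-options lpm-profile
```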

Note

Support for QFX5200 switches was introduced in Junos OS Release 15.1X53-D30. The QFX5200 switch is not supported on Junos OS Release 16.1R1.

Understanding the Allocation of MAC Addresses and Host Addresses

All five profiles are supported, each of which allocates different amounts of memory for Layer 2 or Layer 3 entries, enabling you to choose the one that best suits the needs of your network. The QFX5200 and QFX5210 switches, however, support different maximum values for each profile than the other switches do. For more information about the custom profile, see Configuring the Unified Forwarding Table on Switches.

Note

The default profile is l2-profile-three, which allocates equal space for MAC addresses and Layer 3 host addresses. On QFX5100, EX4600, QFX5110, and QFX5200 switches, the space is equal to 16,000 IPv4 entries for the LPM table, and on QFX5210 switches, the space is equal to 32,000 IPv4 entries for the LPM table. For the lpm-profile, the LPM table size is equal to 256,000 IPv4 entries.

Note

Starting with Junos OS Release 18.1R1 on the QFX5210-64C switch, for all these profiles except the lpm-profile, the longest prefix match (LPM) table size is equal to 32,000 IPv4 entries.

Note

Starting with Junos OS Release 18.3R1 on the QFX5120 and EX4650 switches, for all these profiles except the lpm-profile, the longest prefix match (LPM) table size is equal to 32,000 IPv4 entries.

Note

On QFX5100, EX4600, EX4650, QFX5110, QFX5200, QFX5120, and QFX5210-64C switches, IPv4 and IPv6 host routes with ECMP next hops are stored in the host table.

Best Practice

If the host or LPM table stores the maximum number of entries for any given type of entry, the entire shared table is full and is unable to accommodate any entries of any other type. Different entry types occupy different amounts of memory. For example, an IPv6 unicast address occupies twice as much memory as an IPv4 unicast address, and an IPv6 multicast address occupies four times as much memory as an IPv4 unicast address.

Table 1 lists the profiles you can choose and the associated maximum values for the MAC address and host table entries on QFX5100 and EX4600 switches.

Table 1: Unified Forwarding Table Profiles on QFX5100 and EX4600 Switches

                                       | MAC Table     | Host Table (unicast and multicast addresses)
Profile Name                           | MAC Addresses | IPv4 unicast | IPv6 unicast | IPv4 (*, G) | IPv4 (S, G) | IPv6 (*, G) | IPv6 (S, G)
l2-profile-one                         | 288K          | 16K          | 8K           | 8K          | 8K          | 4K          | 4K
l2-profile-two                         | 224K          | 80K          | 40K          | 40K         | 40K         | 20K         | 20K
l2-profile-three (default)             | 160K          | 144K         | 72K          | 72K         | 72K         | 36K         | 36K
l3-profile                             | 96K           | 208K         | 104K         | 104K        | 104K        | 52K         | 52K
lpm-profile                            | 32K           | 16K          | 8K           | 8K          | 8K          | 4K          | 4K
lpm-profile with unicast-in-lpm option | 32K           | (stored in LPM table) | (stored in LPM table) | 8K | 8K  | 4K          | 4K

Table 2 lists the profiles you can choose and the associated maximum values for the MAC address and host table entries on QFX5110 switches.

Table 2: Unified Forwarding Table Profiles on QFX5110 Switches

                           | MAC Table     | Host Table (unicast and multicast addresses)
Profile Name               | MAC Addresses | IPv4 unicast | IPv6 unicast | IPv4 (*, G) | IPv4 (S, G) | IPv6 (*, G) | IPv6 (S, G)
l2-profile-one             | 288K          | 16K          | 8K           | 8K          | 8K          | 4K          | 4K
l2-profile-two             | 224K          | 80K          | 40K          | 40K         | 40K         | 20K         | 20K
l2-profile-three (default) | 160K          | 144K         | 72K          | 72K         | 72K         | 36K         | 36K
l3-profile                 | 96K           | 208K         | 104K         | 104K        | 104K        | 52K         | 52K

Table 3 lists the LPM table size variations for the QFX5110 switch depending on the prefix entries.

Table 3: LPM Table Size Variations on QFX5110 Switches

num-65-127-prefix | IPv4 LPM <= /32 | IPv6 LPM <= /64 | IPv6 LPM > /64
0                 | 16K             | 8K              | 0K
1                 | 12K             | 6K              | 1K
2                 | 8K              | 4K              | 2K
3                 | 4K              | 2K              | 3K
4                 | 0K              | 0K              | 4K

Table 4 lists the profiles you can choose and the associated maximum values for the MAC address and host table entries on QFX5200-32C switches.

Table 4: Unified Forwarding Table Profiles on QFX5200-32C Switches

                           | MAC Table     | Host Table (unicast and multicast addresses)
Profile Name               | MAC Addresses | IPv4 unicast | IPv6 unicast | IPv4 (*, G) | IPv4 (S, G) | IPv6 (*, G) | IPv6 (S, G) | Exact-Match
l2-profile-one             | 136K          | 8K           | 4K           | 4K          | 4K          | 2K          | 2K          | 0
l2-profile-two             | 104K          | 40K          | 20K          | 20K         | 20K         | 10K         | 10K         | 0
l2-profile-three (default) | 72K           | 72K          | 36K          | 36K         | 36K         | 18K         | 18K         | 0
l3-profile                 | 40K           | 104K         | 52K          | 52K         | 52K         | 26K         | 26K         | 0
lpm-profile                | 8K            | 8K           | 4K           | 4K          | 4K          | 2K          | 2K          | 0

Table 5 lists the profiles you can choose and the associated maximum values for the MAC address and host table entries on QFX5200-48Y switches.

Table 5: Unified Forwarding Table Profiles on QFX5200-48Y Switches

                           | MAC Table     | Host Table (unicast and multicast addresses)
Profile Name               | MAC Addresses | IPv4 unicast | IPv6 unicast | IPv4 (*, G) | IPv4 (S, G) | IPv6 (*, G) | IPv6 (S, G)
l2-profile-one             | 136K          | 8K           | 4K           | 4K          | 4K          | 2K          | 2K
l2-profile-two             | 104K          | 40K          | 20K          | 20K         | 20K         | 10K         | 10K
l2-profile-three (default) | 72K           | 72K          | 36K          | 36K         | 36K         | 18K         | 18K
l3-profile                 | 40K           | 104K         | 52K          | 52K         | 52K         | 26K         | 26K
lpm-profile                | 8K            | 8K           | 4K           | 4K          | 4K          | 2K          | 2K

Table 6 lists the LPM table size variations for the QFX5200-48Y switch depending on the prefix entries.

Table 6: LPM Table Size Variations on QFX5200-48Y Switches

num-65-127-prefix | IPv4 LPM <= /32 | IPv6 LPM <= /64 | IPv6 LPM > /64
0                 | 16K             | 8K              | 0K
1                 | 12K             | 6K              | 1K
2                 | 8K              | 4K              | 2K
3                 | 4K              | 2K              | 3K
4                 | 0K              | 0K              | 4K

Table 7 lists the profiles you can choose and the associated maximum values for the MAC address and host table entries on QFX5210-64C switches.

Table 7: Unified Forwarding Table Profiles on QFX5210-64C Switches

                           | MAC Table     | Host Table (unicast and multicast addresses)
Profile Name               | MAC Addresses | IPv4 unicast | IPv6 unicast | IPv4 (*, G) | IPv4 (S, G) | IPv6 (*, G) | IPv6 (S, G) | Exact Match
l2-profile-one             | 264K          | 8K           | 4K           | 4K          | 4K          | 2K          | 2K          | 0K
l2-profile-two             | 200K          | 72K          | 36K          | 36K         | 36K         | 18K         | 18K         | 0K
l2-profile-three (default) | 136K          | 136K         | 72K          | 72K         | 72K         | 36K         | 36K         | 0K
l3-profile                 | 72K           | 200K         | 100K         | 100K        | 100K        | 50K         | 50K         | 0K

Table 8 lists the profiles you can choose and the associated maximum values for the MAC address and host table entries on QFX5120 and EX4650 switches.

Table 8: Unified Forwarding Table Profiles on QFX5120 and EX4650 Switches

                           | MAC Table     | Host Table (unicast and multicast addresses)
Profile Name               | MAC Addresses | IPv4 unicast | IPv6 unicast | IPv4 (*, G) | IPv4 (S, G) | IPv6 (*, G) | IPv6 (S, G)
l2-profile-one             | 288K          | 16K          | 8K           | 8K          | 8K          | 4K          | 4K
l2-profile-two             | 224K          | 80K          | 40K          | 40K         | 40K         | 20K         | 20K
l2-profile-three (default) | 160K          | 144K         | 72K          | 72K         | 72K         | 36K         | 36K
l3-profile                 | 96K           | 208K         | 104K         | 104K        | 104K        | 52K         | 52K

Table 9 lists the LPM table size variations for the QFX5210-64C switch depending on the prefix entries.

Table 9: LPM Table Size Variations on QFX5210-64C Switches

num-65-127-prefix | IPv4 LPM <= /32 | IPv6 LPM <= /64 | IPv6 LPM > /64
0                 | 32K             | 16K             | 0K
1                 | 28K             | 14K             | 1K
2                 | 24K             | 12K             | 2K
3                 | 20K             | 10K             | 3K
4                 | 0K              | 0K              | 4K

Table 10 lists the Layer 3 LPM (Defip) table size variations for the QFX5120 and EX4650 switches depending on the configured num-65-127-prefix value.

Table 10: LPM Table Size Variations on QFX5120 and EX4650 Switches

num-65-127-prefix | IPv4 LPM <= /32 | IPv6 LPM <= /64 | IPv6 LPM > /64
0                 | 32K             | 16K             | 0K
2                 | 24K             | 12K             | 2K
4                 | 16K             | 8K              | 4K
6                 | 8K              | 4K              | 6K
8                 | 0K              | 0K              | 8K

Understanding Ternary Content Addressable Memory (TCAM) and Longest Prefix Match Entries

You can further customize non-LPM profiles by configuring the space available in ternary content addressable memory (TCAM) to allocate more memory for longest prefix match entries. You can change the number of entries allocated to IPv6 prefixes with lengths of /65 through /127, essentially allocating more or less space for LPM IPv4 entries with any prefix length or IPv6 entries with prefix lengths of /64 or shorter. For more information about how to change the default parameters of the TCAM memory space for LPM entries, see Configuring the Unified Forwarding Table on Switches.

Note

The option to adjust TCAM space is not supported on the longest prefix match (LPM) or custom profiles. However, for the LPM profile, you can configure TCAM space not to allocate any memory for IPv6 entries with prefix lengths of 65 or longer, thereby allocating that memory space only for IPv4 routes or IP routes with prefix lengths equal to or less than 64 or a combination of the two types of prefixes.
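For the lpm-profile case described in the note above, the relevant statement is sketched below. The statement name is taken from the QFX forwarding-options hierarchy as I understand it; confirm it against your release's documentation:

```
[edit]
# With lpm-profile, allocate no TCAM memory to IPv6 prefixes with
# lengths of /65 through /127, leaving that space for IPv4 routes
# and IPv6 prefixes of /64 or shorter:
set chassis forwarding-options lpm-profile prefix-65-127-disable
```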

Note

Starting with Junos OS Release 18.1R1 on QFX5210 switches, you can configure TCAM space to allocate a maximum of 8,000 IPv6 entries with prefix lengths of 65 or longer. The default value is 2,000 entries. Starting with Junos OS Release 13.2X51-D15, you can configure TCAM space to allocate a maximum of 4,000 IPv6 entries with prefix lengths of 65 or longer. The default value is 1,000 entries. Prior to Junos OS Release 13.2X51-D15, you could allocate only a maximum of 2,048 entries for IPv6 prefixes with lengths in the range /65 through /127. The default value was 16 entries for these types of IPv6 prefixes.

On Junos OS Releases 13.2X51-D10 and 13.2X52-D10, the procedure to change the default value of 16 entries differs from later releases, where the maximum and default values are higher. For more information about that procedure, see Configuring the Unified Forwarding Table on Switches.

Host Table Example for Profile with Heavy Layer 2 Traffic

Table 11 lists various valid combinations that the host table can store if you use the l2-profile-one profile on QFX5100 and EX4600 switches. This profile allocates the highest percentage of memory to Layer 2 addresses. Note that the default values might be different on other switches. Each row in the table represents a case in which the host table is full and cannot accommodate any more entries.

Table 11: Example Host Table Combinations Using l2-profile-one on QFX5100 and EX4600 Switches

IPv4 unicast | IPv6 unicast | IPv4 multicast (*, G) | IPv4 multicast (S, G) | IPv6 multicast (*, G) | IPv6 multicast (S, G)
16K          | 0            | 0                     | 0                     | 0                     | 0
12K          | 2K           | 0                     | 0                     | 0                     | 0
12K          | 0            | 2K                    | 2K                    | 0                     | 0
8K           | 4K           | 0                     | 0                     | 0                     | 0
4K           | 2K           | 2K                    | 2K                    | 0                     | 0
0            | 4K           | 0                     | 0                     | 1K                    | 1K

Example: Configuring a Unified Forwarding Table Custom Profile

Traditionally, forwarding tables have been statically defined and have supported only a fixed number of entries for each type of address. The Unified Forwarding Table (UFT) feature enables you to optimize how forwarding-table memory is allocated to best suit the needs of your network. This example shows how to configure a Unified Forwarding Table profile that enables you to partition four shared hash memory banks among three different types of forwarding-table entries: MAC addresses, Layer 3 host addresses, and longest prefix match (LPM).

The UFT feature also supports five profiles that each allocate a specific maximum amount of memory for each type of forwarding table entry. Some profiles allocate more memory to Layer 2 entries, while other profiles allocate more memory to Layer 3 or LPM entries. The maximum values for each type of entry are fixed in these profiles. With the custom profile, you can designate one or more shared memory banks to store a specific type of forwarding-table entry. You can configure as few as one or as many as four memory banks in a custom profile. The custom profile thus provides even more flexibility in enabling you to allocate forwarding-table memory for specific types of entries.

Requirements

This example uses the following hardware and software components:

  • One QFX5200 switch

  • Junos OS Release 15.1X53-D30 or later

Before you configure a custom profile, be sure you have:

  • Configured interfaces

Overview

The Unified Forwarding Table custom profile enables you to allocate forwarding-table entries among four banks of shared hash tables with a total memory equal to 128,000 unicast IPv4 addresses, or 32,000 entries for each bank. Specifically, you can allocate one or more of these shared banks to store a specific type of forwarding-table entry. The custom profile does not affect the dedicated hash tables. Those tables remain fixed with 8,000 entries allocated to Layer 2 addresses, the equivalent of 8,000 entries allocated to IPv4 addresses, and the equivalent of 16,000 entries allocated to longest prefix match (LPM) addresses.

In this example, you allocate two memory banks to Layer 3 host addresses, and two memory banks to LPM entries. This means that no shared hash table memory is allocated for Layer 2 addresses. Only the dedicated hash table memory is allocated for Layer 2 addresses in this scenario.

Configuration

To configure a custom profile for the Unified Forwarding Table feature on a QFX5200 switch that allocates two shared memory banks for Layer 3 host addresses and two shared memory banks for LPM entries, perform these tasks:

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, copy and paste the commands into the CLI at the [edit] hierarchy level, and then enter commit from configuration mode. A commit check is performed to ensure that you have allocated forwarding-table space for no more than four memory banks.
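The original command listing is not preserved here; based on the allocation described in this example (no shared banks for Layer 2, two banks each for Layer 3 host and LPM entries), the statements are along these lines. Verify the statement names against your release:

```
set chassis forwarding-options custom-profile l2-entries num-banks 0
set chassis forwarding-options custom-profile l3-entries num-banks 2
set chassis forwarding-options custom-profile lpm-entries num-banks 2
```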

Caution

When you configure and commit a profile, the Packet Forwarding Engine restarts and all the data interfaces on the switch go down and come back up.

Configuring the Custom Profile

Step-by-Step Procedure

To create the custom profile:

  1. Specify the custom-profile option.
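The step above corresponds to a statement of roughly the following form (hierarchy assumed; confirm against your release):

```
[edit]
user@switch# set chassis forwarding-options custom-profile
```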

Configuring the Allocation of Shared Memory Banks

Step-by-Step Procedure

To allocate memory for specific types of entries for the shared memory banks:

  1. Specify to allocate no shared bank memory for Layer 2 entries.
  2. Specify to allocate two shared memory banks (or the equivalent of 64,000 IPv4 entries) for Layer 3 host entries.
  3. Specify to allocate two shared memory banks (or the equivalent of 64,000 IPv4 entries) for LPM entries.
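The three steps above can be sketched as the following statements, entered at the custom-profile hierarchy level. Each shared bank holds the equivalent of 32,000 IPv4 entries, so two banks equal 64,000 entries; the statement names are assumed from the QFX custom-profile hierarchy:

```
[edit chassis forwarding-options custom-profile]
user@switch# set l2-entries num-banks 0    # no shared banks for Layer 2
user@switch# set l3-entries num-banks 2    # 2 banks (~64K IPv4-equivalent) for hosts
user@switch# set lpm-entries num-banks 2   # 2 banks (~64K IPv4-equivalent) for LPM
```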

Results

From configuration mode, confirm your configuration by entering the show chassis forwarding-options command. If the output does not display the intended configuration, repeat the instructions in this example to correct the configuration.

If you are done configuring the switch, enter commit from configuration mode.

Caution

The Packet Forwarding Engine will restart and all the data interfaces on the switch will go down and come back up.

Verification

Confirm that the configuration is working properly.

Checking the Parameters of the Custom Profile

Purpose

Verify that the custom profile is enabled.

Action

Meaning

The output shows that the custom profile is enabled as configured with two shared memory banks designated for Layer 3 host entries; two shared memory banks designated for LPM entries; and no shared memory allocated for Layer 2 entries.

The total scale(K) field shows the total allocation of memory, that is, the amount allocated through the shared memory banks plus the amount allocated through the dedicated hash tables. The amount allocated through the dedicated hash tables is fixed and cannot be changed. Therefore, Layer 2 entries have 8K of memory allocated only through the dedicated hash table. Layer 3 host entries have 64K of memory allocated through two shared memory banks plus 8K through the dedicated hash table, for a total of 72K of memory. LPM entries have 64K of memory allocated through two shared memory banks plus 16K through the dedicated hash table, for a total of 80K of memory.

Configuring the Unified Forwarding Table on Switches

Traditionally, forwarding tables have been statically defined and have supported only a fixed number of entries for each type of address stored in the tables. The Unified Forwarding Table feature lets you optimize how your switch allocates forwarding-table memory for different types of addresses. You can choose one of five unified forwarding table profiles. Each profile allocates a different maximum amount of memory for Layer 2, Layer 3 host, and longest prefix match (LPM) entries. In addition to selecting a profile, you can also select how much additional memory to allocate for LPM entries.

Two profiles allocate higher percentages of memory to Layer 2 addresses. A third profile allocates a higher percentage of memory to Layer 3 host addresses, while a fourth profile allocates a higher percentage of memory to LPM entries. The default profile allocates an equal amount of memory to Layer 2 and Layer 3 host addresses, with the remainder allocated to LPM entries. For a switch in a virtualized network that handles a great deal of Layer 2 traffic, you would choose a profile that allocates a higher percentage of memory to Layer 2 addresses. For a switch that operates in the core of the network, you would choose a profile that allocates a higher percentage of memory to LPM entries.

On QFX5200 and QFX5210-64C switches only, you can also configure a custom profile that allows you to partition shared memory banks among the different types of forwarding table entries. On QFX5200 switches, these shared memory banks have a total memory equal to 128,000 IPv4 unicast addresses. On QFX5210 switches, these shared memory banks have a total memory equal to 256,000 IPv4 unicast addresses. For more information about configuring the custom profile, see Example: Configuring a Unified Forwarding Table Custom Profile.

Configuring a Unified Forwarding Table Profile

To configure a unified forwarding table profile:

Specify a forwarding-table profile.

For example, to specify the profile that allocates the highest percentage of memory to Layer 2 traffic:
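The original example command is not preserved in this copy; based on Table 1, the profile that allocates the most memory to Layer 2 entries is l2-profile-one, so the statement would be:

```
[edit]
user@switch# set chassis forwarding-options l2-profile-one
```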

Caution

When you configure and commit a profile, in most cases the Packet Forwarding Engine automatically restarts and all the data interfaces on the switch go down and come back up (the management interfaces are unaffected).

Starting with Junos OS Releases 14.1X53-D40, 15.1R5, and 16.1R3, for a Virtual Chassis or Virtual Chassis Fabric (VCF) comprised of EX4600 or QFX5100 switches, the Packet Forwarding Engine in member switches does not automatically restart upon configuring and committing a unified forwarding table profile change. This behavior avoids Virtual Chassis or VCF instability after the change propagates to member switches and multiple Packet Forwarding Engines automatically restart at the same time. Instead, a message is displayed at the CLI prompt and logged to the switch’s system log to notify you that the profile change does not take effect until the next time you reboot the Virtual Chassis or VCF. We recommend that you plan to make profile changes only when you can perform a Virtual Chassis or VCF system reboot immediately after committing the configuration update. Otherwise, the Virtual Chassis or VCF could become inconsistent if one or more members have a problem and restart with the new configuration before a planned system reboot activates the change on all members.

Note

You can configure only one profile for the entire switch.

Note

The l2-profile-three profile is configured by default.

Note

If the host table stores the maximum number of entries for any given type, the entire table is full and is unable to accommodate any entries of any other type. Keep in mind that an IPv6 unicast address occupies twice as much memory as an IPv4 unicast address, and an IPv6 multicast address occupies four times as much memory as an IPv4 unicast address.

Configuring the Memory Allocation for Longest Prefix Match Entries

In addition to choosing a profile, you can further optimize memory allocation for longest prefix match (LPM) entries by configuring how many IPv6 prefixes to store with lengths from /65 through /127. The switch uses LPM entries during address lookup to match addresses to the most-specific (longest) applicable prefix. Prefixes of this type are stored in the space for ternary content addressable memory (TCAM). Changing the default parameters makes this space available for LPM entries. Increasing the amount of memory available for these IPv6 prefixes reduces by the same amount how much memory is available to store IPv4 unicast prefixes and IPv6 prefixes with lengths equal to or less than 64.

The procedures for configuring the LPM table differ depending on which version of Junos OS you are using. In the initial releases in which UFT is supported, Junos OS Releases 13.2X51-D10 and 13.2X52-D10, you can only increase the amount of memory allocated to IPv6 prefixes with lengths from /65 through /127, for any profile except the lpm-profile. Starting with Junos OS Release 13.2X51-D15, you can also allocate less or no memory for IPv6 prefixes with lengths in the range /65 through /127, depending on which profile is configured. For the lpm-profile, however, the only change you can make to the default parameters is to allocate no memory for these types of prefixes.

Configuring the LPM Table With Junos OS Releases 13.2X51-D10 and 13.2X52-D10

In Junos OS Releases 13.2X51-D10 and 13.2X52-D10, by default, the switch allocates memory for 16 IPv6 prefixes with lengths in the range /65 through /127. You can configure the switch to allocate more memory for IPv6 prefixes with lengths in this range.

To allocate more memory for IPv6 prefixes in the range /65 through /127:

  1. Choose a forwarding table profile.

    For example, to specify the profile that allocates the highest percentage of memory to Layer 2 traffic:

  2. Select how much memory to allocate for IPv6 prefixes in the range /65 through /127.

    For example, to specify to allocate memory for 32 IPv6 prefixes in the range /65 through /127:
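The original command listings for these two steps are not preserved here; on these early releases, where num-65-127-prefix takes an absolute count of prefixes, the statements would be roughly as follows (l2-profile-one is an illustrative profile choice):

```
[edit]
user@switch# set chassis forwarding-options l2-profile-one
user@switch# set chassis forwarding-options l2-profile-one num-65-127-prefix 32
```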

Note

When you configure and commit the num-65-127-prefix number statement, all the data interfaces on the switch restart. The management interfaces are unaffected.

The num-65-127-prefix number statement is not supported on the lpm-profile.

Configuring the LPM Table With Junos OS Release 13.2x51-D15 and Later

Configuring Layer 2 and Layer 3 Profiles With Junos OS Release 13.2x51-D15 or Later

Starting in Junos OS Release 13.2X51-D15, you can configure the switch to allocate forwarding table memory for as many as 4,000 IPv6 prefixes with lengths in the range /65 through /127 for any profile other than the lpm-profile or custom-profile. You can also specify to allocate no memory for these IPv6 entries. The default is 1,000 entries for IPv6 prefixes with lengths in the range /65 through /127. Previously, the maximum you could configure was 2,048 entries for IPv6 prefixes with lengths in the range /65 through /127, and the minimum was 16 entries, which was the default.

To specify how much forwarding table memory to allocate for IPv6 prefixes with lengths in the range /65 through /127:

  1. Choose a forwarding table profile.

    For example, to specify the profile that allocates the highest percentage of memory to Layer 2 traffic:

  2. Select how much memory to allocate for IPv6 prefixes in the range /65 through /127.

    For example, to allocate memory for 2,000 IPv6 prefixes in the range /65 through /127:
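The command examples for these steps are missing from this copy. A likely reconstruction, assuming the l2-profile-three profile and a num-65-127-prefix value expressed in thousands of entries (so a value of 2 allocates 2,000 prefixes, consistent with Table 12):

    [edit chassis]
    user@switch# set forwarding-options l2-profile-three
    user@switch# set forwarding-options l2-profile-three num-65-127-prefix 2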

Starting with Junos OS Release 13.2X51-D15, you can use the num-65-127-prefix statement to allocate entries. Table 12 shows the numbers of entries that you can allocate. Each row represents a case in which the table is full and cannot accommodate any more entries.

Table 12: LPM Table Combinations for L2 and L3 profiles With Junos OS 13.2X51-D15 and Later

num-65-127-prefix Value | IPv4 Entries | IPv6 Entries (Prefix <= 64) | IPv6 Entries (Prefix >= 65)
0                       | 16K          | 8K                          | 0K
1 (default)             | 12K          | 6K                          | 1K
2                       | 8K           | 4K                          | 2K
3                       | 4K           | 2K                          | 3K
4                       | 0K           | 0K                          | 4K

Caution

When you configure and commit a profile change with the num-65-127-prefix number statement, the Packet Forwarding Engine automatically restarts and all the data interfaces on the switch go down and come back up (the management interfaces are unaffected).

However, starting with Junos OS Releases 14.1X53-D40, 15.1R5, and 16.1R3, Packet Forwarding Engines on switches in a Virtual Chassis or Virtual Chassis Fabric (VCF) do not automatically restart upon configuring a unified forwarding table profile change. This behavior avoids Virtual Chassis or VCF instability after the change propagates to member switches and multiple Packet Forwarding Engines automatically restart at the same time. Instead, a message is displayed at the CLI prompt and logged to the switch’s system log to notify you that the profile change does not take effect until the next time you reboot the Virtual Chassis or VCF. We recommend that you plan to make profile changes only when you can perform a Virtual Chassis or VCF system reboot immediately after committing the configuration update. Otherwise, the Virtual Chassis or VCF could become inconsistent if one or more members have a problem and restart with the new configuration before a planned system reboot activates the change on all members.

Configuring the lpm-profile With Junos OS Release 13.2X51-D15 and Later

Starting with Junos OS Release 13.2X51-D15, you can configure the lpm-profile profile not to allocate any memory for IPv6 entries with prefix lengths from /65 through /127. These are the default maximum values allocated for LPM memory for the lpm-profile, by address type:

  • 128K of IPv4 prefixes

  • 16K of IPv6 prefixes (all lengths)

Note

The memory allocated for each address type represents the maximum default value for all LPM memory.

To configure the lpm-profile not to allocate forwarding-table memory for IPv6 entries with prefixes from /65 through /127, thus allocating more memory for IPv4:

Specify that no forwarding-table memory is allocated for IPv6 prefixes with lengths in the range /65 through /127.
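The statement itself is missing from this copy; based on the prefix-65-127-disable option named in the following paragraphs, the configuration is presumably:

    [edit chassis]
    user@switch# set forwarding-options lpm-profile prefix-65-127-disable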

For example, on the QFX5100 and EX4600 switches only, if you use the prefix-65-127-disable option, each of the following combinations is valid:

  • 100K IPv4 and 28K IPv6 /64 or shorter prefixes.

  • 64K IPv4 and 64K IPv6 /64 or shorter prefixes.

  • 128K IPv4 and 0K IPv6 /64 or shorter prefixes.

  • 0K IPv4 and 128K IPv6 /64 or shorter prefixes.

Note

On the QFX5200 switches, when you configure the prefix-65-127-disable statement, the maximum number of IPv6 entries with prefixes equal to or shorter than 64 is 98,000.

Configuring the lpm-profile With Junos OS Release 14.1X53-D30 and Later

Starting in Junos OS Release 15.1X53-D30, you can configure the lpm-profile profile to store unicast IPv4 and IPv6 host addresses in the LPM table, thereby freeing memory in the host table. Unicast IPv4 and IPv6 addresses are stored in the LPM table instead of the host table, as shown in Table 13 for QFX5100 and EX4600 switches. (Platform support depends on the Junos OS release in your installation.) You can use this option in conjunction with the option to allocate no memory in the LPM table for IPv6 entries with prefix lengths in the range /65 through /127. Together, these options maximize the amount of memory available for IPv4 unicast entries and for IPv6 entries with prefix lengths of /64 or shorter.

Table 13: lpm-profile with unicast-in-lpm Option for QFX5100 and EX4600 Switches

prefix-65-127-disable | MAC Table | Host Table (multicast addresses)                  | LPM Table (unicast addresses)
                      | MAC | IPv4 unicast | IPv6 unicast | IPv4 (*, G) | IPv4 (S, G) | IPv6 (*, G) | IPv6 (S, G) | IPv4 unicast | IPv6 unicast (</65) | IPv6 unicast (>/64)
No                    | 32K | 0            | 0            | 8K          | 8K          | 4K          | 4K          | 128K         | 16K                 | 16K
Yes                   | 32K | 0            | 0            | 8K          | 8K          | 4K          | 4K          | 128K         | 128K                | 0

Starting with Junos OS Release 18.1R1, you cannot configure a value for the num-65-127-prefix statement on non-LPM profiles. You can only enable or disable the prefix-65-127-disable statement for the lpm-profile.

Table 14 lists the situations in which the prefix-65-127-disable statement should be enabled or disabled.

Table 14: LPM Table Size Variations on QFX5200-48Y Switches

prefix-65-127-disable | IPv4 <= /32                 | IPv6 <= /64 | IPv6 > /64
Enabled               | > 128K (minimum guaranteed) | 98K         | 0K
Disabled              | 128K                        | 16K         | 16K

On QFX5120 and EX4650 switches, you cannot configure a value for the num-65-127-prefix statement on non-LPM profiles. You can only enable or disable the prefix-65-127-disable statement for the lpm-profile.

Table 15 lists the situations in which the prefix-65-127-disable statement should be enabled or disabled.

Table 15: LPM Table Size Variations on QFX5120 and EX4650 Switches

prefix-65-127-disable | IPv4 <= /32                | IPv6 <= /64                | IPv6 > /64
Enabled               | 351K (360,000 approximate) | 168K (172,000 approximate) | 0K
Disabled              | 168K (172,000 approximate) | 64K (65,536 approximate)   | 64K (65,536 approximate)

Note that all entries in each table share the same memory space. If a table stores the maximum number of entries of any given type, the entire shared table is full and cannot accommodate entries of any other type. For example, if you use the unicast-in-lpm option and 128K IPv4 unicast addresses are stored in the LPM table, the entire LPM table is full and no IPv6 addresses can be stored. Similarly, if you use the unicast-in-lpm option but not the prefix-65-127-disable option, and 16K IPv6 addresses with prefixes shorter than /65 are stored, the entire LPM table is full and no additional addresses (IPv4 or IPv6) can be stored.

To configure the lpm-profile to store unicast IPv4 entries and IPv6 entries with prefix lengths equal to or less than 64 in the LPM table:

  1. Specify the option to store these entries in the LPM table.
  2. (Optional) Specify that no memory is allocated in the LPM table for IPv6 prefixes with lengths in the range /65 through /127.
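The configuration statements for these steps are not shown in this copy; based on the unicast-in-lpm and prefix-65-127-disable options named above, they are presumably:

    [edit chassis]
    user@switch# set forwarding-options lpm-profile unicast-in-lpm
    user@switch# set forwarding-options lpm-profile prefix-65-127-disable
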

Configuring Non-LPM Profiles on QFX5120 and EX4650 Switches

For non-LPM profiles, each profile provides the option of reserving a portion of the 16K l3-defip table to store IPv6 prefixes longer than /64. Because these are 128-bit prefixes, you can have a maximum of 8K IPv6 /128 entries in the l3-defip table.

  1. Choose a forwarding table profile.

    For example, to specify the profile that allocates the highest percentage of memory to Layer 3 traffic:

  2. Select how much memory to allocate for IPv6 prefixes in the range /65 through /127.

    For example, to allocate memory for 2,000 IPv6 prefixes in the range /65 through /127:

    You can choose a value from 0 through 4; 1 is the default.
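The command examples are missing here. A plausible reconstruction, assuming the l3-profile profile (the profile that allocates the highest percentage of memory to Layer 3 traffic) and a num-65-127-prefix value expressed in thousands of entries:

    [edit chassis]
    user@switch# set forwarding-options l3-profile
    user@switch# set forwarding-options l3-profile num-65-127-prefix 2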

Configuring Forwarding Mode on Switches

By default, packets are forwarded using store-and-forward mode. You can configure all the interfaces to use cut-through mode instead.

To enable cut-through switching mode, enter the following statement:

[edit forwarding-options]

user@switch# set cut-through

See also

Disabling Layer 2 Learning and Forwarding

Disabling dynamic MAC learning on an MX Series router or an EX Series switch prevents all the logical interfaces on the router or switch from learning source and destination MAC addresses.

To disable MAC learning for an MX Series router or an EX Series switch, include the global-no-mac-learning statement at the [edit protocols l2-learning] hierarchy level:
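The statement example is missing from this copy; the configuration is presumably:

    [edit protocols l2-learning]
    user@host# set global-no-mac-learning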

For information about how to configure a virtual switch, see Configuring a Layer 2 Virtual Switch .

Release History Table
Starting with Junos OS Release 18.1R1 on the QFX5210-64C switch, for all these profiles except for the lpm-profile, the longest prefix match (LPM) table size is equal to 32,000 IPv4 entries.
Starting with Junos OS Release 18.3R1 on the QFX5120 and EX4650 switches, for all these profiles except for the lpm-profile, the longest prefix match (LPM) table size is equal to 32,000 IPv4 entries.
Starting with Junos OS Release 18.1R1 on QFX5210 switches, you can configure TCAM space to allocate a maximum of 8,000 IPv6 entries with prefix lengths of 65 or longer. The default value is 2,000 entries.
Starting with Junos OS Release 18.1R1, you cannot configure a value for the num-65-127-prefix statement on non-LPM profiles. You can only enable or disable the prefix-65-127-disable statement for the lpm-profile.
Starting with Junos OS Releases 14.1X53-D40, 15.1R5, and 16.1R3, for a Virtual Chassis or Virtual Chassis Fabric (VCF) comprised of EX4600 or QFX5100 switches, the Packet Forwarding Engine in member switches does not automatically restart upon configuring and committing a unified forwarding table profile change.
Starting with Junos OS Release 13.2X51-D15, you can configure TCAM space to allocate a maximum of 4,000 IPv6 entries with prefix lengths of 65 or longer. The default value is 1,000 entries.
Starting with Junos OS Release 13.2X51-D15, you can also allocate either less or no memory for IPv6 prefixes with lengths in the range /65 through /127, depending on which profile is configured.
Starting in Junos OS Release 13.2X51-D15, you can configure the switch to allocate forwarding table memory for as many as 4,000 IPv6 prefixes with lengths in the range /65 through /127 for any profile other than the lpm-profile or custom-profile.
Starting with Junos OS Release 13.2X51-D15, you can use the num-65-127-prefix statement to allocate entries.