Example: Configuring Any-Source Draft-Rosen 6 Multicast VPNs

 

Understanding Any-Source Multicast

Any-source multicast (ASM) is the form of multicast in which multiple senders can transmit on the same group, as opposed to source-specific multicast (SSM), in which a single particular source is specified. The original multicast specification, RFC 1112, supports both the ASM many-to-many model and the SSM one-to-many model. In ASM, multicast state is specified as (*,G) rather than as a specific (S,G) source and group pair, meaning that traffic for the multicast group can be provided by multiple sources.

An ASM network must be able to determine the locations of all sources for a particular multicast group whenever there are interested listeners, no matter where the sources might be located in the network. In ASM, source discovery is a required function of the network itself.

In an environment where many sources come and go, such as for a video conferencing service, ASM is appropriate. Multicast source discovery appears to be an easy process, but in sparse mode it is not. In dense mode, it is simple enough to flood traffic to every router in the network so that every router learns the source address of the content for that multicast group.

However, in PIM sparse mode, the flooding presents scalability and network resource use issues and is not a viable option.

Example: Configuring Any-Source Multicast for Draft-Rosen VPNs

This example shows how to configure an any-source multicast VPN (MVPN) using a dual PIM configuration with a customer RP and a provider RP, mapping the multicast routes from customer to provider (an approach known as draft-rosen). The Junos OS complies with RFC 4364 and the Internet draft draft-rosen-vpn-mcast-07.txt, Multicast in MPLS/BGP VPNs.

Requirements

Before you begin:

Overview

Draft-rosen multicast virtual private networks (MVPNs) can be configured to support service provider tunnels operating in any-source multicast (ASM) mode or source-specific multicast (SSM) mode.

In this example, the term multicast Layer 3 VPNs is used to refer to draft-rosen MVPNs.

This example includes the following settings.

  • interface lo0.1—Configures an additional unit on the loopback interface of the PE router. For the lo0.1 interface, assign an address from the VPN address space. Add the lo0.1 interface to the following places in the configuration:

    • VRF routing instance

    • PIM in the VRF routing instance

    • IGP and BGP policies to advertise the interface in the VPN address space

    In multicast Layer 3 VPNs, the multicast PE routers must use the primary loopback address (or router ID) for sessions with their internal BGP peers. If the PE routers use a route reflector and the next hop is configured as self, Layer 3 multicast over VPN will not work, because PIM cannot transmit upstream interface information for multicast sources behind remote PEs into the network core. Multicast Layer 3 VPNs require that the BGP next-hop address of the VPN route match the BGP next-hop address of the loopback VRF instance address.

  • protocols pim interface—Configures the interfaces between each provider router and the PE routers. On all CE routers, include this statement on the interfaces facing toward the provider router acting as the RP.

  • protocols pim mode sparse—Enables PIM sparse mode on the lo0 interface of all PE routers. You can either configure that specific interface or configure all interfaces with the interface all statement. On CE routers, you can configure sparse mode or sparse-dense mode.

  • protocols pim rp local—On all routers acting as the RP, configure the address of the local lo0 interface. The P router acts as the RP router in this example.

  • protocols pim rp static—On all PE and CE routers, configure the address of the router acting as the RP.

    It is possible for a PE router to be configured as the VPN customer RP (C-RP) router. A PE router can also act as the DR. This type of PE configuration can simplify configuration of customer DRs and VPN C-RPs for multicast VPNs. This example does not discuss the use of the PE as the VPN C-RP.

    Figure 1 shows multicast connectivity on the customer edge. In the figure, CE2 is the RP router. However, the RP router can be anywhere in the customer network.

    Figure 1: Multicast Connectivity on the CE Routers
  • protocols pim version 2—Enables PIM version 2 on the lo0 interface of all PE routers and CE routers. You can either configure that specific interface or configure all interfaces with the interface all statement.

  • group-address—In a routing instance, configure multicast connectivity for the VPN on the PE routers. Configure a VPN group address on the interfaces facing toward the router acting as the RP.

    The PIM configuration in the VPN routing and forwarding (VRF) instance on the PE routers needs to match the master PIM instance on the CE router. Therefore, the PE router contains both a master PIM instance (to communicate with the provider core) and the VRF instance (to communicate with the CE routers).

    VRF instances that are part of the same VPN share the same VPN group address. For example, all PE routers containing multicast-enabled routing instance VPN-A share the same VPN group address configuration. In Figure 2, the shared VPN group address configuration is 239.1.1.1.

    Figure 2: Multicast Connectivity for the VPN
  • routing-instances instance-name protocols pim rib-group—Adds the routing group to the VPN's VRF instance.

  • routing-options rib-groups—Configures the multicast routing group.
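Taken together, the settings in this list might appear on a PE router as the following partial configuration. This is a sketch only: the instance name VPN-A, the group address 239.1.1.1, and the RP addresses come from this example, but the rib-group name mcast-rib and the exact rib membership are illustrative assumptions.

```
routing-options {
    rib-groups {
        mcast-rib {                          # illustrative rib-group name
            import-rib [ VPN-A.inet.2 ];     # exact ribs depend on your RPF design
        }
    }
}
protocols {
    pim {
        rp {
            static {
                address 10.255.71.47;        # provider RP (the P router)
            }
        }
        interface all {
            mode sparse;
            version 2;
        }
    }
}
routing-instances {
    VPN-A {
        protocols {
            pim {
                vpn-group-address 239.1.1.1; # shared by all PE routers in VPN-A
                rib-group inet mcast-rib;
                rp {
                    static {
                        address 10.255.245.91;   # customer RP (CE2)
                    }
                }
                interface all {
                    mode sparse;
                    version 2;
                }
            }
        }
    }
}
```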

This example describes how to configure multicast in PIM sparse mode for a range of multicast addresses for VPN-A as shown in Figure 3.

Figure 3: Customer Edge and Service Provider Networks

Configuration

CLI Quick Configuration

To quickly configure this example, copy the following commands, paste them into a text file, remove any line breaks, change any details necessary to match your network configuration, and then copy and paste the commands into the CLI at the [edit] hierarchy level.

PE1
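The original command listing is not reproduced here. As a sketch of what the PE1 quick configuration contains, the set commands would be along the following lines; the customer-facing interface name, route distinguisher, VRF target, and lo0.1 address are illustrative assumptions rather than values from this example:

```
set interfaces lo0 unit 1 family inet address 10.10.47.101/32
set routing-instances VPN-A instance-type vrf
set routing-instances VPN-A interface ge-0/0/0.0
set routing-instances VPN-A interface lo0.1
set routing-instances VPN-A route-distinguisher 10.255.71.46:100
set routing-instances VPN-A vrf-target target:65000:100
set routing-instances VPN-A protocols pim vpn-group-address 239.1.1.1
set routing-instances VPN-A protocols pim rp static address 10.255.245.91
set routing-instances VPN-A protocols pim interface all mode sparse
set routing-instances VPN-A protocols pim interface all version 2
set protocols pim rp static address 10.255.71.47
set protocols pim interface all mode sparse
set protocols pim interface all version 2
```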

Step-by-Step Procedure

The following example requires you to navigate various levels in the configuration hierarchy. For information about navigating the CLI, see Using the CLI Editor in Configuration Mode in the CLI User Guide.

To configure multicast for draft-rosen VPNs:

  1. Configure PIM on the P router.

  2. Configure PIM on the PE1 and PE2 routers. Specify a static RP—the P router (10.255.71.47).
  3. Configure PIM on CE1. Specify the RP address for the VPN RP—Router CE2 (10.255.245.91).
  4. Configure PIM on CE2, which acts as the VPN RP. Specify CE2's address (10.255.245.91).
  5. On PE1, configure the routing instance (VPN-A) for the Layer 3 VPN.
  6. On PE1, configure the IGP policy to advertise the interfaces in the VPN address space.
  7. On PE1, set the RP configuration for the VRF instance. The RP configuration within the VRF instance provides explicit knowledge of the RP address, so that the (*,G) state can be forwarded.
  8. On PE1, configure the loopback interfaces.
  9. As you did for the PE1 router, configure the PE2 router.
  10. When one of the PE routers is running Cisco Systems IOS software, you must configure the Juniper Networks PE router to support this multicast interoperability requirement. The Juniper Networks PE router must have the lo0.0 interface in the master routing instance and the lo0.1 interface assigned to the VPN routing instance. You must configure the lo0.1 interface with the same IP address that the lo0.0 interface uses for BGP peering in the provider core in the master routing instance.

    Configure the same IP address on the lo0.0 and lo0.1 loopback interfaces of the Juniper Networks PE router at the [edit interfaces lo0] hierarchy level, and assign the address used for BGP peering in the provider core in the master routing instance. In this alternate example, unit 0 and unit 1 are configured for Cisco IOS interoperability.

  11. Configure the multicast routing table group, which gives PIM access to inet.2 when doing RPF checks. If you are using inet.0 for multicast RPF checks, omit this step, because it will prevent your multicast configuration from working.
  12. Activate the multicast routing table group in the VPN's VRF instance.
  13. If you are done configuring the device, commit the configuration.
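Steps 11 and 12 can be sketched as follows. The rib-group name mcast-rib is an illustrative placeholder, and the exact rib membership depends on which tables your design uses for RPF checks:

```
set routing-options rib-groups mcast-rib import-rib VPN-A.inet.2
set routing-instances VPN-A protocols pim rib-group inet mcast-rib
```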

Results

Confirm your configuration by entering the show interfaces, show protocols, show routing-instances, and show routing-options commands from configuration mode. If the output does not display the intended configuration, repeat the instructions in this example to correct the configuration. This output shows the configuration on PE1.

Verification

To verify the configuration, run the following commands:

  1. Display multicast tunnel information and the number of neighbors by using the show pim interfaces instance instance-name command from the PE1 or PE2 router. When issued from the PE1 router, the output is as follows:
    user@host> show pim interfaces instance VPN-A

    You can also display all PE tunnel interfaces by using the show pim join command from the provider router acting as the RP.

  2. Display multicast tunnel interface information, DR information, and the PIM neighbor status between VRF instances on the PE1 and PE2 routers by using the show pim neighbors instance instance-name command from either PE router. When issued from the PE1 router, the output is as follows:
    user@host> show pim neighbors instance VPN-A

Load Balancing Multicast Tunnel Interfaces Among Available PICs

When you configure multicast on draft-rosen Layer 3 VPNs, multicast tunnel interfaces are automatically generated to encapsulate and de-encapsulate control and data traffic.

To generate multicast tunnel interfaces, a routing device must have one or more of the following tunnel-capable PICs:

  • Adaptive Services PIC

  • Multiservices PIC or Multiservices DPC

  • Tunnel Services PIC

  • On MX Series routers, a PIC created with the tunnel-services statement at the [edit chassis fpc slot-number pic number] hierarchy level
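On an MX Series router, the last item in this list corresponds to a chassis statement along these lines; the FPC slot, PIC number, and bandwidth value are illustrative:

```
set chassis fpc 1 pic 2 tunnel-services bandwidth 1g
```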

Note

A routing device is a router or an EX Series switch that is functioning as a router.

If a routing device has multiple such PICs, it might be important in your implementation to load balance the tunnel interfaces across the available tunnel-capable PICs.

The multicast tunnel interface that is used for encapsulation, mt-[xxxxx], is in the range from 32,768 through 49,151. The interface mt-[yyyyy], used for de-encapsulation, is in the range from 1,081,344 through 1,107,827. PIM runs only on the encapsulation interface. The de-encapsulation interface populates downstream interface information. For the default MDT, an instance’s de-encapsulation and encapsulation interfaces are always created on the same PIC.

For each VPN, the PE routers build a multicast distribution tree within the service provider core network. After the tree is created, each PE router encapsulates all multicast traffic (data and control messages) from the attached VPN and sends the encapsulated traffic to the VPN group address. Because all the PE routers are members of the outgoing interface list in the multicast distribution tree for the VPN group address, they all receive the encapsulated traffic. When the PE routers receive the encapsulated traffic, they de-encapsulate the messages and send the data and control messages to the CE routers.

If a routing device has multiple tunnel-capable PICs (for example, two Tunnel Services PICs), the routing device load balances the creation of tunnel interfaces among the available PICs. However, in some cases (for example, after a reboot), a single PIC might be selected for all of the tunnel interfaces. This causes one PIC to carry a heavy load while other available PICs are underutilized. To prevent this, you can manually configure load balancing and distribute the load uniformly across the available PICs.

The definition of a balanced state is determined by you and by the requirements of your Layer 3 VPN implementation. You might want all of the instances to be evenly distributed across the available PICs or across a configured list of PICs. You might want all of the encapsulation interfaces from all of the instances to be evenly distributed across the available PICs or across a configured list of PICs. If the bandwidth of each tunnel encapsulation interface is considered, you might choose a different distribution. You can design your load-balancing configuration based on each instance or on each routing device.

Note

In a Layer 3 VPN, each of the following routing devices must have at least one tunnel-capable PIC:

  • Each provider edge (PE) router.

  • Any provider (P) router acting as the RP.

  • Any customer edge (CE) router that is acting as a source's DR or as an RP. A receiver's designated router does not need a tunnel-capable PIC.

To configure load balancing:

  1. On an M Series or T Series router or on an EX Series switch, install more than one tunnel-capable PIC. (In some implementations, only one PIC is required. Load balancing is based on the assumption that a routing device has more than one tunnel-capable PIC.)
  2. On an MX Series router, configure more than one tunnel-capable PIC.
  3. Configure Layer 3 VPNs as described in Example: Configuring Any-Source Multicast for Draft-Rosen VPNs.
  4. For each VPN, specify a PIC list.

    The physical position of the PIC in the routing device determines the multicast tunnel interface name. For example, if you have an Adaptive Services PIC installed in FPC slot 0 and PIC slot 0, the corresponding multicast tunnel interface name is mt-0/0/0. The same is true for Tunnel Services PICs, Multiservices PICs, and Multiservices DPCs.

    In the tunnel-devices statement, the order of the PIC list that you specify does not impact how the interfaces are allocated. An instance uses all of the listed PICs to create default encapsulation and de-encapsulation interfaces, and data MDT encapsulation interfaces. The instance uses a round-robin approach to distributing the tunnel interfaces (default and data MDT) across the PIC list (or across the available PICs, in the absence of a PIC list).

    For the first tunnel, the round-robin algorithm starts with the lowest-numbered PIC. The second tunnel is created on the next-lowest-numbered PIC, and so on. The selection algorithm works routing-device-wide; the round robin does not restart at the lowest-numbered PIC for each new instance. This applies to both the default and data MDT tunnel interfaces.

    If one PIC in the list fails, new tunnel interfaces are created on the remaining PICs in the list using the round-robin algorithm. If all the PICs in the list go down, all tunnel interfaces are deleted and no new tunnel interfaces are created. If a PIC in the list comes up from the down state and the restored PIC is the only PIC that is up, the interfaces are reassigned to the restored PIC. If a PIC in the list comes up from the down state and other PICs are already up, an interface reassignment is not done. However, when a new tunnel interface needs to be created, the restored PIC is available for the selection process. If you include in the PIC list a PIC that is not installed on the routing device, the PIC is treated as if it is present but in the down state.

    To balance the interfaces among the instances, you can restrict each instance to a single PIC. For example, if you have instances vpn1 through vpn10 and three PICs (for example, mt-1/1/0, mt-1/2/0, and mt-2/0/0), you can configure vpn1 through vpn4 to use only mt-1/1/0, vpn5 through vpn7 to use only mt-1/2/0, and vpn8 through vpn10 to use only mt-2/0/0.

  5. Commit the configuration.

    When you commit a new PIC list configuration, all the multicast tunnel interfaces for the routing instance are deleted and re-created using the new PIC list.

  6. If you reboot the routing device, some PICs come up faster than others. The difference can be minutes. Therefore, when the tunnel interfaces are created, the known PIC list might not be the same as when the routing device is fully rebooted. This causes the tunnel interfaces to be created on some but not all available and configured PICs. To remedy this situation, you can manually rebalance the PIC load.

    Check to determine if a load rebalance is necessary.

    user@host> show interfaces terse | match mt-

    The output shows that mt-1/1/0 has only one tunnel encapsulation interface, while mt-1/2/0 has three tunnel encapsulation interfaces. In a case like this, you might decide to rebalance the interfaces. As stated previously, encapsulation interfaces are in the range from 32,768 through 49,151. In determining whether a rebalance is necessary, look at the encapsulation interfaces only, because the default MDT de-encapsulation interface always resides on the same PIC with the default MDT encapsulation interface.

  7. (Optional) Rebalance the PIC load.

    Issued for a specific routing instance, the rebalance operation re-creates and rebalances all tunnel interfaces for that instance. Issued without an instance name, it re-creates and rebalances the tunnel interfaces for all routing instances.

  8. Verify that the PIC load is balanced.
    user@host> show interfaces terse | match mt-

    The output shows that mt-1/1/0 has two encapsulation interfaces, and mt-1/2/0 also has two encapsulation interfaces.
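The per-instance PIC assignment described in step 4 can be sketched with the tunnel-devices statement at the [edit routing-instances instance-name protocols pim] hierarchy level; the instance and PIC names follow the vpn1 through vpn10 example above and are illustrative:

```
set routing-instances vpn1 protocols pim tunnel-devices mt-1/1/0
set routing-instances vpn5 protocols pim tunnel-devices mt-1/2/0
set routing-instances vpn8 protocols pim tunnel-devices mt-2/0/0
```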