
Inline Static NAT over Layer 3 VPNs for Business Edge


About This Example

This example shows how a service provider can use inline NAT to give enterprise employees on different networks access to cloud services, translating traffic as it travels from the customers’ LANs across the service provider’s MPLS core to those services. The example consists of the following:

  • Three Customer Edge (CE) routers that originate traffic from the customer LANs to cloud services.

  • Three Provider Edge (PE) routers.

  • Cloud Services that could belong to the enterprise or to the service provider.

Figure 1: Inline NAT Network Overview

Technology Overview

Figure 2 gives an overview of the technology used in this example.

Figure 2: Inline NAT Example Network Overview

Routing Overview

The core is an MPLS core that uses:

  • RSVP as the signaling protocol that sets up end-to-end paths.

  • Label-switched path (LSP) tunnels between the PE routers.

  • EBGP to distribute routes from the CE routers and cloud services to PE routers.

  • Multiprotocol BGP (MP-BGP) to exchange routing information among the PE routers.

  • OSPF to provide reachability information in the core to allow BGP to resolve its next-hops.

Layer 3 VPN

A Layer 3 VPN is a set of sites that share common routing information and whose connectivity is controlled by policies. Layer 3 VPNs allow service providers to use their IP core to provide VPN services to their customers.

The type of Layer 3 VPN in this example is called BGP/MPLS VPN because BGP distributes VPN routing information across the provider’s core, and MPLS forwards VPN traffic across the core to the VPN sites.

There are four sites attached to the Layer 3 VPN in this example—three customer sites and one cloud services site. The Layer 3 VPN has a hub-and-spoke configuration. Routers PE1 and PE2 are the spokes, and they connect to the customer networks. PE3 is the hub, and it connects to cloud services.

Inline NAT

In an MX Series device, you can use inline NAT on MPC line cards. You do not need a dedicated services interface, such as an MS-MPC. Inline NAT is applied in the forwarding plane, similar to the way firewalls and policers are handled in the Junos OS. Inline NAT runs on services inline (si) interfaces that are based on the FPC and PIC.

Because packets do not need to be sent to a services card for processing, the MX Series router can achieve line-rate, low-latency NAT translations. While inline NAT provides better performance than a services card, its functionality is more basic; inline NAT supports only static NAT.

There are two types of inline NAT:

  • Interface-style—an interface-based method, where packets arriving at an interface are sent through a service set. You use interface-style NAT to apply NAT to all traffic on an interface.

  • Next-hop-style—a route-based method that is typically used when routing instances forward packets from a specific network or destined for a specific destination. Routing instances move customer traffic to a service interface where NAT is applied to traffic that matches the route.

Both methods are used in this example.

Requirements

This example uses the following hardware and software components:

  • MX Series routers with Modular Port Concentrators (MPCs)

  • Junos OS Release 17.1R1 or higher

Configuring the Core

Core Overview

The core configuration consists of the physical and loopback interfaces and routing protocols. The routing protocol design includes:

  • RSVP is the signaling protocol that sets up end-to-end paths between PE1 and PE3 and between PE2 and PE3.

  • MPLS LSPs provide tunnels between PE1 and PE3 and between PE2 and PE3.

  • OSPF provides reachability information in the core to allow BGP to resolve its next-hops.

  • MP-BGP supports Layer 3 VPNs by allowing the PE routers to exchange information about routes originating and terminating in the VPNs.

Figure 3: Core Interfaces and Routing

Core Transport Signaling Design Considerations

The PE devices use LSPs between them to send customer traffic over the MPLS core. In this design, we considered the two most common signaling types to set up the end-to-end LSP paths—LDP and RSVP. We are using RSVP as the signaling protocol that sets up end-to-end paths.

In this example, MP-BGP distributes VPN routing information across the provider’s core, and MPLS forwards VPN traffic across the core to remote VPN sites.

Interior Gateway Protocol (IGP) Design Considerations

An IGP exchanges routing information within an autonomous system (AS). We are using OSPF as the IGP for the core network. We chose OSPF because it is easy to configure, does not require a large amount of planning, has flexible summarization and filtering, and can scale to large networks.

Configuring PE1

  1. Configure the core-facing physical interface and the loopback interface.
  2. Configure the core routing protocols on the core-facing interface (xe-0/0/2.0).
    • Enable RSVP.

    • Enable MPLS on the core-facing interface to allow MPLS to create an MPLS label for the interface.

    • Configure an MPLS LSP tunnel from PE1 to PE3.

    • Configure IBGP, and add PE3’s loopback address as a neighbor.

    • Configure OSPF, and add the core-facing interface and the loopback interface to area 0.

    We recommend adding the no-cspf statement to the MPLS configuration to disable constrained-path LSP computation. CSPF is enabled by default, but it is a best practice to turn it off when it is not needed.

  3. Configure the autonomous system.
  4. Configure and apply per flow load balancing.
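The steps above can be sketched in Junos configuration. The core-facing interface (xe-0/0/2.0) comes from the example; the addresses, LSP name, AS number, and policy name are hypothetical placeholders:

```
interfaces {
    xe-0/0/2 {
        unit 0 {
            family inet {
                address 10.10.13.1/30;       # hypothetical core link address
            }
            family mpls;
        }
    }
    lo0 {
        unit 0 {
            family inet {
                address 192.168.0.1/32;      # hypothetical PE1 loopback
            }
        }
    }
}
protocols {
    rsvp {
        interface xe-0/0/2.0;
    }
    mpls {
        no-cspf;                             # disable constrained-path computation
        label-switched-path PE1-to-PE3 {
            to 192.168.0.3;                  # hypothetical PE3 loopback
        }
        interface xe-0/0/2.0;
    }
    bgp {
        group IBGP {
            type internal;
            local-address 192.168.0.1;
            neighbor 192.168.0.3;            # PE3 loopback
        }
    }
    ospf {
        area 0.0.0.0 {
            interface xe-0/0/2.0;
            interface lo0.0 {
                passive;
            }
        }
    }
}
routing-options {
    autonomous-system 65000;                 # hypothetical AS number
    forwarding-table {
        export load-balance-policy;          # apply per-flow load balancing
    }
}
policy-options {
    policy-statement load-balance-policy {
        then {
            load-balance per-packet;         # per-flow load balancing on MX
        }
    }
}
```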

Configuring PE2

  1. Configure the core-facing physical interface and the loopback interface.
  2. Configure the core routing protocols on the core-facing interface (xe-0/0/0.0).
    • Enable RSVP.

    • Configure an MPLS LSP tunnel from PE2 to PE3.

    • Enable MPLS on the core-facing interface to allow MPLS to create an MPLS label for the interface.

    • Configure IBGP, and add PE3’s loopback address as a neighbor.

    • Configure OSPF, and add the core-facing interface and the loopback interface to area 0.

    We recommend adding the no-cspf statement to the MPLS configuration to disable constrained-path LSP computation. CSPF is enabled by default, but it is a best practice to turn it off when it is not needed.

  3. Configure the autonomous system.
  4. Configure and apply per flow load balancing.

Configuring PE3

  1. Configure the core-facing physical interfaces and the loopback interface.
  2. Configure the core routing protocols on the core-facing interfaces (xe-0/0/0.0 and xe-0/0/1.0).
    • Enable RSVP.

    • Enable MPLS on the core-facing interface to allow MPLS to create an MPLS label for the interface.

    • Configure an MPLS LSP tunnel from PE3 to PE1 and from PE3 to PE2.

    • Configure IBGP, and add the PE1 and PE2 loopback addresses as neighbors.

    • Configure OSPF, and add the core-facing interface and the loopback interface to area 0.

    We recommend adding the no-cspf statement to the MPLS configuration to disable constrained-path LSP computation. CSPF is enabled by default, but it is a best practice to turn it off when it is not needed.

  3. Configure the autonomous system.
  4. Configure and apply per flow load balancing.
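PE3 follows the same pattern as the spokes but has two core-facing interfaces, two LSPs, and two IBGP neighbors. A partial sketch of the protocols portion, with hypothetical loopback addresses:

```
protocols {
    rsvp {
        interface xe-0/0/0.0;
        interface xe-0/0/1.0;
    }
    mpls {
        no-cspf;
        label-switched-path PE3-to-PE1 {
            to 192.168.0.1;                  # hypothetical PE1 loopback
        }
        label-switched-path PE3-to-PE2 {
            to 192.168.0.2;                  # hypothetical PE2 loopback
        }
        interface xe-0/0/0.0;
        interface xe-0/0/1.0;
    }
    bgp {
        group IBGP {
            type internal;
            local-address 192.168.0.3;       # hypothetical PE3 loopback
            neighbor 192.168.0.1;
            neighbor 192.168.0.2;
        }
    }
    ospf {
        area 0.0.0.0 {
            interface xe-0/0/0.0;
            interface xe-0/0/1.0;
            interface lo0.0 {
                passive;
            }
        }
    }
}
```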

Verifying Your Configuration

Commit and then verify the core configuration. The examples below show output from PE3.

  1. Verify that your physical interfaces are up.
  2. Verify OSPF neighbors.
  3. Verify BGP peers on PE1 and PE2.
  4. Show that neighbors are established in the IBGP group.
  5. Verify your RSVP sessions.
  6. Verify your MPLS LSP sessions.
  7. Check the status of MPLS label-switched paths (LSPs).
  8. To validate OSPF-level reachability in the core, from PE3, ping PE1.
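The verification steps above map to operational-mode commands like the following (the loopback addresses in the ping are hypothetical):

```
user@PE3> show interfaces terse xe-0/0/*
user@PE3> show ospf neighbor
user@PE3> show bgp summary
user@PE3> show bgp neighbor
user@PE3> show rsvp session
user@PE3> show mpls lsp
user@PE3> ping 192.168.0.1 source 192.168.0.3 count 5
```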

Configuring the Layer 3 VPN on PE Routers

Layer 3 VPN Overview

We are using a Layer 3 VPN to separate and route traffic from each of the customer LANs and cloud services over the core. There are four sites in the VPN—the three customer LANs and cloud services.

To distinguish routes for the customer LANs and cloud services on the PE routers, we are using virtual routing and forwarding (VRF) routing instances. A VRF routing instance has one or more routing tables, a forwarding table, the interfaces that use the forwarding table, and the policies and routing protocols that control what goes into the forwarding table. The VRF tables are populated with routes received from the CE sites and cloud services, and with routes received from other PE routers in the VPN. Because each site has its own routing instance, each site has separate tables, rules, and policies.

This example uses a hub-and-spoke VPN configuration. Routers PE1 and PE2 are the spokes, and they represent the customer networks. PE3 is the hub, and it represents the cloud services. Policies mark traffic as either a hub or spoke, and the marking is used to direct traffic to the correct VRF routing instance.

Figure 4: Layer 3 VPN with Hub and Spokes

Configuring PE1

  1. Configure the physical interfaces to Customer A and Customer B.
  2. Configure policies that we will use as VPN import and export policies in the router’s VRF routing instances.
    • CustA-to-CloudSvcs and CustB-to-CloudSvcs—These are export policies that add the Spoke tag when BGP exports routes that match the policies.

    • from-CloudSvcs—This is an import policy that adds received routes with the Hub tag to the VRF routing table.

  3. Configure VRF routing instances for Customer A and B. These routing instances create the following routing tables on PE1:
    • For Customer A, the VRF table is Cust-A-VRF.inet.0.

    • For Customer B, the VRF table is Cust-B-VRF.inet.0.

    Each routing instance must contain:

    • Instance type of VRF, which creates the VRF routing table for the VPN on the PE router.

    • Interface connected to the customer CE device.

    • Route distinguisher, which must be unique for each routing instance on the PE router. It is used to distinguish the addresses in one VPN from those in another VPN.

    • VRF import policy that adds received routes with the Hub tag to the VRF routing table.

    • VRF export policy that adds the Spoke tag when BGP exports the route.

    • VRF table label that maps the inner label of a packet to a specific VRF table. This allows the examination of the encapsulated IP header. All routes in the VRF configured with this option are advertised with the label allocated per VRF.

  4. Add Layer 3 VPN support to the IBGP group.
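A sketch of the Customer A portion of this configuration (Customer B is analogous). The policy, instance, and table names come from the example; the community values, CE-facing interface, and route distinguisher are hypothetical:

```
policy-options {
    community Hub members target:65000:1;        # hypothetical community value
    community Spoke members target:65000:2;      # hypothetical community value
    policy-statement CustA-to-CloudSvcs {
        term 1 {
            from protocol bgp;
            then {
                community add Spoke;             # tag exported routes as spoke routes
                accept;
            }
        }
    }
    policy-statement from-CloudSvcs {
        term 1 {
            from community Hub;                  # accept only hub-tagged routes
            then accept;
        }
        then reject;
    }
}
routing-instances {
    Cust-A-VRF {
        instance-type vrf;
        interface xe-0/0/0.0;                    # hypothetical interface to CE1
        route-distinguisher 192.168.0.1:101;     # hypothetical; unique per instance
        vrf-import from-CloudSvcs;
        vrf-export CustA-to-CloudSvcs;
        vrf-table-label;                         # per-VRF label allocation
    }
}
protocols {
    bgp {
        group IBGP {
            family inet-vpn unicast;             # Layer 3 VPN support for IBGP
        }
    }
}
```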

Configuring PE2

  1. Configure the interface to Customer C.
  2. Configure policies that we will use as VPN import and export policies in the router’s VRF routing instance.
    • CustC-to-CloudSvcs—This is an export policy that adds the Spoke tag when BGP exports routes that match the policy.

    • from-CloudSvcs—This is an import policy that adds received routes with the Hub tag to the VRF routing table.

  3. Configure a VRF routing instance for Customer C that will create a routing table to forward packets within the VPN.

    For Cust-C, the VRF table is Cust-C.inet.0.

    The routing instance must contain:

    • Route distinguisher, which must be unique for each routing instance on the PE router. It is used to distinguish the addresses in one VPN from those in another VPN.

    • Instance type of VRF, which creates the VRF routing table for the VPN on the PE router.

    • Interface connected to CE3.

    • VRF import policy that adds received routes with the Hub tag to the VRF routing table.

    • VRF export policy that adds the Spoke tag when BGP exports the route.

    • VRF table label that maps the inner label of a packet to a specific VRF table. This allows the examination of the encapsulated IP header. All routes in the VRF configured with this option are advertised with the label allocated per VRF.

  4. Add Layer 3 VPN support to the IBGP group.

Configuring PE3

  1. Configure the physical interface to cloud services.
  2. Configure policies that we will use as VPN import and export policies in the router’s VRF routing instance.
    • to-Cust—This is an export policy that adds the Hub tag when BGP exports routes that match the policy.

    • from-Cust—This is an import policy that adds received routes that have a Spoke tag to the VRF routing table.

  3. Configure a VRF routing instance that is used to create a routing table to forward packets within the VPN.

    For Cloud Services, the VRF table is CloudSvcs.inet.0.

    The routing instance must contain:

    • Route distinguisher, which must be unique for each routing instance on the PE router. It is used to distinguish the addresses in one VPN from those in another VPN.

    • Instance type of VRF, which creates the VRF table on the PE router.

    • Interface connected to cloud services.

    • VRF import policy that adds received routes with a Spoke tag to the VRF routing table.

    • VRF export policy that adds the Hub tag when BGP exports routes that match the policy.

  4. Add Layer 3 VPN support to the IBGP group that was configured previously.
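A sketch of the hub-side configuration. The policy and instance names come from the example; the community values, interface, and route distinguisher are hypothetical. In this sketch, the export policy tags routes with the Hub community so that the spokes' import policies, which match the Hub tag, accept them:

```
policy-options {
    community Hub members target:65000:1;        # hypothetical community value
    community Spoke members target:65000:2;      # hypothetical community value
    policy-statement to-Cust {
        term 1 {
            from protocol bgp;
            then {
                community add Hub;               # tag cloud services routes as hub routes
                accept;
            }
        }
    }
    policy-statement from-Cust {
        term 1 {
            from community Spoke;                # accept spoke-tagged customer routes
            then accept;
        }
        then reject;
    }
}
routing-instances {
    CloudSvcs {
        instance-type vrf;
        interface xe-0/0/2.0;                    # hypothetical interface to cloud services
        route-distinguisher 192.168.0.3:100;     # hypothetical; unique per instance
        vrf-import from-Cust;
        vrf-export to-Cust;
        vrf-table-label;
    }
}
```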

Verifying Your Configuration

To verify your configuration, commit the configuration, and then do the following:

  1. From PE3, show neighbors in the IBGP group. Notice the addition of the bgp.l3vpn.0 and CloudSvcs.inet.0 routing tables.
    Note

    When a PE router receives a route from another PE router, it checks it against the import policy on the IBGP session between the PE routers. If it is accepted, the router places the route into its bgp.l3vpn.0 table. At the same time, the router checks the route against the VRF import policy for the VPN. If it matches, the route distinguisher is removed from the route and the route is placed into the appropriate VRF table (the routing-instance-name.inet.0 table).

  2. From PE3, verify BGP peers on PE1 and PE2. Again, notice the addition of the bgp.l3vpn.0 tables.
  3. From PE1, verify that the Cust-A-VRF routing instance is active.
  4. From PE1, verify the Cust-A-VRF.inet.0 routing table.

Configuring Connections from CE Routers and Cloud Services to PE Routers

Connections from CE Routers and Cloud Services to PE Routers Overview

We are using EBGP for routing between the CE routers and PE1 and PE2 and between cloud services and PE3. The CE routers use a routing policy that matches the address of the customer LAN. You apply this policy as an export policy in the EBGP peer, which causes EBGP to send these addresses to PE routers. The same configuration on the cloud services router causes its routes to be sent to PE3.

Figure 5: Connections to CE Routers and Cloud Services

Configuring the Connection Between CE1 and PE1

In this example, we are using the loopback interface on the CE router to represent the customer LAN. That is why the loopback interface uses IP addresses from the customer LAN.

Configuring CE1

  1. Configure the physical and loopback interfaces for CE1.
  2. Configure a routing policy that matches the address of the Customer A LAN.
  3. Configure an EBGP group for peering between CE1 and PE1. Apply the routing policy that matches the Customer A LAN as an export policy. BGP advertises the address in the policy to PE1, which redistributes the customer LAN routes into the VPN.
  4. Configure the autonomous system for the router.

Configuring PE1

  1. Add an EBGP group to the Cust-A-VRF routing instance for peering between PE1 and CE1.
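A sketch of both sides of the CE1-PE1 peering. The Cust-A-VRF instance name comes from the example; the policy and group names, addresses, and AS numbers are hypothetical:

```
# --- On CE1 ---
interfaces {
    lo0 {
        unit 0 {
            family inet {
                address 172.16.1.1/24;       # hypothetical Customer A LAN address
            }
        }
    }
}
policy-options {
    policy-statement Cust-A-LAN {
        term 1 {
            from {
                protocol direct;
                route-filter 172.16.1.0/24 orlonger;   # match the customer LAN
            }
            then accept;
        }
    }
}
protocols {
    bgp {
        group to-PE1 {
            type external;
            export Cust-A-LAN;               # advertise the customer LAN to PE1
            peer-as 65000;                   # hypothetical provider AS
            neighbor 10.1.1.2;               # hypothetical PE1 link address
        }
    }
}
routing-options {
    autonomous-system 65101;                 # hypothetical Customer A AS
}

# --- On PE1, inside the VRF ---
routing-instances {
    Cust-A-VRF {
        protocols {
            bgp {
                group to-CE1 {
                    type external;
                    peer-as 65101;
                    neighbor 10.1.1.1;       # hypothetical CE1 link address
                }
            }
        }
    }
}
```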

Configuring the Connection Between CE2 and PE1

In this example, we are using the loopback interface on the CE router to represent the customer LAN. That is why the loopback interface uses IP addresses from the customer LAN.

Configuring CE2

  1. Configure the physical and loopback interfaces for CE2.
  2. Configure a routing policy that matches the address of the Customer B LAN.
  3. Configure an EBGP group for peering between CE2 and PE1. Apply the routing policy that matches the Customer B LAN as an export policy. BGP advertises this address to the VPN network, which means that the customer LAN routes are distributed into the VPN.
  4. Configure the autonomous system.

Configuring PE1

  1. Add an EBGP group to the Cust-B-VRF routing instance for peering between PE1 and CE2.

Configuring the Connection Between CE3 and PE2

In this example, we are using the loopback interface on the CE router to represent the customer LAN. That is why the loopback interface uses IP addresses from the customer LAN.

Configuring CE3

  1. Configure the physical and loopback interfaces for CE3.
  2. Configure a routing policy that matches the address of the Customer C LAN.
  3. Configure an EBGP group for peering between CE3 and PE2. Apply the routing policy that matches the Customer C LAN as an export policy. BGP advertises this address to the VPN network, which means that the customer LAN routes are distributed into the VPN.
  4. Configure the autonomous system.

Configuring PE2

  1. Add an EBGP group to the Cust-C routing instance for peering between PE2 and CE3.

Verifying Connections from CE Routers and Cloud Services

  1. Verify that the CE routers’ physical interfaces are up. For example:
  2. Verify connections from PE routers to CE routers. For example:
  3. Verify connection from PE3 to cloud services.
  4. On PE3, verify BGP peers. Cloud services (192.168.1.2) is now a BGP peer.
  5. On PE1, verify that CE1 and CE2 are now BGP peers.
  6. On the CE routers, verify the EBGP group. For example:

Configuring Inline NAT

Inline NAT Design Considerations

Inline NAT provides stateless address translation on MX Series routers that have MPC line cards. The benefit of using an inline service is that you do not need a dedicated services card, and there is almost no impact on forwarding capacity or latency. While inline services generally provide better performance than using a services card, their functionality tends to be more basic. For example, inline NAT supports only static NAT.

We are using source static NAT in this inline NAT example.

Types of Inline NAT

There are two types of inline NAT:

  • Interface-style—an interface-based method, where packets arriving at an interface are sent through a service set. Use interface-style NAT to apply NAT to all traffic that traverses an interface.

    Interface-style NAT is simpler to configure than next-hop style NAT.

  • Next-hop-style—a route-based method that is typically used when routing instances forward packets sourced from a specific network or destined to a specific destination through the inline service.

This example shows how to use both methods of inline NAT as follows:

  • PE1 uses next-hop-style inline NAT for traffic from the Customer A and Customer B networks to cloud services.

  • PE2 uses interface-style inline NAT for traffic from the Customer C network to cloud services.

Configuring Next-Hop Style Inline Source NAT on PE1

This section shows how to configure route-based inline NAT using si- interfaces with next-hop style service-sets.

In this example, the Customer A LAN and Customer B LAN have overlapping subnets. The PE1 router differentiates the traffic according to which si- interface the traffic arrives on.

Figure 6: Next-Hop Style Inline NAT Configuration

The following configuration items are used in this section:

  • Inline service interface—a virtual interface that resides on the Packet Forwarding Engine of the MPC. To access services, traffic flows in and out of the si- (service-inline) interfaces.

  • Service set—defines the service(s) performed and identifies which inline interface feeds traffic into and out of the service set. This section implements next-hop style service sets, which use a route-based method: static routes forward packets destined for a specific destination through the inline service.

  • NAT rule—uses an if-then structure (like firewall filters) to define match conditions and then apply address translation to the matching traffic.

  • NAT pool—a user-defined set of IP addresses that the NAT rule uses for translation.

  • Virtual router (VR) routing instance—includes a default static route that sends customer traffic to the si- interface where NAT is applied.

  • Firewall filters—redirect inbound traffic from Customer A and Customer B to the appropriate VR routing instances.

  • VRF routing instance—the outside si- interfaces are added to the VRF routing instances for Customer A and Customer B so NAT-translated outbound traffic can continue on its intended path across the VPN.

Figure 7 shows the traffic flow through PE1 for inline NAT traffic coming from the Customer A LAN and going to cloud services.

Figure 7: Traffic Flow on PE1 for Next-Hop Style Inline NAT Traffic from Customer A LAN to Cloud Services

To configure next-hop style inline NAT on PE1:

  1. Enable inline services for the relevant FPC slot and PIC slot, and define the amount of bandwidth to dedicate for inline services.

    The FPC and PIC settings map to the si- interface that you configure in the next step.

  2. Configure the service interfaces used for NAT. The inside interfaces are on the CE side of the network. The outside interfaces are on the core side of the network.

    • For traffic from the customer network to the core:

      The inside interface handles ingress traffic from the customer network. NAT is applied to this traffic, and then the egress traffic is sent out the outside interface.

    • For traffic from the core to the customer network:

      The outside interface handles ingress traffic from the core network. NAT is removed from this traffic, and then the egress traffic is sent out the inside interface.

  3. Configure NAT pools.
  4. Configure NAT rules that:

    • Match traffic from the Customer A and Customer B networks.

    • Apply the pool from which to obtain an address.

    • Apply basic NAT44, a type of static NAT that applies to IPv4 traffic.

  5. Configure next-hop style service sets. The service sets associate service interfaces and define services. In this case, traffic sent through the defined inside and outside interfaces will have NAT processing applied.
  6. Create VR routing instances for Customer A traffic and Customer B traffic. These routing instances separate Customer A and B traffic. They include default static routes that send traffic to the inside service interfaces and towards the service sets, where NAT can be applied.
  7. Configure firewall filters that redirect incoming traffic from Customer A and Customer B to the VR routing instances.
  8. Apply the firewall filters to the appropriate interfaces.
  9. Add the outside si- interfaces to the previously configured VRF routing instances. This puts the now-NAT-translated outbound traffic back in its intended VRF routing instance, and it can now be sent across the VPN as usual.
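The steps above can be sketched for Customer A as follows (Customer B is analogous, with its own si- units, pool, rule, VR instance, and filter). The pool address (192.0.2.0/24) and the instance names come from the example; the si-/xe- interface numbering, LAN prefix, and bandwidth value are hypothetical:

```
chassis {
    fpc 0 {
        pic 0 {
            inline-services {
                bandwidth 10g;                       # hypothetical bandwidth reservation
            }
        }
    }
}
interfaces {
    si-0/0/0 {
        unit 1 {
            family inet;
            service-domain inside;                   # CE-facing side of the service
        }
        unit 2 {
            family inet;
            service-domain outside;                  # core-facing side of the service
        }
    }
    xe-0/0/0 {                                       # hypothetical CE1-facing interface
        unit 0 {
            family inet {
                filter {
                    input Cust-A-to-VR;              # redirect inbound Customer A traffic
                }
            }
        }
    }
}
services {
    nat {
        pool Pool-A {
            address 192.0.2.0/24;                    # Customer A translation pool
        }
        rule SRC-NAT-A {
            match-direction input;
            term t1 {
                from {
                    source-address 172.16.1.0/24;    # hypothetical Customer A LAN
                }
                then {
                    translated {
                        source-pool Pool-A;
                        translation-type basic-nat44;   # static IPv4-to-IPv4 NAT
                    }
                }
            }
        }
    }
    service-set NAT-A {
        nat-rules SRC-NAT-A;
        next-hop-service {
            inside-service-interface si-0/0/0.1;
            outside-service-interface si-0/0/0.2;
        }
    }
}
routing-instances {
    Cust-A-VR {
        instance-type virtual-router;
        interface si-0/0/0.1;
        routing-options {
            static {
                route 0.0.0.0/0 next-hop si-0/0/0.1;   # send traffic into the service
            }
        }
    }
    Cust-A-VRF {
        interface si-0/0/0.2;                        # translated traffic rejoins the VRF
    }
}
firewall {
    family inet {
        filter Cust-A-to-VR {
            term t1 {
                then routing-instance Cust-A-VR;
            }
        }
    }
}
```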

Allowing Return Traffic from Cloud Services to Reach Customer LANs

When return traffic from cloud services goes through the services interfaces, NAT addressing is removed, and traffic is sent to the VR routing instances. However, the VR routing instances have no routes to the customer LANs, so we need to add them. To do this, we share routes from the VRF routing instances into the VR routing instances by using RIB groups.

For example, for traffic to the Customer A LAN, we will share routes from the Cust-A-VRF.inet.0 routing table into the Cust-A-VR.inet.0 routing table. Figure 8 shows the traffic flow through PE1 for inline NAT traffic coming from cloud services going to the Customer A LAN.

Figure 8: Traffic Flow on PE1 for Next-Hop Style Inline NAT Traffic from Cloud Services to the Customer A LAN

Before we set up route sharing on PE1, there is only a default static route in the Cust-A-VR.inet.0 routing table:

To share routes from the Cust-A-VRF.inet.0 routing table with the Cust-A-VR.inet.0 routing table:

  1. Create policies that match the routes to be shared.

    • Term 1 matches routes that provide reachability back to the customer LANs.

    • Term 2 matches interface routes that provide reachability to the CE devices.

  2. Create RIB groups for route sharing between tables.

    • With the import-rib statement, list the source routing table to be shared followed by the destination routing table into which the routes will be imported.

    • With the import-policy statement, specify the policy used to define the specific routes that will be shared.

  3. In the VRF routing instances created previously, apply the RIB groups to the desired routes.

    • To import directly connected routes, apply the RIB group under the routing-options hierarchy.

    • To import the customer LAN routes (PE1 receives these routes through the CE-to-PE EBGP peerings), apply the RIB group under the protocols hierarchy.
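A sketch of the route-sharing configuration for Customer A (Customer B is analogous). The table names Cust-A-VRF.inet.0 and Cust-A-VR.inet.0 come from the example; the policy name, RIB-group name, and LAN prefix are hypothetical:

```
policy-options {
    policy-statement Cust-A-VRF-to-VR {
        term 1 {
            from {
                protocol bgp;
                route-filter 172.16.1.0/24 orlonger;   # hypothetical customer LAN routes
            }
            then accept;
        }
        term 2 {
            from protocol direct;                      # interface routes toward CE1
            then accept;
        }
        then reject;
    }
}
routing-options {
    rib-groups {
        Cust-A-Leak {
            import-rib [ Cust-A-VRF.inet.0 Cust-A-VR.inet.0 ];   # source, then destination
            import-policy Cust-A-VRF-to-VR;
        }
    }
}
routing-instances {
    Cust-A-VRF {
        routing-options {
            interface-routes {
                rib-group inet Cust-A-Leak;            # share directly connected routes
            }
        }
        protocols {
            bgp {
                family inet {
                    unicast {
                        rib-group Cust-A-Leak;         # share customer LAN BGP routes
                    }
                }
            }
        }
    }
}
```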

After we set up route sharing, a direct interface route and a BGP route to the Customer A LAN have been added to the Cust-A-VR.inet.0 routing table:

Return traffic can now reach the Customer LANs.

Verifying Next-Hop Style Inline NAT

  1. From CE1, verify connectivity between CE1 and cloud services.
  2. From PE1, show NAT statistics to verify that traffic is being NAT-translated.
  3. From PE1, verify the NAT pools being used to translate addresses for Customers A and B.
  4. From cloud services, verify that the router is receiving the pings from PE1's Customer A pool of NAT-translated source addresses (192.0.2.0/24).

    Run pings again from CE1 to cloud services, and enter the following command on cloud services.

  5. From PE3, show the BGP Layer 3 VPN routing table. This output verifies that NAT translation is happening: 192.0.2.0/24 is address Pool A for Customer A traffic, and 198.51.100.0/24 is address Pool B for Customer B traffic.

Configuring Interface-Style Inline NAT on PE2

For traffic from Customer C, we are using interface-style inline NAT. This section shows how to configure interface-based inline NAT, which uses a simpler configuration than next-hop style NAT.

Figure 9: Interface-Style NAT Configuration

The following configuration items are used in this section:

  • Inline service interface—a virtual interface that resides on the Packet Forwarding Engine of the MPC. To access services, traffic flows in and out of the si- (service-inline) interface.

  • Service set—defines the service(s) performed, and identifies which inline interface(s) will feed traffic into and out of the service set. This section implements interface-style service sets, where packets arriving at an interface are sent through the inline service interface.

  • NAT rule—uses an if-then structure (similar to firewall filters) to define match conditions and then apply address translation to the matching traffic.

  • NAT pool—a user-defined set of IP addresses that the NAT rule uses for translation.

  • VRF routing instance—the si- interface is added to the VRF routing instance for Customer C.

Figure 10 shows the traffic flow on PE2 for traffic sent from the Customer C LAN to cloud services.

Figure 10: Traffic Flow on PE2 for Interface-Style Inline NAT Traffic from Customer C LAN to Cloud Services

Figure 11 shows the traffic flow on PE2 for traffic from cloud services to the Customer C LAN.

Figure 11: Traffic Flow on PE2 for Interface-Style NAT Traffic from Cloud Services to Customer C LAN

To configure interface-style inline NAT on PE2:

  1. Enable inline services for the relevant FPC slot and PIC slot, and define the amount of bandwidth to dedicate for inline services.

    The FPC and PIC settings map to the si- interface that you configure in the next step.

  2. Configure the service interface used for NAT. Interface-style NAT requires only one interface.
  3. Configure a NAT pool.
  4. Configure a NAT rule that:

    • Matches traffic from the Customer C network.

    • Applies the pool from which to obtain an address.

    • Applies basic NAT44, a type of static NAT that applies to IPv4 traffic.

  5. Configure an interface-style service set that associates the service interface to the NAT service. In this case, the NAT service uses the SRC-NAT-C NAT rule.

    Traffic will flow into and out of the si- interface to access the inline NAT service.

  6. Apply the input and output service set to the xe-0/0/2 interface, which is the interface to Customer C. This configuration specifies that all traffic to and from Customer C is redirected through the service set.
  7. Add the si- interface to the Cust-C VRF routing instance.

With this configuration in place, traffic from Customer C enters the interface at PE2 and is redirected through the service interface to the service set for NAT translation. The traffic is then returned to the VRF routing instance and can be sent across the VPN as usual.
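Putting the steps together, the interface-style configuration can be sketched as follows. The SRC-NAT-C rule name, the pool address (203.0.113.0/24), the xe-0/0/2 customer-facing interface, and the Cust-C instance come from the example; the si- interface numbering, LAN prefix, link address, and bandwidth value are hypothetical:

```
chassis {
    fpc 0 {
        pic 0 {
            inline-services {
                bandwidth 10g;                       # hypothetical bandwidth reservation
            }
        }
    }
}
interfaces {
    si-0/0/0 {
        unit 0 {
            family inet;                             # single unit for interface-style NAT
        }
    }
    xe-0/0/2 {                                       # interface to Customer C
        unit 0 {
            family inet {
                service {
                    input {
                        service-set NAT-C;           # all Customer C traffic through NAT
                    }
                    output {
                        service-set NAT-C;
                    }
                }
                address 10.2.2.2/30;                 # hypothetical link address
            }
        }
    }
}
services {
    nat {
        pool Pool-C {
            address 203.0.113.0/24;                  # Customer C translation pool
        }
        rule SRC-NAT-C {
            match-direction input;
            term t1 {
                from {
                    source-address 172.16.3.0/24;    # hypothetical Customer C LAN
                }
                then {
                    translated {
                        source-pool Pool-C;
                        translation-type basic-nat44;   # static IPv4-to-IPv4 NAT
                    }
                }
            }
        }
    }
    service-set NAT-C {
        nat-rules SRC-NAT-C;
        interface-service {
            service-interface si-0/0/0.0;
        }
    }
}
routing-instances {
    Cust-C {
        interface si-0/0/0.0;                        # translated traffic rejoins the VRF
    }
}
```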

Verifying Interface Style Inline NAT

  1. From CE3, verify connectivity between CE3 and cloud services.
  2. From PE2, show the inline NAT statistics to verify that traffic is being NAT-translated.
  3. From PE2, verify that inline NAT is being applied correctly.
  4. From PE2, verify that the NAT translation pool shows in the routing table for the customer C network.
  5. From cloud services, verify that the router is receiving the pings from PE2's pool of NAT-translated source addresses (203.0.113.0/24).

    Run pings again from CE3 to cloud services, and enter the following command on cloud services.

Complete Router Configurations

This section has the complete configuration of each router.

PE1 Configuration

PE2 Configuration

PE3 Configuration

CE1 Configuration

CE2 Configuration

CE3 Configuration