Configure Service Chaining With PNF
This section shows how to create Layer 3 PNF service chains for traffic between logical routers (inter-LR traffic).
Service Chaining Using a PNF
Service chaining provides security control and enforcement, through a physical firewall, on traffic between virtual networks that are attached to logical routers. By default, virtual networks attached to the same logical router communicate using Layer 3 routing, while virtual networks connected to different logical routers cannot communicate at all. Inserting a firewall (a physical network function) between the logical routers lets virtual networks on separate logical routers communicate, and subjects that traffic to the firewall's security policies.
For CEM service chaining, you insert a physical network function (PNF) device between two logical routers on a border spine or border leaf device. The PNF device provides Layer 3 connectivity between the logical routers. Only Juniper SRX Series Services Gateways are supported as PNF devices.
Figure 1 shows a logical view of service chaining. VLANs provide connectivity between the logical routers and the PNF device. EBGP advertises routes between the logical routers and the PNF.
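In other words, each logical router-to-PNF connection is a VLAN-tagged Layer 3 subinterface with an EBGP session running across it. The following minimal sketch shows what one such leg looks like on the PNF in Junos set-command form; the interface, VLAN ID, addresses, and ASN here are hypothetical placeholders (the values used in this example appear later in this topic):

# One service-chain leg; all values below are hypothetical placeholders.
set interfaces xe-0/0/1 vlan-tagging
set interfaces xe-0/0/1 unit 100 vlan-id 100
set interfaces xe-0/0/1 unit 100 family inet address 192.0.2.2/29
set protocols bgp group lr-left type external
set protocols bgp group lr-left peer-as 65001
set protocols bgp group lr-left neighbor 192.0.2.1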

In a topology that uses border leaf devices, attach the PNF to the border leaf devices as shown in Figure 2.

Service Chaining Configuration Overview
In this example, we configure service chaining in the topology shown in Figure 3 to provide inter-LR routing between the blue and green networks. SRX4k and SRX5k Services Gateways are supported as PNFs; our PNF is an SRX5400 Services Gateway.

The virtual networks shown in Figure 3 have already been configured; see Configure Virtual Networks for Multi-tenant Service Operations. We are adding PNF service chaining so that devices on each network can communicate with each other, as shown by the purple line.
To configure service chaining for inter-LR traffic:
- Onboard an SRX device as the PNF device connected to an existing fabric—you can connect the PNF device to a border spine or a border leaf.
- Assign the PNF service chaining device role to the PNF device and to the border spine or border leaf devices that connect to it.
- Connect the PNF to the fabric using a PNF service template.
- Connect the right and left LRs by configuring VLANs and EBGP peering between the PNF and the LRs using a PNF service instance.
Onboard an SRX Services Gateway as the PNF Device
This section shows how to use Contrail Command to integrate an SRX device into our data center fabric to serve as a PNF device.
This configuration assumes that you have already created your fabric. You must use the Brownfield Wizard to onboard the PNF device. You can’t onboard a PNF using the Greenfield Wizard.
SRX clusters are not supported for PNF service chaining.
To selectively onboard an SRX Series device as a PNF onto an existing fabric:
- Select Fabrics, and then select the fabric to which you want to add the SRX Series gateway.
- Select Action > Brownfield Wizard.
- On the Create Fabric screen, configure the Management subnet, then select Additional Configuration and enter the PNF ServiceChain subnets.
In this example, we are assigning 10.102.70.215/32 as the Management subnet and 10.200.0.0/24 as the PNF ServiceChain subnet. The Management subnet is used to search for the device. The PNF ServiceChain subnet is used to establish the EBGP session between the PNF device and the spine.
Name: DC1
Overlay ASN: 64532
Node Profile: Select the SRX device
VLAN-ID Fabric-Wide Significance: Select the check box
Management subnets (used to search for the device): 10.102.70.215/32
PNF ServiceChain subnets (used to establish the EBGP session between the PNF device and the spine): 10.200.0.0/24
- Click Next, and then click Finish.
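Contrail Command carves the per-link addressing for the PNF legs out of the PNF ServiceChain subnet. In this example, the 10.200.0.0/24 block yields the two /29 link subnets that show up in the verification output at the end of this topic (the exact allocation is chosen by Contrail Command):

10.200.0.0/24    PNF ServiceChain subnet
  10.200.0.0/29  left link between the spine and the PNF
  10.200.0.8/29  right link between the spine and the PNF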
Assign Device Roles for Spines or Border Leaf Devices
In this procedure, we assign the PNF service chaining role to the spine or border leaf devices that connect to the PNF.
To assign roles:
- On the Fabric Devices summary screen, select Action > Reconfigure Roles.
- Next to each spine or border leaf device that connects to the PNF, select Assign Roles.
- Assign the PNF-Servicechain role and click Assign.
Create a PNF Service Template
The service template provides Contrail Command with information about how the PNF device attaches to the spine or border leaf device. In our example, we are using the following interface numbers:
PNF left interface xe-2/2/1, connected to DC2-Spine1 interface xe-0/0/34:0
PNF right interface xe-2/2/2, connected to DC2-Spine2 interface xe-0/0/34:0
To create a PNF service template:
- Click Infrastructure > Services > Catalog.
The VNF Service Instances page is displayed.
- Click the PNF tab.
The Create PNF Service Template page is displayed.
- Click Create.
- Select Instance (with Template) from the list that appears.
The Create PNF Service Instance page is displayed.
- Enter the following information in the PNF Service Template pane and click Create.
Name: PNF
PNF Device: DC2-PNF
PNF Left Interface: xe-2/2/1
PNF Left Fabric: DC2
PNF Left Attachment Points (attachment points on the spine or border leaf): Specify how the spine or border leaf device attaches to the left interface of the PNF device. Physical Router—DC2-Spine1; Left Interface—xe-0/0/34:0
PNF Right Interface: xe-2/2/2
PNF Right Fabric: DC2
PNF Right Attachment Points (attachment points on the spine or border leaf): Specify how the spine or border leaf device attaches to the right interface of the PNF device. Physical Router—DC2-Spine2; Right Interface—xe-0/0/34:0
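Before you create the service instance, you can optionally verify that the physical cabling matches the attachment points entered above. Assuming LLDP is enabled on the SRX device and on the spines, the standard Junos LLDP command lists the neighbor device and port for each PNF interface; the output (not shown) should report DC2-Spine1 on xe-2/2/1 and DC2-Spine2 on xe-2/2/2.

user@DC2-PNF> show lldp neighbors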
Create a PNF Service Instance
A PNF service instance defines how the logical routers are interconnected and how BGP reachability is exchanged between the PNF and the logical routers. The configuration includes the VLANs that are created between the PNF and the logical routers, along with EBGP peering.

To create a PNF Service Instance:
- Navigate to Services > Deployments, and select the PNF tab.
The PNF Service Instances screen displays.
- Select Create Instance.
- Enter the following information in the PNF Service Instance pane and click Create.
Name: Spine-to-PNF
Service Template: PNF template
PNF eBGP ASN: 65231
Left Tenant Logical Router: LR1
Left BGP Peer ASN: 64512
Left Service VLAN (VLAN between the PNF and LR1): 201
Right Tenant Logical Router: LR2
Right BGP Peer ASN: 64512
Right Service VLAN (VLAN between the PNF and LR2): 202
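When you click Create, Contrail Command generates the service-chain configuration and pushes it to the PNF and to the spine or border leaf devices. The following set-command sketch approximates the PNF side of that configuration. It is illustrative only: the VLAN IDs and ASNs come from the table above, the routing-instance name and EBGP neighbor addresses come from the verification output in the next section, and the PNF-side interface addresses are assumptions (hop 2 of the traceroute below suggests 10.200.0.6 for the left leg).

# Annotations (#) are for readability and are not CLI input.
# Left leg toward LR1 over service VLAN 201 (10.200.0.6/29 is an assumed address):
set interfaces xe-2/2/1 vlan-tagging
set interfaces xe-2/2/1 unit 201 vlan-id 201
set interfaces xe-2/2/1 unit 201 family inet address 10.200.0.6/29
# Right leg toward LR2 over service VLAN 202 (10.200.0.14/29 is an assumed address):
set interfaces xe-2/2/2 vlan-tagging
set interfaces xe-2/2/2 unit 202 vlan-id 202
set interfaces xe-2/2/2 unit 202 family inet address 10.200.0.14/29
# Service-chain routing instance named as seen in the verification output:
set routing-instances Spine-to-PNF_left_right instance-type virtual-router
set routing-instances Spine-to-PNF_left_right interface xe-2/2/1.201
set routing-instances Spine-to-PNF_left_right interface xe-2/2/2.202
# EBGP: PNF local AS 65231 peering with the LR peer ASN 64512 on each leg:
set routing-instances Spine-to-PNF_left_right protocols bgp group left type external
set routing-instances Spine-to-PNF_left_right protocols bgp group left local-as 65231
set routing-instances Spine-to-PNF_left_right protocols bgp group left peer-as 64512
set routing-instances Spine-to-PNF_left_right protocols bgp group left neighbor 10.200.0.5
set routing-instances Spine-to-PNF_left_right protocols bgp group right type external
set routing-instances Spine-to-PNF_left_right protocols bgp group right local-as 65231
set routing-instances Spine-to-PNF_left_right protocols bgp group right peer-as 64512
set routing-instances Spine-to-PNF_left_right protocols bgp group right neighbor 10.200.0.13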
Verify Service Chaining
- To check that inter-LR traffic is working, ping from a server on one virtual network to a server on another virtual network.
Here we are running ping from BMS1 to BMS4.
BMS1# ping 10.2.4.101 -c 5
PING 10.2.4.101 (10.2.4.101) 56(84) bytes of data.
64 bytes from 10.2.4.101: icmp_seq=1 ttl=61 time=1.23 ms
64 bytes from 10.2.4.101: icmp_seq=2 ttl=61 time=0.909 ms
64 bytes from 10.2.4.101: icmp_seq=3 ttl=61 time=1.06 ms
64 bytes from 10.2.4.101: icmp_seq=4 ttl=61 time=0.994 ms
64 bytes from 10.2.4.101: icmp_seq=5 ttl=61 time=1.04 ms

--- 10.2.4.101 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4003ms
rtt min/avg/max/mdev = 0.909/1.049/1.233/0.114 ms
- Run traceroute from BMS1 to BMS4.
traceroute 10.2.4.101
traceroute to 10.2.4.101 (10.2.4.101), 30 hops max, 60 byte packets
 1  10.2.1.5 (10.2.1.5)  0.890 ms  1.291 ms  1.530 ms
 2  10.200.0.6 (10.200.0.6)  0.474 ms  0.574 ms  0.449 ms
 3  10.200.0.13 (10.200.0.13)  1.651 ms  1.339 ms  1.638 ms
 4  10.2.4.101 (10.2.4.101)  0.895 ms !X  0.863 ms !X  0.860 ms !X
- On the PNF, show that BGP peers are up.
host@DC2-PNF> show bgp summary
Threading mode: BGP I/O
Groups: 2 Peers: 2 Down peers: 0
Peer                 AS      InPkt     OutPkt    OutQ   Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
10.200.0.5        65200      25580      26092       0       0 1w1d 4:27:30 Establ
  Spine-to-PNF_left_right.inet.0: 2/3/3/0
10.200.0.13       65200      25579      26091       0       0 1w1d 4:27:30 Establ
  Spine-to-PNF_left_right.inet.0: 2/3/3/0
- Show the BGP routing information being advertised to 10.200.0.5 and 10.200.0.13.
user@DC2-PNF> show route advertising-protocol bgp 10.200.0.5

Spine-to-PNF_left_right.inet.0: 13 destinations, 15 routes (13 active, 0 holddown, 0 hidden)
  Prefix                  Nexthop              MED     Lclpref    AS path
* 0.0.0.0/0               Self                                    I
* 10.0.2.252/32           Self                                    I

user@DC2-PNF> show route advertising-protocol bgp 10.200.0.13

Spine-to-PNF_left_right.inet.0: 13 destinations, 15 routes (13 active, 0 holddown, 0 hidden)
  Prefix                  Nexthop              MED     Lclpref    AS path
* 0.0.0.0/0               Self                                    I
* 10.0.2.252/32           Self                                    I
- Display the routing information received from 10.200.0.5 and 10.200.0.13.
root@DC2-PNF> show route receive-protocol bgp 10.200.0.5

inet.0: 5 destinations, 5 routes (4 active, 0 holddown, 1 hidden)

Spine-to-PNF_left_right.inet.0: 13 destinations, 15 routes (13 active, 0 holddown, 0 hidden)
  Prefix                  Nexthop              MED     Lclpref    AS path
* 10.2.1.0/24             10.200.0.5                              65200 I
* 10.2.3.0/24             10.200.0.5                              65200 I
  10.200.0.0/29           10.200.0.5                              65200 I

Spine-to-PNF_left_right.inet.1: 3 destinations, 3 routes (3 active, 0 holddown, 0 hidden)

inet6.0: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)

Spine-to-PNF_left_right.inet6.0: 2 destinations, 3 routes (2 active, 0 holddown, 0 hidden)

Spine-to-PNF_left_right.inet6.1: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)

root@DC2-PNF> show route receive-protocol bgp 10.200.0.13

inet.0: 5 destinations, 5 routes (4 active, 0 holddown, 1 hidden)

Spine-to-PNF_left_right.inet.0: 13 destinations, 15 routes (13 active, 0 holddown, 0 hidden)
  Prefix                  Nexthop              MED     Lclpref    AS path
* 10.2.2.0/24             10.200.0.13                             65200 I
* 10.2.4.0/24             10.200.0.13                             65200 I
  10.200.0.8/29           10.200.0.13                             65200 I

Spine-to-PNF_left_right.inet.1: 3 destinations, 3 routes (3 active, 0 holddown, 0 hidden)

inet6.0: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)

Spine-to-PNF_left_right.inet6.0: 2 destinations, 3 routes (2 active, 0 holddown, 0 hidden)

Spine-to-PNF_left_right.inet6.1: 1 destinations, 1 routes (1 active, 0 holddown, 0 hidden)
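As a further check, you can inspect the service-chain routing table directly and, if the SRX device is running in flow mode, confirm that the firewall is creating sessions for the inter-LR traffic. Both of the following are standard Junos commands; their output is not shown in this example.

user@DC2-PNF> show route table Spine-to-PNF_left_right.inet.0
user@DC2-PNF> show security flow session destination-prefix 10.2.4.101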