
NFX250 NextGen Overview

The Juniper Networks NFX250 Network Services Platform is a secure, automated, software-driven customer premises equipment (CPE) platform that delivers virtualized network and security services on demand. The NFX250 is part of the Juniper Cloud CPE solution, which leverages Network Functions Virtualization (NFV). It enables service providers to deploy and chain multiple, secure, and high-performance virtualized network functions (VNFs) on a single device.

Figure 1 shows the NFX250 device.

Figure 1: NFX250 Device

The NFX250 is a complete SD-WAN CPE that provides secure routing functionality and a Next-Generation Firewall (NGFW) solution.

The NGFW solution includes advanced security features.

The NFX250 device is suitable for small to midsize businesses and large multinational or distributed enterprises.

Junos OS Release 19.1R1 introduces a reoptimized architecture for NFX250 devices. This architecture enables you to use the Junos Control Plane (JCP) as the single point of management for all the NFX250 components.

Note:

For documentation purposes, NFX250 devices that use this architecture are referred to as NFX250 NextGen devices.

Software Architecture

Figure 2 illustrates the software architecture of the NFX250 NextGen. The architecture is designed to provide a unified control plane that functions as a single management point. Key components in the NFX250 NextGen software include the JCP, JDM, Layer 2 data plane, Layer 3 data plane, and VNFs.

Figure 2: NFX250 NextGen Software Architecture

Key components of the system software include:

  • Linux—The host OS, which functions as the hypervisor.

  • VNF—A VNF is a virtualized implementation of a network device and its functions. In the NFX250 NextGen architecture, Linux functions as the hypervisor, and it creates and runs the VNFs. The VNFs include functions such as firewalls, routers, and WAN accelerators.

    You can connect VNFs together as blocks in a chain to provide networking services.

  • JCP—The Junos Control Plane (JCP), a Junos virtual machine (VM) running on the host OS, Linux. The JCP functions as the single point of management for all the components.

    The JCP supports:

    • Layer 2 to Layer 3 routing services

    • Layer 3 to Layer 4 security services

    • Layer 4 to Layer 7 advanced security services

    In addition, the JCP enables VNF lifecycle management.

  • JDM—The Juniper Device Manager (JDM), an application container that manages VNFs and provides infrastructure services. The JDM functions in the background. Users cannot access the JDM directly.

  • L2 data plane—Manages Layer 2 traffic. The Layer 2 data plane forwards the LAN traffic to the Open vSwitch (OVS) bridge, which acts as the NFV backplane. The Layer 2 data plane is mapped to the virtual FPC0 on the JCP.

  • L3 data plane—Provides data path functions for the Layer 3 to Layer 7 services. The Layer 3 data plane is mapped to the virtual FPC1 on the JCP.

  • Open vSwitch (OVS) bridge—The OVS bridge is a VLAN-aware system bridge that acts as the NFV backplane to which the VNFs, FPC1, and FPC0 connect. Additionally, you can create custom OVS bridges to isolate connectivity between different VNFs.

For the list of supported features, see Feature Explorer.

NFX250 Models

Table 1 lists the NFX250 device models and their specifications. For more information, see the NFX250 Hardware Guide.

Table 1: NFX250 Models and Specifications

Components      NFX250-S1                   NFX250-S2                   NFX250-S1E

CPU             2.0 GHz 6-core Intel CPU    2.0 GHz 6-core Intel CPU    2.0 GHz 6-core Intel CPU

RAM             16 GB                       32 GB                       16 GB

Storage         100 GB SSD                  400 GB SSD                  200 GB SSD

Form Factor     Desktop                     Desktop                     Desktop

Ports (identical on all three models):

  • Eight 10/100/1000BASE-T RJ-45 access ports

  • Two 10/100/1000BASE-T RJ-45 ports that can be used as access ports or uplink ports

  • Two 100/1000BASE-X SFP ports that can be used as uplinks

  • Two 1-Gigabit or 10-Gigabit Ethernet SFP+ uplink ports

  • One 10/100/1000BASE-T RJ-45 management port

  • Console ports (RJ-45 and mini-USB)

  • One USB 2.0 port

Interfaces

The NFX250 NextGen device includes the following network interfaces:

  • Ten 1-Gigabit Ethernet RJ-45 ports and two 1-Gigabit Ethernet network ports that support small form-factor pluggable (SFP) transceivers. The ports follow the naming convention ge-0/0/n, where n ranges from 0 to 11. These ports are used for LAN connectivity.

  • Two 1-Gigabit or 10-Gigabit uplink ports that support small form-factor pluggable plus (SFP+) transceivers. The ports follow the naming convention xe-0/0/n, where the value of n is either 12 or 13. These ports are used as WAN uplink ports.

  • A dedicated management port labeled MGMT (fxp0) functions as the out-of-band management interface. The fxp0 interface is assigned the IP address 192.168.1.1/24.

  • Two static interfaces, sxe-0/0/0 and sxe-0/0/1, which connect the Layer 2 data plane (FPC0) to the OVS backplane.
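For example, a minimal configuration that uses these naming conventions might look as follows (the VLAN name is illustrative, and the management address shown is the factory default):

  user@host# set interfaces ge-0/0/0 unit 0 family ethernet-switching vlan members vlan100
  user@host# set interfaces fxp0 unit 0 family inet address 192.168.1.1/24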

Note:

By default, all the network ports connect to the Layer 2 data plane.

Note:

The NFX250 NextGen devices do not support integrated routing and bridging (IRB) interfaces. The IRB functionality is provided by ge-1/0/0, which is always mapped to the service chaining backplane (OVS). This mapping cannot be changed.

For the list of supported transceivers for your device, see https://apps.juniper.net/hct/product/#prd=NFX250.

Performance Modes

NFX250 NextGen devices offer various operational modes. You can either select the operational mode of the device from a pre-defined list of modes or specify a custom mode.

  • Throughput mode—Provides maximum resources (CPU and memory) for Junos software.

  • Hybrid mode—Provides a balanced distribution of resources between the Junos software and third-party VNFs.

  • Compute mode—Provides minimal resources for Junos software and maximum resources for third-party VNFs.

  • Custom mode—Provides an option to allocate resources to the Layer 3 data plane and the NFV backplane.

Note:

Compute, hybrid, and throughput modes are supported in Junos OS Release 19.2R1 or later. Custom mode is supported in Junos OS Release 21.1R1 or later.

The default mode is throughput in Junos OS Releases prior to 21.4R1. Starting in Junos OS Release 21.4R1, the default mode is compute.

In throughput mode, you must map SR-IOV virtual functions (VFs) to the Layer 3 data plane interfaces on an NFX250 NextGen device. Three SR-IOV VFs are reserved from each NIC (SXE or HSXE) to support a maximum of six Layer 3 data plane interfaces. For example (a representative mapping; the interface and NIC names are illustrative):
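  user@host# set vmhost virtualization-options interfaces ge-1/0/1 mapping interface hsxe0
  user@host# set vmhost virtualization-options interfaces ge-1/0/2 mapping interface hsxe1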

Note:

You cannot create VNFs in throughput mode.

Note:

Starting in Junos OS Release 21.1R1, mapping Layer 3 data plane interfaces to OVS is not supported in throughput mode on NFX250 NextGen devices. If such a mapping is present in a release earlier than Junos OS Release 21.1R1, you must change the mapping before upgrading the device to Junos OS Release 21.1R1 to prevent a configuration commit failure.

In hybrid, compute, and throughput modes, you can map Layer 3 data plane interfaces to either SR-IOV or OVS on an NFX250 NextGen device. For example:

Map Layer 3 data plane interfaces to SR-IOV (a representative configuration; the interface and NIC names are illustrative):
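  user@host# set vmhost virtualization-options interfaces ge-1/0/1 mapping interface hsxe0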

Map Layer 3 data plane interfaces to OVS (in this sketch, defining the interface without a NIC mapping attaches it to the OVS backplane; the exact syntax can vary by release):
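  user@host# set vmhost virtualization-options interfaces ge-1/0/1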

Note:

Starting in Junos OS Release 21.1R1, when your device is in throughput mode, you can map the Layer 3 data plane interfaces only to SR-IOV VFs. When your device is in compute or hybrid modes, you can map the Layer 3 data plane interfaces to either SR-IOV VFs or OVS.

In hybrid or compute mode, you can create VNFs by using the CPUs available in that mode. You can check the CPU availability by using the show vmhost mode command. In addition to its two management interfaces, each VNF can have user interfaces up to the supported maximum. You can attach the VNF interfaces to either OVS or SR-IOV interfaces.

Note:

You cannot attach a single VNF interface to both SR-IOV and OVS. However, you can attach different interfaces of the same VNF to SR-IOV and OVS.

Seven SR-IOV VFs are reserved from each NIC (SXE or HSXE) for creating VNF interfaces, which supports a maximum of 28 SR-IOV VNF interfaces per device. You can view the available free VFs by using the show system visibility network command.
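For example, a sketch of attaching different interfaces of the same VNF to SR-IOV and to OVS (the VNF name, image path, VLAN, and mapping syntax are illustrative and can vary by release):

  user@host# set virtual-network-functions vnf1 image /var/public/vnf1.img
  user@host# set virtual-network-functions vnf1 interfaces eth2 mapping interface hsxe0
  user@host# set virtual-network-functions vnf1 interfaces eth3 mapping vlan members vlan100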

Note:

When the mapping of a particular Layer 3 data plane interface changes between SR-IOV NICs (for example, hsxe0 to hsxe1), or from an hsxe NIC to OVS or vice versa, FPC1 restarts automatically.

To change the current mode, run the request vmhost mode mode-name command. The request vmhost mode ? command lists only the pre-defined modes: hybrid, compute, and throughput.
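For example, to switch the device to compute mode:

  user@host> request vmhost mode compute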

Before switching to a mode, issue the show system visibility cpu and show vmhost mode commands to check the availability of CPUs. When switching between operational modes, ensure that resource and configuration conflicts do not occur.

For example, conflicts occur if you move from compute mode, which supports VNFs, to throughput mode, which does not:

If the Layer 3 data plane is not mapped to SR-IOV, then switching from hybrid or compute mode to throughput mode results in an error.

You can define a custom mode template in Junos configuration by using the following commands:

  1. user@host# set vmhost mode custom custom-mode-name layer-3-infrastructure cpu count count
  2. user@host# set vmhost mode custom custom-mode-name layer-3-infrastructure memory size mem-size
  3. user@host# set vmhost mode custom custom-mode-name nfv-back-plane cpu count count
  4. user@host# set vmhost mode custom custom-mode-name nfv-back-plane memory size mem-size
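For example, the following sketch defines and activates a custom mode named cust1 (the mode name and NFV backplane values are illustrative; the Layer 3 data plane uses MIN, as noted later in this section):

  user@host# set vmhost mode custom cust1 layer-3-infrastructure cpu count MIN
  user@host# set vmhost mode custom cust1 layer-3-infrastructure memory size MIN
  user@host# set vmhost mode custom cust1 nfv-back-plane cpu count 2
  user@host# set vmhost mode custom cust1 nfv-back-plane memory size 4
  user@host> request vmhost mode cust1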

Starting in Junos OS Release 22.1R1, you can optionally configure the CPU quota for the Layer 3 data plane by using the set vmhost mode custom custom-mode-name layer-3-infrastructure cpu colocation quota quota-value command, where quota-value ranges from 1 through 99. If you configure cpu colocation quota, the sum of the CPU quotas of the cpu colocation components must be less than or equal to 100. You must configure cpu count using numeric values, not keywords such as MIN, because MIN can resolve to different values for different components.
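For example, to cap the Layer 3 data plane CPU quota at 40 percent (an illustrative value, using the illustrative mode name cust1):

  user@host# set vmhost mode custom cust1 layer-3-infrastructure cpu colocation quota 40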

The number of CPUs and the specific CPUs (by CPU ID) available for VNF usage in a custom mode is automatically determined based on the cpu count and cpu colocation quota in the custom mode configuration and the internally fixed CPU allocation for other Juniper system components.

The amount of memory, in 1-GB units, available for VNF usage in a custom mode is automatically determined based on the memory size configured for the custom mode and the per-SKU internally fixed memory allocation for other Juniper system components. Note that this number is only an approximate value; the actual maximum memory allocation for VNFs might be less.

If you do not configure the memory size for a VNF, the memory defaults to 1 GB.

You must configure the CPU count for both the NFV backplane and the Layer 3 data plane as whole numbers.

In a custom mode, you must specify the memory for the Layer 3 data plane and the NFV backplane in gigabytes. The memory specified through a custom mode is created and backed by 1-GB huge pages for NFV backplane usage and 2-MB huge pages for Layer 3 data plane usage. We recommend that you configure the NFV backplane memory size as a whole number; the Layer 3 data plane memory can be a decimal value.

You must configure the CPU count and memory for both the Layer 3 data plane and the NFV backplane. The CPU and memory resources for the remaining Junos software infrastructure are determined internally by the device.

The custom mode template supports the keyword MIN, a device-specific pre-defined value for allocating minimal resources.

flex and perf are the custom mode templates that are present in the default Junos configuration.

  • flex mode—Uses the MIN keyword for allocating resources to system components such as the Layer 3 data plane and the NFV backplane. In this mode, the device provides maximum memory and CPUs to third-party VNFs.

    To allocate resources in flex mode:

    1. user@host# set vmhost mode custom flex layer-3-infrastructure cpu count MIN
    2. user@host# set vmhost mode custom flex layer-3-infrastructure memory size MIN
    3. user@host# set vmhost mode custom flex nfv-back-plane cpu count MIN
    4. user@host# set vmhost mode custom flex nfv-back-plane memory size MIN

    In flex mode, you can configure a maximum of:

    • 8 IPsec VPN tunnels

    • 16 logical interfaces (IFLs)

    • 4 physical interfaces (IFDs)

  • perf mode—Another example custom mode template that is available in the default Junos configuration.
Note:

Currently, the Layer 3 data plane supports only MIN in a custom mode for both CPU count and memory size.

When the device is in a custom mode that uses the MIN keyword, only basic firewall features are supported, and you can use the Layer 3 data plane only for IPsec termination.

When you allocate CPUs to the NFV backplane and the Layer 3 data plane, the device allocates full cores. When a full core is allocated to the NFV backplane, both logical CPUs on that hyper-threaded core are allocated to it. However, to get optimal performance, the device disables one of the logical CPUs; the core is still counted as two allocated CPUs. When full cores are not available, the device allocates individual CPUs from different cores.

When allocating CPUs for VNF usage, the device also allocates full cores, and both logical CPUs on each core are enabled. When full cores are not available, the device allocates individual CPUs from different cores.

Note:

The requested CPU count and memory must not exceed the total CPU count and memory available on the system.

When the device is operating in a custom mode, you can make changes to the custom mode configuration. You must reboot the device for the changes to take effect.

Commit checks are performed for basic validation when a custom mode is defined in the configuration and when you change the device mode to a custom mode.

You cannot delete a custom mode configuration when the device is operating in the same mode.

To delete a custom mode configuration when the device is operating in custom mode:

  1. Change the device mode from custom mode to another mode.

  2. Delete the custom mode configuration.
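For example, assuming the device is operating in the illustrative custom mode cust1:

  user@host> request vmhost mode compute
  user@host# delete vmhost mode custom cust1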

If a device in a custom mode is downgraded to an image that does not support custom mode, the default throughput mode is applied on the device.

Note:

Before performing such a downgrade, you must remove all VNF configurations from the device.

When multiple custom modes are configured and the device is in a custom mode other than flex or perf (the custom modes defined in the factory-default Junos configuration), you cannot reset the device to the factory-default configuration. Before you reset such a device to the factory-default Junos configuration, you must change the device mode to one of the pre-defined modes (compute, hybrid, or throughput) or to the flex or perf custom mode.

Core to CPU Mapping on NFX250

The following tables list the CPU to core mappings for the NFX250 models:

NFX250-LS1

Core    0     1     2     3
CPU     0, 4  1, 5  2, 6  3, 7

NFX250-S1 and NFX250-S2

Core    0     1     2     3     4      5
CPU     0, 6  1, 7  2, 8  3, 9  4, 10  5, 11

Benefits and Uses

The NFX250 NextGen provides the following benefits:

  • Highly scalable architecture that supports multiple Juniper VNFs and third-party VNFs on a single device. The modular software architecture provides high performance and scalability for routing, switching, and security enhanced by carrier-class reliability.

  • Integrated security, routing, and switching functionality in a single control plane simplifies management and deployment.

  • A variety of flexible deployments. A distributed services deployment model ensures high availability, performance, and compliance. The device provides an open framework that supports industry standards, protocols, and seamless API integration.

  • Secure boot feature safeguards device credentials, automatically authenticates system integrity, verifies system configuration, and enhances overall platform security.

  • Automated configuration eliminates complex device setup and delivers a plug-and-play experience.

Junos OS Releases Supported on NFX Series Hardware

Table 2 lists the Junos OS releases supported on NFX Series devices.

Note:

Support for Linux bridge mode on NFX250 devices ended in Junos OS Release 18.4.

Note:

Support for nfx-2 software architecture on NFX250 devices ended in Junos OS Release 19.1R1.

Table 2: Supported Junos OS Releases on NFX Series Devices

NFX150
  Supported Junos OS Release: 18.1R1 or later
  Software Architecture: nfx-3
  Software Package:
    jinstall-host-nfx-3-x86-64-<release-number>-secure-signed.tgz
    install-media-host-usb-nfx-3-x86-64-<release-number>-secure.img
  Software Downloads Page: NFX150 Software Download Page

NFX250
  Supported Junos OS Releases: 15.1X53-D45, 15.1X53-D47, 15.1X53-D470, and 15.1X53-D471; 17.2R1 through 19.1R1
  Software Architecture: nfx-2
  Software Package:
    jinstall-host-nfx-2-flex-x86-64-<release-number>-secure-signed.tgz
    install-media-host-usb-nfx-2-flex-x86-64-<release-number>-secure.img
  Software Downloads Page: NFX250 Software Download Page

NFX250
  Supported Junos OS Release: 19.1R1 or later
  Software Architecture: nfx-3
  Software Package:
    jinstall-host-nfx-3-x86-64-<release-number>-secure-signed.tgz
    install-media-host-usb-nfx-3-x86-64-<release-number>-secure.img
  Software Downloads Page: NFX250 Software Download Page

NFX350
  Supported Junos OS Release: 19.4R1 or later
  Software Architecture: nfx-3
  Software Package:
    jinstall-host-nfx-3-x86-64-<release-number>-secure-signed.tgz
    install-media-host-usb-nfx-3-x86-64-<release-number>-secure.img
  Software Downloads Page: NFX350 Software Download Page