Overview of Solution Architectures

Architecture of the Contrail Cloud Implementation in the Centralized Deployment

This section describes the architecture of the components in the Contrail Cloud implementation used in the centralized deployment.

Architecture of the Contrail Cloud Implementation

The centralized deployment uses the Contrail Cloud implementation to support the service provider’s cloud in a network point of presence (POP). The Contrail Cloud implementation consists of the hardware platforms, including the servers, and Contrail OpenStack software. Figure 1 illustrates the Contrail Cloud implementation. The Contrail Service Orchestration (CSO) software is installed on one or more servers in the Contrail Cloud implementation to complete the deployment.

Figure 1: Architecture of Contrail Cloud Implementation

In the Cloud CPE Centralized Deployment Model:

  • The MX Series router provides the gateway to the service provider’s cloud.

  • The EX Series switch provides Ethernet management and Intelligent Platform Management Interface (IPMI) access for all components of the Cloud CPE Centralized Deployment Model. Two interfaces on each server connect to this switch.

  • The QFX Series switch provides data access to all servers.

  • The number of servers depends on the scale of the deployment and the high availability configuration. You must use at least two servers and you can use up to five servers.

  • Each server supports four nodes. The function of the nodes depends on the high availability configuration and the type of POP.
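
For illustration, the following Python sketch captures this scaling rule. It is a minimal sketch under the assumptions stated in the comments; the names and constants are ours and are not part of any CSO or Contrail interface.

```python
# Minimal sketch of the scaling rule described above: at least two
# and at most five servers in this configuration, with four nodes
# per server. (The high availability layouts later in this section
# allow optional servers up to server 7.) The names and constants
# here are illustrative, not part of any CSO or Contrail API.

MIN_SERVERS, MAX_SERVERS = 2, 5
NODES_PER_SERVER = 4

def total_nodes(num_servers):
    if not MIN_SERVERS <= num_servers <= MAX_SERVERS:
        raise ValueError(f"expected {MIN_SERVERS}-{MAX_SERVERS} servers")
    return num_servers * NODES_PER_SERVER

print(total_nodes(3))  # 12 nodes across three servers
```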

Architecture of the Servers

The configuration of the nodes depends on whether the Contrail Cloud implementation is in a regional POP or central POP and on the high availability configuration. Each node is one of the following types:

  • Contrail Service Orchestration node, which hosts the Contrail Service Orchestration software.

  • Contrail controller node, which hosts the Contrail controller and Contrail Analytics.

  • Contrail compute node, which hosts the Contrail OpenStack software and the virtualized network functions (VNFs).

The Contrail Cloud implementation in a central POP contains all three types of node. Figure 2 shows the configuration of the nodes in the Contrail Cloud implementation in the central POP for a deployment that offers neither Contrail nor Contrail Service Orchestration high availability:

  • Server 1 supports one Contrail controller node, two Contrail compute nodes, and one Contrail Service Orchestration node.

  • Server 2 and optional servers 3 through 5 each support four Contrail compute nodes.

Figure 2: Architecture of Servers in the Central POP for a Non-Redundant Installation

Figure 3 shows the configuration of the nodes in the Contrail Cloud implementation in the central POP for a deployment that offers both Contrail and Contrail Service Orchestration high availability:

  • Servers 1, 2, and 3 each support one Contrail controller node for Contrail redundancy.

  • Servers 1 and 2 each support one Contrail Service Orchestration node for Contrail Service Orchestration redundancy.

  • Other nodes on servers 1, 2, and 3 are Contrail compute nodes. Optional servers 4 through 7 also support Contrail compute nodes.

Figure 3: Architecture of Servers in the Central POP for a Redundant Installation

The Contrail Cloud implementation in a regional POP contains only Contrail nodes and not Contrail Service Orchestration nodes. In a deployment that does not offer Contrail high availability, the regional Contrail Cloud implementations support:

  • One Contrail controller node and three Contrail compute nodes on server 1.

  • Four Contrail compute nodes on server 2 and on optional servers 3 through 5.

In a deployment that offers Contrail high availability, the regional Contrail Cloud implementations support:

  • One Contrail controller node for Contrail redundancy on servers 1, 2, and 3.

  • Three Contrail compute nodes on servers 1, 2, and 3.

  • Four Contrail compute nodes on optional servers 4 through 7.
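
The node-placement rules described in this section can be summarized as follows. This Python sketch is illustrative only; the role names and the node_layout function are ours, not part of any Contrail or CSO software, and the layouts simply restate the server descriptions above.

```python
# Illustrative sketch of the node-placement rules described in this
# section. The function and role names are ours; they are not part
# of any Contrail or CSO API.

def node_layout(pop_type, ha, num_servers):
    """Return a list of four node roles per server.

    pop_type: "central" or "regional"
    ha:       True if the deployment offers high availability
    """
    layout = []
    for server in range(1, num_servers + 1):
        if pop_type == "central" and not ha:
            if server == 1:
                roles = ["controller", "compute", "compute", "cso"]
            else:  # servers 2-5
                roles = ["compute"] * 4
        elif pop_type == "central" and ha:
            if server in (1, 2):
                roles = ["controller", "cso", "compute", "compute"]
            elif server == 3:
                roles = ["controller", "compute", "compute", "compute"]
            else:  # optional servers 4-7
                roles = ["compute"] * 4
        elif pop_type == "regional" and not ha:
            if server == 1:
                roles = ["controller", "compute", "compute", "compute"]
            else:  # servers 2-5
                roles = ["compute"] * 4
        else:  # regional POP with HA
            if server in (1, 2, 3):
                roles = ["controller", "compute", "compute", "compute"]
            else:  # optional servers 4-7
                roles = ["compute"] * 4
        layout.append(roles)
    return layout

# Example: central POP, no HA, two servers (the minimum).
for i, roles in enumerate(node_layout("central", False, 2), start=1):
    print(f"server {i}: {roles}")
```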

Architecture of the Contrail Nodes

Each Contrail compute node uses Contrail vRouter over Ubuntu and kernel-based virtual machine (KVM) as a forwarding plane in the Linux kernel. Use of vRouter on the compute node separates the deployment’s forwarding plane from the control plane, which is the SDN controller in Contrail OpenStack on the Contrail controller node. This separation provides uninterrupted forwarding performance and enables the deployment to scale. Figure 4 shows the logical representation of the Contrail controller nodes.

Figure 4: Logical Representation of Contrail Controller Nodes

A Contrail compute node hosts Contrail OpenStack and the VNFs. Contrail OpenStack resides on the physical server and cannot be deployed in a VM. Each VNF resides in its own VM. Figure 5 shows the logical representation of the Contrail compute nodes.

Figure 5: Logical Representation of Contrail Compute Nodes

Architecture of the Hybrid WAN or Distributed CPE Deployment

In the distributed CPE deployment, the Contrail Service Orchestration (CSO) software resides in the service provider’s cloud and is operated by the service provider to provide network services at customer sites.

Figure 6 shows a simple diagram of the distributed CPE solution. The cloud represents the service provider network to which the customer site is connected.

Figure 6: Distributed CPE

As mentioned previously, the distributed Cloud CPE deployment uses on-premises CPE devices to localize the delivery of network services and to provide gateway router (GWR) functionality. In this case, Juniper Networks NFX Series or SRX Series devices act as the CPE devices. When an NFX Series device is the CPE device, the GWR function is provided by a built-in vSRX VNF, and network services are hosted on the NFX device at the customer site. This approach makes the network services highly responsive from the perspective of the customer LAN and eliminates the need for customer traffic to traverse the WAN to reach the services. When an SRX Series device is the managed CPE device, only services native to the SRX Series (firewall, NAT, and UTM) can be provisioned and managed at the customer site by CSO. Other services, such as WAN optimization, must be provisioned and managed separately and cannot be managed by CSO.

The distributed Cloud CPE deployment also uses a provider edge (PE) router in the service provider’s cloud. The PE router acts as an IPsec concentrator, terminating IPsec tunnels from the CPE devices, and provides policy-based access to the service provider’s MPLS network. The PE and CPE devices communicate over one or more WAN links, using MPLS over GRE or IPsec tunnels.

The service provider can allocate service selection and some service management capabilities to the customer through the CSO Administrator Portal. The customer then accesses the allowed service selection and management capabilities through the Customer Portal.

CSO manages the lifecycle of the VNFs hosted on the NFX CPE devices, from creation in Network Designer, through instantiation and deployment, to replacement or retirement.
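
As a rough illustration of that lifecycle, the following Python sketch models the stages named above as a small state machine. The state names and transitions are our reading of the prose, not an API exposed by CSO.

```python
# Illustrative state machine for the VNF lifecycle stages named
# above. The states and transitions mirror the prose; none of these
# names come from the CSO software itself.

ALLOWED = {
    "created":      {"instantiated"},   # designed in Network Designer
    "instantiated": {"deployed"},
    "deployed":     {"replaced", "retired"},
    "replaced":     {"deployed"},       # a replacement VNF is redeployed
    "retired":      set(),              # terminal state
}

class VnfLifecycle:
    def __init__(self):
        self.state = "created"

    def advance(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state

vnf = VnfLifecycle()
for step in ("instantiated", "deployed", "retired"):
    vnf.advance(step)
print(vnf.state)  # retired
```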

Architecture of the SD-WAN Deployment

While the Cloud CPE deployments focus on network service delivery to customer sites, the primary goal of the SD-WAN deployment is the cost-effective, efficient, and secure transfer of data from site to site, through the cloud, over multiple connections. At its most basic, an SD-WAN deployment includes multiple sites, multiple connections between the sites, and a controller.

The SD-WAN solution combines existing branch and WAN connection types (underlay networks) with on-premises CPE devices and hubs, service provider cloud-based hubs and routers, and overlay networking to provide network flexibility, traffic management, and cost-effective routing across whichever connection makes the most sense.

At a customer site, there are often separate MPLS connections and multiple Internet connections over various transports and ISPs. The SD-WAN solution allows you to create software-defined overlay networks that take advantage of the differences between these connection types. For example, business-critical applications can be routed through Layer 3 VPN tunnels over secure MPLS connections that typically include service-level agreements (SLAs), while noncritical applications can be routed through IPsec tunnels overlaid on various Internet connection types. Figure 7 shows a simplified underlay network without the overlays.

Figure 7: SD-WAN Underlay Network

Traffic is routed across one link as the primary link, while other links remain as backups in case of failure, degradation, or congestion. Primary links can be designated for a given type of traffic or for a specific application. The SD-WAN solution monitors traffic across all the link types and automatically reroutes traffic that is in danger of missing an SLA.
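
The following Python sketch illustrates this behavior under simplified assumptions: each traffic class has a designated primary link, measured link metrics are compared against SLA thresholds, and traffic moves to a healthy backup link when the primary is at risk. The metric names and thresholds are illustrative only.

```python
# Hedged sketch of the link-selection behavior described above:
# traffic uses its designated primary link unless measured
# performance puts the SLA at risk, in which case a backup link is
# chosen. Field names and thresholds are illustrative, not CSO's.

from dataclasses import dataclass

@dataclass
class Link:
    name: str
    latency_ms: float   # measured latency
    loss_pct: float     # measured packet loss

def pick_link(primary, backups, max_latency_ms, max_loss_pct):
    """Return the primary link if it meets the SLA, else the best backup."""
    def meets_sla(link):
        return (link.latency_ms <= max_latency_ms
                and link.loss_pct <= max_loss_pct)

    if meets_sla(primary):
        return primary
    healthy = [l for l in backups if meets_sla(l)]
    # Prefer the lowest-latency healthy backup; fall back to the
    # primary link if no backup meets the SLA either.
    return min(healthy, key=lambda l: l.latency_ms) if healthy else primary

mpls = Link("mpls", latency_ms=80.0, loss_pct=2.5)        # degraded
inet = Link("internet-1", latency_ms=35.0, loss_pct=0.1)
print(pick_link(mpls, [inet], max_latency_ms=50.0, max_loss_pct=1.0).name)
# -> internet-1
```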

Starting with Release 4.0.0, CSO supports multiple broadband connection types, including LTE, ADSL, and VDSL. An LTE link can now serve as a primary link and function as a DATA, OAM, or DATA_AND_OAM link; it can also be used for zero-touch provisioning (ZTP) of the devices. LTE support is built into the NFX150, while the NFX250 uses a USB dongle for LTE connectivity. Both the NFX150 and NFX250 use SFP connectors to support ADSL or VDSL. ADSL and VDSL links cannot be used for ZTP.
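
These link-type rules can be expressed as a simple validation table. In the following Python sketch, the LTE roles and the ZTP restrictions come from the text above; treating ADSL and VDSL as data-only links is our assumption for illustration.

```python
# Sketch of the Release 4.0.0 broadband link rules stated above.
# LTE roles and the ZTP restrictions come from the prose; treating
# ADSL/VDSL as data-only links is an assumption for illustration.

LINK_RULES = {
    "lte":  {"roles": {"DATA", "OAM", "DATA_AND_OAM"}, "ztp": True},
    "adsl": {"roles": {"DATA"}, "ztp": False},   # role set assumed
    "vdsl": {"roles": {"DATA"}, "ztp": False},   # role set assumed
}

def validate_link(link_type, role, use_for_ztp):
    rules = LINK_RULES[link_type]
    if role not in rules["roles"]:
        raise ValueError(f"{link_type} cannot serve as a {role} link")
    if use_for_ztp and not rules["ztp"]:
        raise ValueError(f"{link_type} cannot be used for ZTP")

validate_link("lte", "DATA_AND_OAM", use_for_ztp=True)   # allowed
validate_link("adsl", "DATA", use_for_ztp=False)         # allowed
```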

For redundancy and resiliency, the SD-WAN solution supports multihomed hubs in the service provider cloud and dual CPE devices at on-premises sites. In both cases, one device (hub or CPE) is the primary device and the other is a backup; the backup device remains idle as long as the primary device is working.

Network services such as stateful firewall, unified threat management (UTM), and WAN optimization can be created as VNFs and deployed as service chains, in whatever order is needed, on any of the links at each customer site.

Note

When you use an NFX Series device as the CPE device in an SD-WAN deployment, you cannot use a WAN optimization VNF.
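
A service chain is simply an ordered list of VNFs applied to traffic in sequence. The following Python sketch shows a hypothetical validation step that enforces the restriction in the note above; the function and VNF names are ours, not CSO identifiers.

```python
# Illustrative service-chain check based on the note above: VNFs are
# applied in order, and a WAN optimization VNF is not available when
# the CPE is an NFX Series device. All names here are hypothetical.

def validate_chain(cpe_family, chain):
    """chain is an ordered list of VNF names applied in sequence."""
    if cpe_family == "nfx" and "wan-optimization" in chain:
        raise ValueError("WAN optimization VNF is not supported on NFX CPE")
    return chain

# Stateful firewall first, then UTM, in that order:
print(validate_chain("srx", ["stateful-firewall", "utm"]))
```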

The SD-WAN deployment also supports two distinct topologies: hub-and-spoke and full mesh. In the hub-and-spoke topology, customer sites can communicate with one another, provided the proper policies are in place, by going through the hub device in the service provider cloud. In the full-mesh topology, all CPE devices are spoke devices that can communicate directly with one another. Starting with CSO Release 4.0.0, a cloud hub device is required in the full-mesh topology to support secure OAM.
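
The difference between the two topologies can be sketched as tunnel enumeration. In the following illustrative Python fragment (the function and site names are ours), hub-and-spoke builds one tunnel per spoke through the cloud hub, while full mesh connects spoke pairs directly and, per CSO Release 4.0.0, still uses the cloud hub for secure OAM.

```python
# Sketch of the two overlay topologies described above. Given a hub
# and a set of spoke sites, hub-and-spoke builds one tunnel per
# spoke, while full mesh connects every pair of spokes directly;
# the hub is still present for secure OAM as of CSO Release 4.0.0.
# Function and site names are illustrative only.

from itertools import combinations

def tunnels(topology, hub, spokes):
    if topology == "hub-and-spoke":
        return [(hub, s) for s in spokes]
    if topology == "full-mesh":
        data = list(combinations(spokes, 2))   # direct site-to-site
        oam = [(hub, s) for s in spokes]       # OAM still via cloud hub
        return data + oam
    raise ValueError(topology)

print(tunnels("hub-and-spoke", "cloud-hub", ["site-a", "site-b", "site-c"]))
print(tunnels("full-mesh", "cloud-hub", ["site-a", "site-b", "site-c"]))
```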

Release History Table

  • 4.0.0: Starting with Release 4.0.0, CSO supports multiple broadband connection types, including LTE, ADSL, and VDSL. An LTE link can serve as a primary link and function as a DATA, OAM, or DATA_AND_OAM link, and can also be used for zero-touch provisioning (ZTP) of the devices. LTE support is built into the NFX150, while the NFX250 uses a USB dongle for LTE connectivity. Both the NFX150 and NFX250 use SFP connectors to support ADSL or VDSL. ADSL and VDSL links cannot be used for ZTP.

  • 4.0.0: Starting with CSO Release 4.0.0, a cloud hub device is required in the full-mesh topology to support secure OAM.