    New and Changed Features

    • Support for customizing applications and SLA policies for on-premises sites—You can now use Customer Portal to customize applications and service-level agreement (SLA) policies for on-premises sites.
    • Monitoring applications on SD-WAN links—You can now use Administration Portal to monitor information about applications on SD-WAN links and on-premises sites in an SD-WAN configuration.
      • To view the applications on SD-WAN links in the network, select Monitor > Overview to view the geographical map that displays POPs, sites, and connections. Click an SD-WAN link on the map to view details about the applications on the link.
      • To view summary statistics for the applications on an SD-WAN link, select Tenants > Tenant Name > Tenant Applications.
      • To view more detailed statistics about the applications on an SD-WAN link, select Tenants > Tenant Name > Site Name.
    • Support for Riverbed Steelhead VNF on NFX250 devices—You can now use the Riverbed Steelhead application as a VNF for WAN optimization on an NFX250 device.
    • New port number for Administration Portal—The port number for Administration Portal has changed from 81 to 443. When you access Administration Portal, you use the URL https://central-IP-Address, where central-IP-Address is the IP address of the VM that hosts the microservices for the central POP. For example: https://192.0.2.1
    • Support for 2000 NFX250 devices for a Contrail Service Orchestration installation—Each Contrail Service Orchestration installation can support up to 2000 NFX250 devices, with one NFX250 device at a site.
    • Support for Microservices High Availability for Distributed Deployments—Microservices high availability (HA) is now supported for distributed deployments. Previously, microservices HA was supported only for centralized deployments.
    • Enhanced onboarding of VNFs and design capabilities for network services—The Designer Tools suite now includes three components for creating network services based on Juniper Networks and third-party VNFs:
      • Configuration Designer, which you use to create configuration templates that determine how VNFs are implemented in a deployment.
      • Resource Designer, which you use to create VNF packages that specify the network functions, function chains, performance, and a configuration template.
      • Network Service Designer, which you use to create network services packages based on VNF packages. This component was available in previous releases.
    • Addition of license cost as a performance goal in Network Service Designer—You can now specify the cost of a VNF license as a performance goal when you create a network service with the Designer Tools.
    • Support for enabling VNF recovery—You can now specify whether to enable automatic recovery for VNFs in a network service instance in a centralized deployment. You can enable this feature through Administration Portal or the API. In previous releases, automatic recovery was permanently enabled for all VNFs. Disabling automatic recovery for a VNF allows you to quickly investigate a network problem or a problem with the VNF itself. Enabling automatic recovery increases the resiliency and automation of the implementation.
    • Support for NFX250-LS1 model—You can now deliver network services on an NFX250-LS1 Network Services Platform in addition to the NFX250-S1 and NFX250-S2 models supported previously.
    • Streamlined activation process for CPE devices—You can now activate NFX250 and SRX CPE devices in Customer Portal. NFX250 devices require a code for activation; however, SRX Series devices do not.

      Previously, you activated NFX250 devices by entering the activation code through the NFX console and SRX Series devices by copying the configuration from Customer Portal and pasting it into the SRX Series console.

    • Support for port-forwarding on NFX250 devices—Port-forwarding is now enabled for all NFX250 device templates. Port-forwarding enables Contrail Service Orchestration to manage an NFX250 device through a single IP address.
      • The NFX_deployment_option_4 and NFX_Basic_SDWAN_CPE templates offer device-initiated connections (outbound SSH) with port-forwarding capability.
      • The NFX_deployment_option_1 template offers port-forwarding with the connection initiated by Contrail Service Orchestration.
      • The NFX_Internet_Managed_CPE template uses IP connectivity without IPsec.
    • Support for configuring device templates in Administration Portal—You can now create additional device templates and modify existing device templates from the Administration Portal. Previously, you could use only the device templates installed with Contrail Service Orchestration and you could not modify them. Log in to Administration Portal and click Resources > Device Templates to access the following options:
      • Clone—Clone an existing device template with your preferred configuration and customize it as needed.
      • Edit—Modify the routing configuration and LAN configuration for an existing device template.
      • Import—Import a new device template in JSON format from your local machine.

    Node Servers and Servers Tested in the Cloud CPE Solution

    The Cloud CPE solution uses commercial off-the-shelf (COTS) node servers or servers for both the centralized and distributed deployments for the following functions:

    • Contrail Service Orchestration central and regional servers
    • Contrail Analytics servers
    • Contrail Cloud Platform in the centralized deployment

    Table 2 lists the node servers and servers that have been tested for these functions in the Cloud CPE solution. You should use these specific node servers or servers for the Cloud CPE solution.

    Table 2: COTS Node Servers and Servers Tested in the Cloud CPE Solution

    Option | Vendor | Model | Type
    1 | QuantaPlex | T41S-2U 4-Node server | Multinode server accepting 4 nodes
    2 | Supermicro | SuperServer Model SYS-2028TPHC1TR-OTO-4 | Multinode server accepting 4 nodes
    3 | Dell | PowerEdge R420 rack server | 1U rack-mounted server

    Software Tested for COTS Servers

    Table 3 shows the software that has been tested for the Cloud CPE solution. You must use these specific versions of the software when you implement the Cloud CPE solution.

    Table 3: Software Tested for the COTS Nodes and Servers

    Description | Version
    Operating system for all COTS nodes and servers | Ubuntu 14.04.5 LTS
    Operating system for VMs on Contrail Service Orchestration servers | Ubuntu 14.04.5 LTS
    Hypervisor on Contrail Service Orchestration servers | Centralized deployment: Contrail Cloud Platform Release 3.0.2 or VMware ESXi Version 5.5.0. Distributed deployment: KVM provided by the Ubuntu operating system on the server, or VMware ESXi Version 5.5.0
    Additional software for Contrail Service Orchestration servers | Secure File Transfer Protocol (SFTP)
    Software-defined networking (SDN) for a centralized deployment | Contrail Cloud Platform Release 3.0.2 with Heat v2 APIs
    Contrail Analytics | Contrail Release 4.0

    Network Devices and Software Tested for the Contrail Cloud Platform (Centralized Deployment)

    The Contrail Cloud Platform has been tested with:

    • The network devices described in Table 4.
    • The software described in Table 5.

      You must use these specific versions of the software for the Cloud CPE Solution 3.0.1.

    Table 4: Network Devices Tested for the Contrail Cloud Platform

    Function | Device | Model | Quantity
    SDN gateway router | Juniper Networks MX Series 3D Universal Edge Router | MX80-48T router with two 10-Gigabit Ethernet (GE) XFP optics | 1
    Management switch | Juniper Networks EX Series Ethernet Switch | EX3300-48T switch with 48 10/100/1000 GE interfaces and 4 built-in 10-GE SFP transceiver interfaces | 1
    Data switch | Juniper Networks QFX Series Switch | QFX5100-48S-AFI switch with 48 SFP+ transceiver interfaces and 6 QSFP+ transceiver interfaces | 1

    Table 5: Software Tested in the Cloud CPE Centralized Deployment

    Function | Software and Version
    Operating system for MX Series router | Junos OS Release 14.2R3
    Operating system for EX Series switch | Junos OS Release 12.3R10
    Operating system for QFX Series switch | Junos OS Release 13.2X51-D38
    Hypervisor on Contrail Service Orchestration servers | Contrail Cloud Platform Release 3.0.2 or VMware ESXi Version 5.5.0
    Element management system software | EMS microservice; Junos Space Network Management Platform Release 15.1R1 (for VNFs that require this product)
    Software-defined networking (SDN) for a centralized deployment | Contrail Cloud Platform Release 3.0.2
    Virtualized infrastructure manager (VIM) and virtual machine (VM) orchestration | OpenStack Liberty or Kilo
    Authentication and authorization | OpenStack Liberty or Kilo
    Network Functions Virtualization (NFV) | Contrail Service Orchestration Release 3.0.1
    Contrail Analytics | Contrail Release 4.0

    Network Devices and Software Tested for Use with CPE Devices (Distributed Deployment)

    The distributed deployment has been tested with:

    • The network devices described in Table 6.
    • The software described in Table 7.

      You must use these specific versions of the software when you implement the distributed deployment.

    Table 6: Network Devices Tested for the Distributed Deployment

    Function | Device | Model | Quantity
    PE router and IPsec concentrator | Juniper Networks MX Series 3D Universal Edge Router | MX960, MX480, or MX240 router with MS-MPC line card; MX80 or MX104 router with MS-MIC line card; other MX Series routers with an MS-MPC or MS-MIC are also supported | 1 per POP
    SDN gateway for SD-WAN edge | Juniper Networks SRX Series Services Gateway | SRX4200 Services Gateway | 1 per POP
    CPE device | NFX250 Network Services Platform; SRX Series Services Gateway; vSRX on an x86 server | NFX250-LS1, NFX250-S1, or NFX250-S2 device; SRX300, SRX320, SRX340, or SRX345 Services Gateway; vSRX 15.1X49-D100 | 1 per customer site

    Table 7: Software Tested in the Distributed Deployment

    Function | Software and Version
    Hypervisor on Contrail Service Orchestration servers | KVM provided by the Ubuntu operating system on the server, or VMware ESXi Version 5.5.0
    Authentication and authorization | OpenStack Mitaka
    Network Functions Virtualization (NFV) | Contrail Service Orchestration Release 3.0.1
    Contrail Analytics | Contrail Release 4.0
    NFX software | Junos OS Release 15.1X53-D47
    Routing and security for NFX250 devices | vSRX KVM Appliance 15.1X49-D100
    Operating system for vSRX used as a CPE device on an x86 server | vSRX KVM Appliance 15.1X49-D100
    Operating system for SRX Series Services Gateway used as a CPE device | Junos OS Release 15.1X49-D100
    Operating system for MX Series router used as PE router | Junos OS Release 16.1R3.00
    Operating system for SRX Series Services Gateway used as an SDN WAN gateway | Junos OS Release 15.1X49-D100

    Minimum Hardware Requirements for the Cloud CPE Solution

    Table 2 lists the makes and models of node servers and servers that you can use in the Cloud CPE solution. When you obtain node servers and servers for the Cloud CPE Solution, we recommend that you:

    • Select hardware that was manufactured within the last year.
    • Ensure that you have active support contracts for servers so that you can upgrade to the latest firmware and BIOS versions.

    The number of node servers and servers that you require depends on whether you are installing a demo or a production environment.

    Table 8 shows the required hardware specifications for node servers and servers in a demo environment and in a trial HA environment.

    Table 8: Demo Environment or Trial HA Environment

    Function | Demo Environment | Trial HA Environment

    Node or Server Specification:
    Storage | Greater than 1 TB of Serial Advanced Technology Attachment (SATA), Serial Attached SCSI (SAS), or solid-state drive (SSD) storage | Greater than 1 TB of SATA, SAS, or SSD storage
    CPU | One 64-bit dual processor, type Intel Sandy Bridge, such as Intel Xeon E5-2670v3 @ 2.5 GHz or higher specification | One 64-bit dual processor, type Intel Sandy Bridge, such as Intel Xeon E5-2670v3 @ 2.5 GHz or higher specification
    Network interface | One Gigabit Ethernet (GE) or 10 GE interface | One GE or 10 GE interface

    Contrail Service Orchestration Servers (includes Contrail Analytics in a VM):
    Number of nodes or servers | 1 | 3
    vCPUs | 48 | 48
    RAM | 128 GB | 128 GB

    Contrail Cloud Platform for a Centralized Deployment:
    Number of nodes or servers | 1 | 4–8 (3 nodes for Contrail controller and analytics; 1–4 Contrail compute nodes)
    vCPUs | 16 | 48
    RAM | 64 GB | 256 GB

    Table 9 shows the required hardware specifications for node servers and servers in a production environment.

    Table 9: Production Environment

    Server Function | Values

    Node or Server Specification:
    Storage | Greater than 1 TB of SATA, SAS, or SSD storage
    CPU | One 64-bit dual processor, type Intel Sandy Bridge, such as Intel Xeon E5-2670v3 @ 2.5 GHz or higher specification
    Network interface | One GE or 10 GE interface

    Contrail Service Orchestration Servers:
    Number of nodes or servers for a non-HA environment | 2 (1 central server and 1 regional server)
    Number of nodes or servers for an HA environment | 6 (3 central servers and 3 regional servers)
    vCPUs | 48
    RAM | 256 GB

    Contrail Analytics Server for a Distributed Deployment:
    Number of nodes or servers | 1
    vCPUs | 48
    RAM | 256 GB

    Contrail Cloud Platform for a Centralized Deployment:
    Number of nodes or servers | 4–28 (3 nodes for Contrail controller and analytics; 1–25 Contrail compute nodes)
    vCPUs per node or server | 48
    RAM per node or server | 256 GB

    Software and VM Requirements

    You must use the software versions that were tested in the Cloud CPE solution. This section shows the VMs required for each type of environment.

    Table 10 shows complete details about the VMs required for a demo environment. HA is not included with the demo environment.

    Table 10: Details for VMs for a Demo Environment

    Name of VM | Components That Installer Places in VM | Resources Required | Ports to Open
    csp-installer-vm | | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-central-infravm | Third-party applications used as infrastructure services | 4 vCPUs, 32 GB RAM, 200 GB hard disk storage | See Table 14.
    csp-central-msvm | All microservices, including GUI applications | 4 vCPUs, 32 GB RAM, 200 GB hard disk storage | See Table 14.
    csp-regional-infravm | Third-party applications used as infrastructure services | 4 vCPUs, 32 GB RAM, 200 GB hard disk storage | See Table 14.
    csp-regional-msvm | All microservices, including GUI applications | 4 vCPUs, 32 GB RAM, 200 GB hard disk storage | See Table 14.
    csp-regional-fmpmlb | Load balancer for device to Fault Management Performance Management (FMPM) microservice connectivity | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-space-vm | Junos Space Virtual Appliance and database; required only if you deploy virtualized network functions (VNFs) that use this EMS | 4 vCPUs, 32 GB RAM, 200 GB hard disk storage | See Table 14.
    csp-contrail-analytics-vm | Contrail Analytics for a distributed deployment (for a centralized deployment, you specify use of Contrail Analytics in the Contrail Cloud Platform) | 4 vCPUs, 32 GB RAM, 200 GB hard disk storage | See Table 14.
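As a quick capacity-planning aid, the per-VM figures in Table 10 can be tallied to show the aggregate resources the demo environment asks for. This is an illustrative sketch only; the VM names and numbers are copied from the table, and note that csp-space-vm is needed only when you deploy VNFs that use Junos Space, so the totals are an upper bound.

```python
# Aggregate the demo-environment VM requirements from Table 10.
# Each entry is (vCPUs, RAM in GB, hard disk storage in GB).
DEMO_VMS = {
    "csp-installer-vm": (4, 32, 300),
    "csp-central-infravm": (4, 32, 200),
    "csp-central-msvm": (4, 32, 200),
    "csp-regional-infravm": (4, 32, 200),
    "csp-regional-msvm": (4, 32, 200),
    "csp-regional-fmpmlb": (4, 32, 300),
    "csp-space-vm": (4, 32, 200),          # only if VNFs use Junos Space
    "csp-contrail-analytics-vm": (4, 32, 200),
}

def totals(vms: dict) -> tuple:
    """Sum vCPUs, RAM (GB), and disk (GB) across all VMs."""
    return tuple(sum(spec[i] for spec in vms.values()) for i in range(3))
```

Running `totals(DEMO_VMS)` sums to 32 vCPUs, 256 GB RAM, and 1800 GB of disk across the eight VMs.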

    Table 11 shows complete details about the VMs for a trial HA environment.

    Table 11: VMs for a Trial HA Environment

    Name of VM or Microservice Collection | Components That Installer Places in VM | Resources Required | Ports to Open
    csp-installer-vm | | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-central-infravm1 | Third-party applications used as infrastructure services | 6 vCPUs, 48 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-central-infravm2 | Third-party applications used as infrastructure services | 6 vCPUs, 48 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-central-infravm3 | Third-party applications used as infrastructure services | 6 vCPUs, 48 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-central-lbvm1 | Load-balancing applications | 4 vCPUs, 16 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-central-lbvm2 | Load-balancing applications | 4 vCPUs, 16 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-central-lbvm3 | Load-balancing applications | 4 vCPUs, 16 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-central-msvm1 | All microservices, including GUI applications | 6 vCPUs, 48 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-central-msvm2 | All microservices, including GUI applications | 6 vCPUs, 48 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-regional-infravm1 | Third-party applications used as infrastructure services | 6 vCPUs, 48 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-regional-infravm2 | Third-party applications used as infrastructure services | 6 vCPUs, 48 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-regional-infravm3 | Third-party applications used as infrastructure services | 6 vCPUs, 48 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-regional-msvm1 | All microservices, including GUI applications | 6 vCPUs, 48 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-regional-msvm2 | All microservices, including GUI applications | 6 vCPUs, 48 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-regional-lbvm1 | Load-balancing applications | 4 vCPUs, 16 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-regional-lbvm2 | Load-balancing applications | 4 vCPUs, 16 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-regional-lbvm3 | Load-balancing applications | 4 vCPUs, 16 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-space-vm | Junos Space Virtual Appliance and database; required only if you deploy VNFs that use this EMS | 6 vCPUs, 16 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-contrail-analytics-vm1 | Contrail Analytics for a distributed deployment (for a centralized deployment, you specify use of Contrail Analytics in the Contrail Cloud Platform) | 6 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-contrail-analytics-vm2 | Contrail Analytics for a distributed deployment (for a centralized deployment, you specify use of Contrail Analytics in the Contrail Cloud Platform) | 6 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-contrail-analytics-vm3 | Contrail Analytics for a distributed deployment (for a centralized deployment, you specify use of Contrail Analytics in the Contrail Cloud Platform) | 6 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-regional-fmpmlb1 | Load balancer for device to FMPM microservice connectivity | 4 vCPUs, 16 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-regional-fmpmlb2 | Load balancer for device to FMPM microservice connectivity | 4 vCPUs, 16 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-regional-fmpmlb3 | Load balancer for device to FMPM microservice connectivity | 4 vCPUs, 16 GB RAM, 300 GB hard disk storage | See Table 14.

    Table 12 shows complete details about VMs and microservice collections required for a production environment without HA.

    Table 12: VMs for a Production Environment Without HA

    Name of VM or Microservice Collection | Components That Installer Places in VM | Resources Required | Ports to Open
    csp-installer-vm | | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-central-infravm | Third-party applications used as infrastructure services | 16 vCPUs, 64 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-central-msvm | All microservices, including GUI applications | 16 vCPUs, 64 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-regional-infravm | Third-party applications used as infrastructure services | 16 vCPUs, 64 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-regional-msvm | All microservices, including GUI applications | 16 vCPUs, 64 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-regional-fmpmlb | Load balancer for device to Fault Management Performance Management (FMPM) microservice connectivity | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-space-vm | Junos Space Virtual Appliance and database; required only if you deploy VNFs that use this EMS | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-contrail-analytics-vm | Contrail Analytics for a distributed deployment (for a centralized deployment, you specify use of Contrail Analytics in the Contrail Cloud Platform) | 8 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-central-elkvm | Logging applications | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-regional-elkvm | Logging applications | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.

    Table 13 shows complete details about VMs and microservice collections required for a production environment with HA.

    Table 13: VMs for a Production Environment with HA

    Name of VM or Microservice Collection | Components That Installer Places in VM | Resources Required | Ports to Open
    csp-installer-vm | | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-central-infravm1 | Third-party applications used as infrastructure services | 16 vCPUs, 64 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-central-infravm2 | Third-party applications used as infrastructure services | 16 vCPUs, 64 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-central-infravm3 | Third-party applications used as infrastructure services | 16 vCPUs, 64 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-central-lbvm1 | Load-balancing applications | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-central-lbvm2 | Load-balancing applications | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-central-lbvm3 | Load-balancing applications | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-central-msvm1 | All microservices, including GUI applications | 16 vCPUs, 64 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-central-msvm2 | All microservices, including GUI applications | 16 vCPUs, 64 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-central-msvm3 | All microservices, including GUI applications | 16 vCPUs, 64 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-regional-infravm1 | Third-party applications used as infrastructure services | 16 vCPUs, 64 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-regional-infravm2 | Third-party applications used as infrastructure services | 16 vCPUs, 64 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-regional-infravm3 | Third-party applications used as infrastructure services | 16 vCPUs, 64 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-regional-msvm1 | All microservices, including GUI applications | 16 vCPUs, 64 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-regional-msvm2 | All microservices, including GUI applications | 16 vCPUs, 64 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-regional-msvm3 | All microservices, including GUI applications | 16 vCPUs, 64 GB RAM, 500 GB hard disk storage | See Table 14.
    csp-regional-lbvm1 | Load-balancing applications | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-regional-lbvm2 | Load-balancing applications | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-regional-lbvm3 | Load-balancing applications | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-space-vm | Junos Space Virtual Appliance and database; required only if you deploy VNFs that use this EMS | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-contrail-analytics-vm1 | Contrail Analytics for a distributed deployment (for a centralized deployment, you specify use of Contrail Analytics in the Contrail Cloud Platform) | 8 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-contrail-analytics-vm2 | Contrail Analytics for a distributed deployment (for a centralized deployment, you specify use of Contrail Analytics in the Contrail Cloud Platform) | 8 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-contrail-analytics-vm3 | Contrail Analytics for a distributed deployment (for a centralized deployment, you specify use of Contrail Analytics in the Contrail Cloud Platform) | 8 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-central-elkvm1 | Logging applications | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-central-elkvm2 | Logging applications | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-central-elkvm3 | Logging applications | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-regional-elkvm1 | Logging applications | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-regional-elkvm2 | Logging applications | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-regional-elkvm3 | Logging applications | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-regional-fmpmlb1 | Load balancer for device to FMPM microservice connectivity | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-regional-fmpmlb2 | Load balancer for device to FMPM microservice connectivity | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.
    csp-regional-fmpmlb3 | Load balancer for device to FMPM microservice connectivity | 4 vCPUs, 32 GB RAM, 300 GB hard disk storage | See Table 14.

    Table 14 shows the ports that must be open on all VMs in the Cloud CPE Solution to enable the following types of CSO communications:

    • External—CSO user interface (UI) and CPE connectivity
    • Internal—Between CSO components

    The provisioning tool opens these ports on each VM; however, if you provision the VMs manually, you must manually open the ports on each VM.
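If you provision the VMs manually, a simple reachability probe can confirm that a VM accepts connections on the required ports before you proceed. The sketch below is illustrative only: it performs a plain TCP-connect check against a small sample of the ports in Table 14 (the full list, and any UDP ports such as Logstash UDP, would need additional handling).

```python
# Sketch: probe a VM for open TCP ports before a manual installation.
# SAMPLE_PORTS is a subset of Table 14, chosen for illustration.
import socket

SAMPLE_PORTS = [22, 443, 5672, 9042]  # SSH, HTTPS, RabbitMQ client, Cassandra

def is_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_vm(host: str, ports=SAMPLE_PORTS) -> dict:
    """Map each port number to its reachability from this machine."""
    return {port: is_port_open(host, port) for port in ports}
```

A result of `False` for a port means either that the port is closed on the VM or that an intermediate firewall blocks it; both need to be resolved before the microservices can communicate.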

    Table 14: Ports to Open on VMs in the Cloud CPE Solution

    Port Number | CSO Communication Type | Port Function
    22 | External and internal | SSH
    80 | Internal | HAProxy
    82 | External | Customer Portal
    83 | External | Network Service Designer
    443 | External and internal | HTTPS, including Administration Portal
    1414 | Internal | Cassandra Java Virtual Machine (JVM)
    1936 | External | HAProxy status page
    1947 | External | Icinga service
    2181 | Internal | ZooKeeper client
    2379 | Internal | etcd client communication
    2380 | Internal | etcd peer communication
    2888 | Internal | ZooKeeper follower
    3000 | External | Grafana
    3306 | Internal | MySQL
    3888 | Internal | ZooKeeper leader
    4001 | Internal | SkyDNS etcd discovery
    4505, 4506 | Internal | Salt communications
    5000 | External | Keystone public
    5044 | Internal | Beats
    5543 | Internal | Logstash UDP
    5601 | External | Kibana UI
    5665 | Internal | Icinga API
    5671 | Internal | RabbitMQ SSL listener
    5672 | Internal | RabbitMQ client
    6000 | Internal | Swift Object Server
    6001 | Internal | Swift Container Server
    6002 | Internal | Swift Account Server
    6379 | Internal | Redis
    6543 | Internal | Virtualized Network Function manager (VNFM)
    7804 | External | Device connectivity
    8006 | Internal | Network Service Orchestrator
    8016 | Internal | Notification engine
    8080 | Internal | cAdvisor
    8082 | Internal | Device Management Service (DMS) central
    8083 | Internal | Activation Service (AS) central
    8085 | Internal | DMS schema
    8086 | Internal | Contrail Analytics
    8090, 8091 | Internal | Generic container
    9042 | Internal | Cassandra native transport
    9090 | Internal | Swift Proxy Server
    9160 | Internal | Cassandra
    9200 | Internal | Elasticsearch
    10248 | Internal | kubelet healthz
    15100 | Internal | Logstash TCP
    15672 | Internal | RabbitMQ management
    30000–32767 | Internal | Kubernetes service node range
    35357 | Internal | Keystone private

    Accessing GUIs

    Table 15 shows the URLs and login credentials for the GUIs for a non-redundant Contrail Service Orchestration installation.

    Table 15: Access Details for the GUIs

    Administration Portal

        URL: http://central-IP-Address, where central-IP-Address is the IP address of the VM that hosts the microservices for the central POP. For example: http://192.0.2.1

        Login credentials: Specify the OpenStack Keystone username and password. The default username is cspadmin and the default password is passw0rd.

    Customer Portal

        URL: http://central-IP-Address:82, where central-IP-Address is the IP address of the VM that hosts the microservices for the central POP. For example: http://192.0.2.1:82

        Login credentials: Specify the credentials that you configure when you create the customer, either in Administration Portal or with API calls.

    Kibana

        URL: http://infra-vm-IP-Address:5601, where infra-vm-IP-Address is the IP address of the VM that hosts the infrastructure services for a central or regional POP. For example: http://192.0.2.2:5601

        Login credentials: Login credentials are not needed.

    Designer Tools—Log in to Network Service Designer and click the menu in the top left of the page to access the other designer tools.

        URL: http://central-IP-Address:83, where central-IP-Address is the IP address of the VM that hosts the microservices for the central POP. For example: http://192.0.2.1:83

        Login credentials: Specify the OpenStack Keystone username and password. The default username is cspadmin and the default password is passw0rd.

    Service and Infrastructure Monitor

        URL: http://central-IP-Address:1947/icingaweb2, where central-IP-Address is the IP address of the VM that hosts the microservices for the central POP. For example: http://192.0.2.1:1947/icingaweb2

        Login credentials: The default username is icinga and the default password is csoJuniper.
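    You can quickly verify that each GUI endpoint in Table 15 answers HTTP requests before attempting to log in. The sketch below reuses the documentation's example addresses (192.0.2.x), which are assumptions; substitute your own central and infrastructure VM IP addresses.

    ```python
    # Check that each CSO GUI endpoint responds to HTTP. Any HTTP status,
    # including an error such as 401 on a login page, proves that the GUI's
    # web server is up. The example URLs are assumptions from Table 15.
    from urllib.error import HTTPError
    from urllib.request import urlopen

    GUI_URLS = {
        "Administration Portal": "http://192.0.2.1",
        "Customer Portal": "http://192.0.2.1:82",
        "Kibana": "http://192.0.2.2:5601",
        "Designer Tools": "http://192.0.2.1:83",
        "Service and Infrastructure Monitor": "http://192.0.2.1:1947/icingaweb2",
    }

    def gui_reachable(url: str, timeout: float = 3.0) -> bool:
        """Return True if the URL answers with any HTTP status code."""
        try:
            with urlopen(url, timeout=timeout):
                return True
        except HTTPError:
            # An error status still proves the web server is reachable.
            return True
        except OSError:
            # Covers URLError, connection refused, and timeouts.
            return False
    ```

    Calling `gui_reachable` for each entry in `GUI_URLS` distinguishes a GUI that is down or blocked by a firewall from one that is up but rejecting your credentials.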

    VNFs Supported

    The Cloud CPE solution supports the Juniper Networks and third-party VNFs listed in Table 16.

    Table 16: VNFs Supported by the Cloud CPE Solution

    Juniper Networks vSRX

        Network functions supported: Network Address Translation (NAT), demonstration version of Deep Packet Inspection (DPI), firewall, Unified Threat Management (UTM)

        Deployment model support: Centralized deployment; distributed deployment supports NAT, firewall, and UTM

        Element management system support: EMS microservice

    LxCIPtable (a free, third-party VNF based on Linux iptables)

        Network functions supported: NAT, firewall

        Deployment model support: Centralized deployment

        Element management system support: EMS microservice

    Cisco Cloud Services Router 1000V Series (CSR-1000V)

        Network functions supported: Firewall

        Deployment model support: Centralized deployment

        Element management system support: Junos Space Network Management Platform

    Riverbed Steelhead

        Network functions supported: WAN optimization

        Deployment model support: Distributed deployment, NFX250 platform only

        Element management system support: EMS microservice

    Silver Peak VX

        Network functions supported: WAN optimization

        Deployment model support: Distributed deployment, NFX250 platform only

        Element management system support: EMS microservice

    Licensing

    You must have licenses to download and use the Juniper Networks Cloud CPE Solution. When you order licenses, you receive the information that you need to download and use the Cloud CPE solution. If you did not order the licenses, contact your account team or Juniper Networks Customer Care for assistance.

    The Cloud CPE solution licensing model depends on whether you use a centralized or distributed deployment:

    • For a centralized deployment, you need licenses for Network Service Orchestrator and for Contrail Cloud Platform. You can either purchase both types of licenses in one Cloud CPE MANO package or you can purchase each type of license individually.

      You also need licenses for:

      • Junos OS software for the MX Series router, EX Series switch, and QFX Series switch in the Contrail Cloud Platform.
      • VNFs that you deploy.
      • (Optional) Junos Space Network Management Platform, if you deploy VNFs that require this EMS.
    • For a distributed deployment, you need licenses for Network Service Orchestrator and for Network Service Controller.

      You also need licenses for the following items, depending on which ones you use in your deployment:

      • The vSRX application that provides the security gateway for the NFX250 device or the vSRX implementation used as a CPE device.
      • VNFs that you deploy.
      • Junos OS software for the MX Series router, including licenses for DHCP subscribers.
      • Junos OS software for the SRX Services Gateways.
    • For a combined centralized and distributed deployment, you need licenses for components for both types of deployment.

    Modified: 2017-07-25