New and Changed Features

The features and enhancements listed in this section are new or changed as of Contrail Release 4.1. A brief description of each new feature is included.

New and Changed Features in Contrail Release 4.1.5

There are no new features in Contrail Release 4.1.5.

New and Changed Features in Contrail Release 4.1.4

There are no new features in Contrail Release 4.1.4.

New and Changed Features in Contrail Release 4.1.3

There are no new features in Contrail Release 4.1.3.

New and Changed Features in Contrail Release 4.1.2

The feature listed in this section is new as of Contrail Release 4.1.2.

Support for SmartNIC from Netronome

Contrail Release 4.1.2 supports Netronome SmartNIC. You can use Juju to deploy Contrail Release 4.1.2 and later with Netronome SmartNICs. The Netronome SmartNIC improves Contrail SDN performance by saving host resources and providing a stable, high-performance infrastructure.

The Netronome SmartNIC provides server-side networking features such as overlay networking based on MPLS over UDP/GRE and VXLAN. It supports DPDK, SR-IOV, and Express Virtio (XVIO) for data plane acceleration.

New and Changed Features in Contrail Release 4.1.1

The feature listed in this section is new as of Contrail Release 4.1.1.

Support for Flat Provider Network on SR-IOV Virtual Functions

Contrail Release 4.1.1 supports configuration of VLAN ID 0 on single-root I/O virtualization (SR-IOV) virtual functions to allow multiple VLAN traffic to a virtual machine (VM) running over a single SR-IOV interface.

Support for SR-IOV, DPDK, and vRouter on RHEL

Contrail Release 4.1.1 supports SR-IOV, the Data Plane Development Kit (DPDK), and the Contrail vRouter kernel module on Red Hat Enterprise Linux (RHEL) operating systems.

New and Changed Features in Contrail Release 4.1

The features listed in this section are new as of Contrail Release 4.1.

Using Huge Pages to Facilitate vRouter Hash Table Handling

To facilitate vRouter handling of flow and bridge tables at bootup, Contrail Release 4.1 requires the user to enable huge pages (1 GB pages in Linux), so that sufficient contiguous memory is available to the vRouter module. Huge page allocation and usage for the vRouter is in kernel space. Enable huge pages at installation to use this feature.

Simple Underlay Connectivity without Gateway

For simple enterprise use cases and public cloud environments, it is possible to directly route packets using the IP fabric network without using an SDN gateway.

The following features can be enabled when using this method:

  • Network policy support for IP fabric

  • Security groups for VMs and containers on IP fabric

  • Security groups for vhost0 interface, to protect compute node or bare metal server applications

  • Support for service chaining, if policy dictates that traffic goes through a service chain

See Simple Underlay Connectivity without Gateway.

Contrail Support for SR-IOV on RHEL

Starting in Release 4.1, Contrail supports single root I/O virtualization (SR-IOV) on Red Hat Enterprise Linux (RHEL) operating systems. Contrail Release 3.0 through Release 4.0 supported SR-IOV on Ubuntu systems only.

For more information, see Configuring Single Root I/O Virtualization (SR-IOV).

Bidirectional Forwarding Detection Health Check over Virtual Machine Interfaces

Contrail Release 4.1 supports BFD-based health checks for virtual machine interfaces (VMIs).

Health checks for VMIs are already supported in earlier releases as poll-based checks using ping and curl commands. When enabled, these health checks run periodically, once every few seconds. Consequently, failure detection times can be quite large, always on the order of seconds.

Health checks based on the BFD protocol can provide failure detection and recovery in sub-second intervals, because applications are notified immediately upon BFD session state changes.

See Service Instance Health Checks.

Bidirectional Forwarding Detection Health Check for BGPaaS

Contrail Release 4.1 adds support for BFD-based health check for BGP as a Service (BGPaaS) sessions.

The BFD-based health check over VMIs, also introduced in Contrail Release 4.1, cannot be directly used for a BGPaaS session, because the session shares a tenant destination address over a set of VMIs, with only one VMI active at any given time.

When configured, any time a BFD-for-BGP session is detected as down by the health checker, corresponding logs and alarms are generated.

To enable this health check, configure the ServiceHealthCheckType property and associate it with a bgp-as-a-service configuration object. This can also be accomplished in the Contrail WebUI.

See Service Instance Health Checks.

Health Check of Transparent Service Chain

Contrail Release 4.1 enhances service chain redundancy by implementing an end-to-end health check for the transparent service chain. The service health check monitors the status of the service chain and, if there is a failure, the control node no longer considers the service chain a valid next hop, triggering traffic failover.

A segment-based health check verifies the health of a single instance in a transparent service chain. The user creates a service-health-check object of type segment-based and attaches it to either the left or the right interface of the service instance. The health-check packet is injected on the interface to which the object is attached; when the packet emerges from the other interface, a reply packet is injected on that interface. If health-check requests continue to fail after 30 seconds of retries, the service instance is considered unhealthy and the service VLAN routes of the left and right interfaces are removed. When the agent again receives health-check replies successfully, it adds the retracted routes back on both interfaces, which triggers the control node to start reoriginating routes to other service instances on that service chain.

See Service Instance Health Checks.

More Efficient Flow Queries

Flow queries are now analyzed on a 7-tuple basis, making queries more efficient by focusing on the elements most important for analysis and de-emphasizing the less important ones. More efficient queries reduce load and allow security policy to be applied.

An enhanced security framework is implemented to manage connectivity between workloads, or VMIs. Each VMI is tagged with the attributes of Deployment, App, Tier, and Site, and the user specifies security policies for VMIs using the values of these tags.

The existing FlowLogData is replaced by SessionEndpointData, and a SessionAggregate map provides statistics about the flow sessions and the security tags. Session data can belong to either Sampled or Logged Flows. SessionAggregates are sent to configurable destinations, including collector, local log, and syslog.
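As a rough sketch of tag-based session aggregation (the field names and the specific 7-tuple below are illustrative assumptions, not Contrail's actual schema):

```python
from collections import defaultdict

# Hypothetical flow records; field names are illustrative only.
flows = [
    {"deployment": "prod", "app": "web", "tier": "frontend", "site": "dc1",
     "vn": "vn-blue", "remote_vn": "vn-red", "proto": 6, "bytes": 1200},
    {"deployment": "prod", "app": "web", "tier": "frontend", "site": "dc1",
     "vn": "vn-blue", "remote_vn": "vn-red", "proto": 6, "bytes": 800},
    {"deployment": "prod", "app": "db", "tier": "backend", "site": "dc1",
     "vn": "vn-red", "remote_vn": "vn-blue", "proto": 6, "bytes": 500},
]

def aggregate(records):
    """Aggregate flows into sessions keyed by a 7-tuple of security tags
    and endpoints, rather than the classic 5-tuple, so that queries can
    focus on policy-relevant dimensions."""
    sessions = defaultdict(lambda: {"flows": 0, "bytes": 0})
    for r in records:
        key = (r["deployment"], r["app"], r["tier"], r["site"],
               r["vn"], r["remote_vn"], r["proto"])
        sessions[key]["flows"] += 1
        sessions[key]["bytes"] += r["bytes"]
    return dict(sessions)

agg = aggregate(flows)
```

Here the two web-frontend flows collapse into one session aggregate, while the db flow forms a second one.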

RBAC for Analytics API and WebUI—Beta

Role-based access control (RBAC) for the analytics API provides the ability to access UVE and query information based on the user's permissions for the UVE or queried object. Previously, the analytics API supported authenticated access only for the cloud-admin role. However, to display network monitoring for tenant pages in the UI, the analytics API now supports RBAC (similar to that of the config API) so that tenants can view information about the networks for which they have read permissions. Tenants cannot view system logs and flow logs, which are viewable only by the cloud-admin role. A non-admin user can see only non-global UVEs.

In the /etc/contrail/contrail-analytics-api.conf file, the aaa_mode parameter in the DEFAULTS section now supports rbac as one of its values.
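For example, a minimal sketch of the relevant stanza:

```ini
# /etc/contrail/contrail-analytics-api.conf
[DEFAULTS]
aaa_mode = rbac
```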

See Role-Based Access Control for Analytics.

Security Policy Enhancements

As the Contrail environment has grown and become more complex, it has become harder to achieve the desired security results with the existing network policy and security group constructs. Contrail network policies have been tied to routing, making it difficult to express security policies for environments that require, for example, cross-sectioning between categories, or a multi-tier application supporting development and production environment workloads with no cross-environment traffic.

Contrail 4.1 introduces new firewall security policy objects, including the following enhancements:

  • Routing and policy decoupling—new firewall policy objects decouple policy from routing.

  • Multidimensional segmentation—segment traffic and add security features based on multiple dimensions of entities, such as Application, Tier, Deployment, Site, and UserGroup.

  • Policy portability—security policies can be ported to different environments, such as from development to production, from PCI-compliant to production, to a bare metal environment, or to a container environment.

See Security Policy Enhancements.

Allocation of Service Instance IP

In service chaining version 2, the contrail-svc-monitor allocates a service instance IP address for scale-up from the same subnet currently in use. If scaling is not required, that IP is wasted from a limited pool of addresses.

Starting with Contrail 4.1, any new service instance allocates IPs from a separate reserved subnet, using a fixed value for the IP (::ffff/104 in the IPv6 case).

Existing service instances retain use of the previous method of allocating IPs; new instances make use of the new allocation method.

Long-Lived Graceful Restart for XMPP

Contrail Release 4.1 introduces support for long-lived graceful restart (LLGR) with XMPP helper mode. Previous versions of Contrail provided only the BGP helper mode. Graceful restart and long-lived graceful restart can be enabled using the Contrail web UI or by using the provision_control script.

In the web UI, you can control the helper modes at Configure > Infrastructure Global Config > Edit BGP Options (see Figure 1).

Figure 1: Edit BGP Options Page

The helper modes can also be enabled via schema, and can be disabled selectively in a Contrail control node for BGP or XMPP sessions by configuring gr_helper_disable in the /etc/contrail/contrail-control.conf configuration file.
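A minimal sketch of that setting (assuming it lives in the DEFAULT section; verify the section name against your build):

```ini
# /etc/contrail/contrail-control.conf
[DEFAULT]
# Disable the graceful restart helper modes on this control node
gr_helper_disable = 1
```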

For more information, see Configuring Graceful Restart and Long-lived Graceful Restart.

Encryption of Proxied Interactions Between vRouter and Nova API

OpenStack allows VMs to access metadata by sending an HTTP request to a link-local address. The request is proxied to the Nova API with added HTTP header fields that Nova uses to identify the source instance and respond with the appropriate metadata. In Contrail, the vRouter is the proxy: it traps the metadata requests, adds the header fields, and sends the requests to the Nova API server. Previously, these requests were not encrypted, posing a security risk.

In Contrail 4.1, SSL is used to encrypt the HTTP interactions between the Contrail vRouter and Nova API.

To enable this encryption on the Nova side, add the following configuration in the default section of the nova.conf file.
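A sketch of what this configuration typically looks like, using standard Nova SSL options (the option names and certificate paths are assumptions; verify them against your Nova release):

```ini
# nova.conf -- serve the metadata API over SSL
[DEFAULT]
enabled_ssl_apis = metadata
ssl_cert_file = /etc/nova/ssl/certs/server.pem
ssl_key_file = /etc/nova/ssl/private/server-key.pem
```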

To enable this encryption on the Contrail vrouter agent, add the following configuration in the METADATA section of contrail-vrouter-agent.conf.
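A sketch of the corresponding agent-side settings (the certificate paths are illustrative; verify the parameter names against your Contrail release):

```ini
# contrail-vrouter-agent.conf -- use SSL when proxying metadata to Nova
[METADATA]
metadata_use_ssl = true
metadata_client_cert = /etc/contrail/ssl/certs/client.pem
metadata_client_key = /etc/contrail/ssl/private/client-key.pem
metadata_ca_cert = /etc/contrail/ssl/certs/ca-cert.pem
```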

Contrail provisioning is updated to populate the configuration files and to copy the certificate files to the appropriate paths.

Contrail EVPN-VXLAN Support Using QFX Series Switches

Contrail Release 4.1 enables you to use Ethernet VPN (EVPN) with Virtual Extensible LAN protocol (VXLAN) encapsulation when you have an environment that includes both virtual and bare metal devices. MX Series routers use EVPN-VXLAN encapsulation to provide both Layer 2 and Layer 3 connectivity for end stations within a Contrail virtual network (VN).

Two types of encapsulation methods are used in virtual networks:

  • MPLS-over-GRE (generic routing encapsulation) is used for Layer 3 overlay virtual network routing between Contrail and MX Series routers.

  • EVPN-VXLAN is used for Layer 2 overlay virtual network connectivity between virtual machines on Contrail, bare-metal servers attached to QFX Series switches, and their respective Layer 3 gateway configured on the QFX Series switch. Subsequently, inter-VXLAN routing between virtual machines and bare-metal servers, and between bare-metal servers on different VXLAN network identifiers (VNIs), is performed on the QFX Series switch.

For more information, see EVPN-VXLAN Support for Bare Metal Devices and QFX Device Configuration.