Reference Architecture Variations
This section provides a walkthrough of variations to the Contrail Cloud reference architecture.
Contrail Cloud Reference Architecture Variations
Contrail Cloud can be deployed with simpler server and network configurations in environments where performance and resilience requirements are more relaxed.
This section provides information about these architectural variations.
- Supported Reference Architecture Variations Summary
- Single Bond Interface in Variation Architectures
- Layer 2 Networks Between Racks
- Proof of Concept Environments with High Availability
- Single Controller Node
- Underlay Routing Between Leaf Switches
Supported Reference Architecture Variations Summary
Table 1 lists supported variations for this reference architecture:
| Architectural Variation | Comment |
| --- | --- |
| Controller hosts in the same rack | The controller networks do not need to be stretched between racks, but there is an increased risk of outage. No change to the configuration files is necessary for this variation. |
| Separate OpenStack and Contrail controller hosts | Use this variation in environments where you want to reduce the impact of a node failure. |
| Separate Controller and Analytics/AppFormix hosts | Use this variation to increase performance. |
| Use of NICs on NUMA 0 and NUMA 1 | Intel architectures have become more flexible for cross-NUMA traffic from a DPDK core to a NIC, but these configurations have lower throughput than the recommended placement of both NICs and DPDK cores on NUMA 0. |
| Single bond interface on servers | Use in cases where there are no separate storage nodes, or where network traffic is light and there is a low risk of contention causing packet drops. Note that DPDK for the Tenant network cannot share an interface, so DPDK mode cannot be used in this configuration. |
| Single subnet across racks for Tenant, Storage, Storage Management, and Internal API traffic | Use in smaller environments where per-rack addressing is not a requirement. |
| Use of the same network for External, Management, and Intranet traffic | Network sharing can be used in non-production networks such as labs and POCs, but this variation is not recommended in production environments. |
Single Bond Interface in Variation Architectures
When servers with a single bond interface are used, each network in the overcloud-nics.yml file is specified on the same bond. The configuration is made in the controller_network_config hierarchy and in each of the compute[leaf][h/w profile] and storage[leaf][h/w profile] hierarchies.
Leaf switch ports must be configured as follows for connections to the bond interfaces of each node type:
| Connected node | VLANs |
| --- | --- |
| Controller | Tenant, Storage, Internal API, External |
| Compute | Tenant, Storage, Internal API |
| Storage | Storage, Storage Mgmt |
The following diagrams illustrate connectivity for this architecture.


The following is a full network configuration snippet for a compute node in the overcloud-nics.yml file.
#[role][leaf number][hardware profile tag]_network_config
# e.g. ComputeDpdk0Hw1_network_config
# Provisioning interface definition
- type: interface
  name: nic1 # br-eno1
  dns_servers:
    get_param: DnsServers
  use_dhcp: false
  mtu:
    get_param: ControlPlaneNetworkMtu
  addresses:
  - ip_netmask:
      list_join:
      - '/'
      - - get_param: ControlPlaneIp
        - get_param: ControlPlaneSubnetCidr
  routes:
  - ip_netmask: 169.254.169.254/32
    next_hop:
      get_param: EC2MetadataIp
  - default: true # Default route via provisioning, e.g. to access Satellite
    next_hop:
      get_param: ControlPlaneDefaultRoute
# Management interface definition
- type: interface
  name: nic2 # br-eno2
  mtu:
    get_param: ManagementNetworkMtu
  addresses:
  - ip_netmask:
      get_param: ManagementIpSubnet
  routes:
  - ip_netmask: 10.0.0.0/8 # Address pool of the corporate network that has access to the servers via the management network
    next_hop: 192.168.0.1
# br-bond0 interface definition (for all networks except provisioning and management, VLAN tagged)
- type: linux_bond
  name: bond0 # br-bond0
  use_dhcp: false
  bonding_options: "mode=802.3ad xmit_hash_policy=layer3+4 lacp_rate=fast updelay=1000 miimon=100"
  members:
  - type: interface
    name: nic3
    mtu:
      get_param: Tenant0NetworkMtu
    primary: true
  - type: interface
    name: nic4
    mtu:
      get_param: Tenant0NetworkMtu
# Tenant network VLAN on the bond, consumed by the vRouter as vhost0
- type: vlan
  vlan_id:
    get_param: Tenant0NetworkVlanID
  device: bond0
- type: contrail_vrouter
  name: vhost0
  use_dhcp: false
  members:
  - type: interface
    name:
      str_replace:
        template: vlanVLANID
        params:
          VLANID: {get_param: Tenant0NetworkVlanID}
    use_dhcp: false
  addresses:
  - ip_netmask:
      get_param: Tenant0IpSubnet
  mtu:
    get_param: Tenant0NetworkMtu
  routes:
  - ip_netmask:
      get_param: TenantSupernet
    next_hop:
      get_param: Tenant0InterfaceDefaultRoute
# Storage network VLAN on the bond
- type: vlan
  device: bond0
  vlan_id:
    get_param: Storage0NetworkVlanID
  mtu:
    get_param: Storage0NetworkMtu
  addresses:
  - ip_netmask:
      get_param: Storage0IpSubnet
  routes:
  - ip_netmask:
      get_param: StorageSupernet
    next_hop:
      get_param: Storage0InterfaceDefaultRoute
# Internal API network VLAN on the bond
- type: vlan
  device: bond0
  vlan_id:
    get_param: InternalApi0NetworkVlanID
  mtu:
    get_param: InternalApi0NetworkMtu
  addresses:
  - ip_netmask:
      get_param: InternalApi0IpSubnet
  routes:
  - ip_netmask:
      get_param: InternalApiSupernet
    next_hop:
      get_param: InternalApi0InterfaceDefaultRoute
Layer 2 Networks Between Racks
The same subnets can be used across racks in small Contrail Cloud deployments.
Figure 3 illustrates networking for control hosts using layer 2 to stretch across racks in a Contrail Cloud deployment. Figure 4 illustrates networking for compute and storage nodes using layer 2 to stretch across racks.


This Layer 2 addressing scheme is not recommended for environments with a large number of devices. Layer 2 stretch can be achieved using trunking between switches, or VXLAN if additional scalability is needed. Use of a separate management switch is optional.
For this type of deployment, a single network of each type is defined and no supernet is specified.
The following configuration snippet from the site.yml file illustrates this deployment.
network:
  external:
    cidr: "192.168.176.0/21"
    vlan: 305
    vip: "192.168.183.200"
    pool:
      start: "192.168.176.11"
      end: "192.168.183.199"
    mtu: 9100
  internal_api:
    vlan: 304
    cidr: "192.168.168.0/21"
    vip: "192.168.175.200"
    pool:
      start: "192.168.168.11"
      end: "192.168.175.199"
    mtu: 9100
  tenant:
    vlan: 301
    cidr: "192.168.144.0/21"
    pool:
      start: "192.168.144.11"
      end: "192.168.151.199"
    mtu: 9100
  storage:
    vlan: 303
    cidr: "192.168.160.0/21"
    pool:
      start: "192.168.160.11"
      end: "192.168.167.199"
    mtu: 9100
  storage_management:
    vlan: 302
    cidr: "192.168.152.0/21"
    pool:
      start: "192.168.152.11"
      end: "192.168.159.199"
    mtu: 9100
  management:
    vlan: "0"
    cidr: "192.168.72.0/21"
    default_route: "192.168.79.254"
    pool:
      start: "192.168.72.11"
      end: "192.168.79.199"
    mtu: 9100
Leaf switch ports are configured with VLANs in the same way as described in the previous section.
Proof of Concept Environments with High Availability
For proof-of-concept environments, the following is the minimum Contrail Cloud environment that can be configured with high availability (HA) support:
- Jumphost
- 3 control hosts
- 2 compute nodes, which can be used to validate routing and tunnels
- 3 storage nodes (optional)
Simplified networking can be implemented with the following components:
- IPMI connectivity from the jumphost
- Single network connection from each server to a switch
- Provision network configured as untagged on the interface
- Other networks configured with VLANs on the interface
- VLANs configured in the switch to span between servers
This setup supports testing of most Contrail Networking features.
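As a sketch of this simplified networking, the following overcloud-nics.yml fragment shows a compute node that uses a single NIC: the provision network is untagged on nic1, and the Tenant and Internal API networks ride on VLAN subinterfaces of the same NIC (a Storage VLAN, if storage nodes are used, would be added in the same way). The fragment reuses the structure and parameter names from the bonded example earlier in this section; the interface name and the exact set of networks are illustrative and should be adjusted to match your environment.
# Illustrative single-NIC compute node fragment for a POC deployment
# Provisioning (untagged) on the single physical interface
- type: interface
  name: nic1
  use_dhcp: false
  mtu:
    get_param: ControlPlaneNetworkMtu
  addresses:
  - ip_netmask:
      list_join:
      - '/'
      - - get_param: ControlPlaneIp
        - get_param: ControlPlaneSubnetCidr
  routes:
  - default: true
    next_hop:
      get_param: ControlPlaneDefaultRoute
# Tenant network as a VLAN on the same interface, consumed by vhost0
- type: vlan
  device: nic1
  vlan_id:
    get_param: Tenant0NetworkVlanID
- type: contrail_vrouter
  name: vhost0
  use_dhcp: false
  members:
  - type: interface
    name:
      str_replace:
        template: vlanVLANID
        params:
          VLANID: {get_param: Tenant0NetworkVlanID}
    use_dhcp: false
  addresses:
  - ip_netmask:
      get_param: Tenant0IpSubnet
# Internal API network as a VLAN on the same interface
- type: vlan
  device: nic1
  vlan_id:
    get_param: InternalApi0NetworkVlanID
  addresses:
  - ip_netmask:
      get_param: InternalApi0IpSubnet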
Single Controller Node
Small-scale Contrail Cloud environments—including experimental or controlled lab deployments—can be established with a jumphost, a single control host, and one or more compute nodes.
To configure this type of small-scale environment, include a single entry in the control_host_nodes: hierarchy in the control-host-nodes.yml file.
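The following is a minimal sketch of what that single entry can look like. Only the control_host_nodes: hierarchy and the control-host-nodes.yml file name come from this section; the node label and the per-node keys shown below are hypothetical placeholders and must be replaced with the addressing and credential settings that your Contrail Cloud release expects.
# control-host-nodes.yml (sketch): a single control host only
control_host_nodes:
  node1:                                  # single control host entry; label is illustrative
    # The keys below are placeholders, not the exact Contrail Cloud schema.
    control_ip_netmask: "192.0.2.11/24"   # hypothetical control-plane address
    pm_addr: "192.0.2.111"                # hypothetical IPMI/BMC address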
Underlay Routing Between Leaf Switches
You can configure routing between leaf switches in the underlay fabric network to simplify leaf switch configuration.
Figure 5 illustrates leaf switch routing in the underlay network.

The IRB interfaces for the leaf device subnets are configured but are not placed in VRF instances. Traffic, therefore, is routed using routes in the inet.0 global routing table on each switch. A route to each IRB interface is advertised between the leaf switches using iBGP.
Supported Variations Requiring Additional Approval
The following variations can be supported in production environments, but they must be explicitly approved by Juniper Networks to receive full customer support. Email sre@juniper.net or contact your Juniper Networks representative before deploying these variations to ensure that your Contrail Cloud environment remains in compliance with your support agreement.
Engagement with the Juniper Networks professional services team is typically required to deploy these variations.
Variations that Require Approval Overview
The following variations can be supported in production environments. Email sre@juniper.net or contact your Juniper Networks representative before deploying these variations to ensure that your Contrail Cloud environment remains in compliance with your support agreement.
Table 3 lists these variations.
| Variation | Explanation |
| --- | --- |
| Use of VLANs instead of EVPN VXLAN, including the use of MC-LAG for server connectivity | Use in labs, POCs, and smaller production environments where VLAN configuration on switches is manageable and the limitations of STP are not impactful. |
| Collapsed spine/gateway | Configuring the SDN gateway function in spine switches is possible, provided that the spine supports the required functionality and scale (number of externally connected VRFs). |
| Single leaf switch per rack | For truly cloud-native applications that are resilient to infrastructure failures. |
| Non-IP CLOS connectivity | No management switches; for lab environments only. |
| Single controller node | Use for labs and training for feature testing. Not supported for production environments. |
Contrail Cloud 13 releases do not support all-in-one deployments where a single node supports both controller and compute functions. Storage nodes also need to be separate devices.
The following sections provide information on how the configuration of Contrail Cloud can be modified to support these architectural variations.
