This section lists known limitations with this release.
CEM-8149 BMS LCM with a fabric set with enterprise_style=True is not supported. By default, enterprise_style is set to False. Avoid setting enterprise_style=True if the fabric object will onboard a BMS LCM instance.
CEM-8026 Contrail multicloud credentials are insecurely stored in release 1908. Contrail multicloud currently supports only clouds where the provider is unique for all clouds; only one cloud can exist within the same provider. To obtain the credential files for Azure (accessToken.json, azureProfile.json), run the following command from the desktop terminal:
az login --tenant <enter_tenant_name>
az login --tenant contrailmirrorgmail.onmicrosoft.com
Download the AWS secret key and AWS access key to CSV and save them in two separate files (values only). These files are needed during provisioning of Contrail multicloud.
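For example, the two files can be created as follows; the file names are arbitrary, and the key values shown are the standard AWS documentation placeholders, not real credentials:

```shell
# Save the access key and the secret key as values only, one per file.
# Both key values below are hypothetical placeholders; substitute the
# keys downloaded from the AWS console.
printf '%s\n' 'AKIAIOSFODNN7EXAMPLE' > aws_access_key.txt
printf '%s\n' 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY' > aws_secret_key.txt
```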
CEM-7943 4-byte ASN support cannot be enabled during provisioning. To configure a 4-byte ASN post-provisioning, use the new UI and REST API to enable 4-byte ASN support and then configure the 4-byte ASN number.
CEM-7874 User-defined alarms may not be generated when the third stunnel/Redis service instance is down after the first two instances have been restarted.
CEM-5441 On a freshly provisioned Contrail + Appformix cluster, to enable live data streaming, the web sockets between the Contrail UI and the Appformix server need to be established. In release 1907, this must be triggered once by logging in to the Appformix UI.
CEM-5334 The multicloud gateway on the cloud allows traffic from only vRouter or Controller nodes to reach the on-premises cluster. So, in a deployment where the on-premises OpenStack cluster needs to be extended to the Kubernetes cluster on the cloud, the Kubernetes master must be defined on one of the vRouters on the cloud.
CEM-5284 Cloud compute/vRouter nodes are not listed on the cluster-nodes/compute nodes page; all nodes/computes are listed on the servers page.
CEM-5141 The UI workflow does not work for deleting compute nodes. Instead, update instances.yaml with "ENABLE_DESTROY: True" and an empty "roles:" section, and run the following playbooks.
ansible-playbook -i inventory/ -e orchestrator=openstack --tags nova playbooks/install_openstack.yml
ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/install_contrail.yml
global_configuration:
  ENABLE_DESTROY: True
...
instances:
...
  srvr5:
    provider: bms
    ip: 19x.xxx.x.55
    roles:
...
CEM-5290 While adding an AWS cloud to an already existing public cloud with Azure, the AWS credentials need to be added manually in the Contrail Command container. Perform the following steps to add the AWS credentials manually.
- Log in to the contrail_command container.
docker exec -it contrail_command bash
- Get the public cloud UUID.
contrailcli list cloud
- Use the following command to get the cloud_user_refs for the <public_cloud_uuid> public cloud UUID.
contrailcli show cloud <public_cloud_uuid> | grep -A 4 cloud_user_refs
cloud_user_refs:
- uuid: <cloud_user_ref>
  to: sol4-public-cloud-user-<cloud_user_ref>
  href: ""
- Replace the UUID in the cloud_user.yaml file with the <cloud_user_ref> UUID of your cluster.
cat <<EOF > cloud_user.yaml
resources:
- data:
    uuid: "<cloud_user_ref>"
    aws_credential:
      access_key: XXXXXXX
      secret_key: YYYYYYYYYYYYY
  kind: cloud_user
  operation: UPDATE
EOF
- Use the following command to sync the cloud_user.yaml file.
contrailcli sync cloud_user.yaml
- Verify that the credentials are updated.
contrailcli show cloud_user <cloud_user_ref>
The instance name or the hostname must be in lowercase so that it is consistent across all components.
CEM-5282 When an Azure cloud is extended to an on-premises cluster running on RHEL hosts, contrail-status shows vRouters running on Azure as initializing, even though the services are up. This is due to the Red Hat issue https://access.redhat.com/solutions/2766251.
CEM-5043 A VNI update on a Logical Router (LR) does not update the route table. As a workaround, delete the Logical Router and create a new Logical Router with the new VNI.
CEM-5042 Adding a new subnet to an already provisioned VPC is not supported. If all subnets are added during the initial bringup of the VPC, nodes can be added to the subnets incrementally at any time.
CEM-5041 Provisioning Region or VPC objects on the cloud without any nodes is not supported. Add at least one node while provisioning a Region or VPC.
CEM-5024 Current multicloud provisioning does not enable the on-premises TOR to exchange public cloud subnets with the on-premises controllers. You must add static routes on the controllers to all the public cloud subnets.
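As a sketch, the static-route commands can be generated with a small helper; the gateway address and subnet list below are hypothetical and must be replaced with your multicloud gateway IP and your actual public cloud subnets. Run the printed commands as root on each controller.

```shell
# Print one `ip route add` command per public cloud subnet.
# First argument: next-hop gateway; remaining arguments: subnets.
print_route_cmds() {
  gw=$1; shift
  for subnet in "$@"; do
    echo "ip route add $subnet via $gw"
  done
}
# Hypothetical gateway and subnets for illustration only.
print_route_cmds 10.1.1.1 172.16.10.0/24 172.16.20.0/24
```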
CEM-4943 After deleting and reprovisioning the public cloud infrastructure, the nodes are deleted from the cloud, but the API server and Kubernetes retain stale entries for the deleted objects. To clean up the stale entries, run the following housekeeping scripts:
- Log in to the command container.
- Navigate to the directory that contains the housekeeper.sh script.
- Run the following script.
TF_STATE=/root/contrail-multi-cloud/terraform.tfstate INVENTORY=inventories/inventory.yml TOPOLOGY=/root/contrail-multi-cloud/topology.yml ./housekeeper.sh
If you run the script after provisioning, ensure that TF_STATE is set to the backup file. For example:
TF_STATE=/root/contrail-multi-cloud/terraform.tfstate.backup INVENTORY=inventories/inventory.yml TOPOLOGY=/root/contrail-multi-cloud/topology.yml ./housekeeper.sh
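Which state file to pass can also be decided automatically; the helper below is a hypothetical convenience, not part of the multicloud tooling, and it prefers the backup file when one exists:

```shell
# Echo terraform.tfstate.backup when it exists, otherwise the live
# state file (path matches the examples above).
pick_tfstate() {
  if [ -f "$1.backup" ]; then
    echo "$1.backup"
  else
    echo "$1"
  fi
}
pick_tfstate /root/contrail-multi-cloud/terraform.tfstate
```

The result can then be passed as TF_STATE=$(pick_tfstate /root/contrail-multi-cloud/terraform.tfstate) when invoking housekeeper.sh.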
CEM-4941 The multicloud gateway on the public cloud cannot be shared across different subnets. Each subnet must have its own gateway.
CEM-4865 Provisioning Contrail Controllers on public cloud is not supported. Controllers must be provisioned on-premises.
CEM-4467 On DPDK computes, VM creation sometimes fails with a "Connection is closed" error. The issue is not related to any of the Contrail components; it is related to the systemd-machined service registering VMs. As a workaround, restart the systemd-machined service (systemctl restart systemd-machined).
CEM-4381 Contrail Fabric device manager tasks can fail if one or more Contrail API servers are down. Run contrail-status on the Contrail config nodes to determine whether this situation has occurred.
CEM-4370 After creating a PNF Service Instance, fields such as PNF eBGP ASN*, RP IP Address, PNF Left BGP Peer ASN*, Left Service VLAN*, PNF Right BGP Peer ASN*, and Right Service VLAN* cannot be modified. If you need to modify these values, delete and re-create the Service Instance with the intended values.
CEM-4190 iptables rules are not updated on MC-GW nodes. As a workaround, configure iptables on the on-premises MC-GW nodes with a default ACCEPT policy on the INPUT and FORWARD chains.
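A minimal sketch of this workaround; it prints the iptables commands to run (as root) on each on-premises MC-GW node rather than executing them:

```shell
# Print the commands that set a default ACCEPT policy on the
# INPUT and FORWARD chains.
mcgw_iptables_cmds() {
  for chain in INPUT FORWARD; do
    echo "iptables -P $chain ACCEPT"
  done
}
mcgw_iptables_cmds
```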
CEM-3959 BMS movement across TORs is not supported. To move a BMS across TORs, the whole VPG needs to be moved. That is, if more than one BMS is associated with a VPG and one of them needs to be moved, the whole VPG must be deleted and reconfigured per the new association.
CEM-3913 From release 1908, Contrail services run as a non-root user. However, the agent, DPDK, and nodemgr services still run as root because they need root access to the system.
CEM-3324 You cannot provision a Contrail cluster entirely in public cloud. The Contrail cluster must be on-premises; vRouters can be extended to the public cloud.
JCB-204796 In a Helm-based provisioned cluster, VM launch fails if MariaDB replication is set to >1.
JCB-202874 After deleting a vRouter chart with DPDK, the NICs do not rebind to the host in Helm.
JCB-190956 While creating ironic-provision, the service address in the subnet must point to the OpenStack ironic node IP/kolla internal VIP.
JCB-187320 On a DPDK compute, vif list --rate core-dumps under traffic.
JCB-187287 High Availability provisioning of Kubernetes master is not supported.
JCB-186493 When a snapshot of an active VM fails, shut down the VM before generating the snapshot.
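The workaround can be sketched with the standard OpenStack CLI; the server and snapshot names below are hypothetical, and the commands are printed rather than executed since they need a live OpenStack environment:

```shell
# Print the command sequence: stop the VM, snapshot it, start it again.
snapshot_sequence() {
  vm=$1; snap=$2
  echo "openstack server stop $vm"
  echo "openstack server image create --name $snap $vm"
  echo "openstack server start $vm"
}
# Hypothetical server and snapshot names.
snapshot_sequence my-vm my-vm-snap
```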
JCB-184837 After provisioning Contrail by using a Helm-based provisioned cluster, restart the nova-compute container.
JCB-184776 When the vRouter receives the head fragment of an ICMPv6 packet, the head fragment is immediately enqueued to the assembler. The flow is created as a hold flow and then trapped to the agent. If fragments corresponding to this head fragment are already in the assembler, or if new fragments arrive immediately after the head fragment, the assembler releases them to the flow module. Fragments get enqueued in the hold queue if the agent has not written the flow action by the time the assembler releases the fragments to the flow module. A maximum of three fragments can be enqueued in the hold queue at a time; the remaining fragments are dropped from the assembler to the flow module.
As a workaround, the head fragment is enqueued to the assembler only after the flow action is written by the agent. If the flow is already present in a non-hold state, the fragment is immediately enqueued to the assembler.
JCB-177787 In DPDK vRouter use cases such as SNAT and LBaaS that require netns, jumbo MTU cannot be set. The maximum MTU allowed is 1500.
JCB-177541 When you receive an error message during Kolla provisioning, rerunning the provisioning will not work. For provisioning to succeed, restart provisioning from scratch.
JCB-171466 Metadata SSL works only in HA deployment mode.
JCB-163773 A false alarm for the config service is generated when the config and configdb services are installed on different nodes. Ignore the false alarm.
JCB-162927 SR-IOV with DPDK co-existence deployment is not supported using contrail-helm-deployer.