This section lists known limitations with this release.
Known Behavior in Contrail Networking Release 2008
CEM-18922 On DPDK computes, memory for VMs is mapped to only one NUMA node. VM creation fails after the hugepages on that NUMA node are exhausted if the VM is launched with the hw:mem_page_size='any' flavor property. As a workaround, use the hw:mem_page_size='large' flavor property instead.
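A minimal sketch of applying the workaround with the OpenStack CLI; the flavor name m1.dpdk is an example, not one defined in these notes:

```shell
# Hedged sketch: point DPDK guests at explicit large pages instead of 'any'.
# "m1.dpdk" is an example flavor name; substitute your DPDK flavor.
set_large_pages() {
  # Build the command string so it can be inspected before running it.
  printf 'openstack flavor set %s --property hw:mem_page_size=large' "$1"
}

# On a node with OpenStack credentials loaded, run the printed command:
echo "$(set_large_pages m1.dpdk)"
```

Existing instances must be resized or re-created with the updated flavor for the new page-size property to take effect.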
CEM-18909 After a Contrail cluster is deployed, the contrail-status command shows the XMPP connection as down on all compute nodes.
[heat-admin@overcloud-novacompute-2 ~]$ sudo contrail-status
Pod      Service      Original Name           Original Version      State    Id            Status
vrouter  agent        contrail-vrouter-agent  rhel-queens-2008-109  running  02fec20b79a7  Up 6 hours ago
vrouter  nodemgr      contrail-nodemgr        rhel-queens-2008-109  running  c35d4f141861  Up 7 hours ago
vrouter  provisioner  contrail-provisioner    rhel-queens-2008-109  running  1020e076fc8b  Up 4 minutes ago

vrouter kernel module is PRESENT
== Contrail vrouter ==
nodemgr: active
agent: active (XMPP:control-node:10.0.0.163 connection down Number of connections:6, Expected: 5)
This is a cosmetic issue and does not impact functionality. As a workaround, restart the vRouter agent container on all compute nodes to update the status.
[heat-admin@overcloud-novacompute-2 ~]$ sudo docker restart 02fec20b79a7
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
02fec20b79a79ae91e9c809e6d7f7b3624efec0bcfedb1e3840ebc6ce8b437d3
[heat-admin@overcloud-novacompute-2 ~]$ sudo contrail-status
Pod      Service      Original Name           Original Version      State    Id            Status
vrouter  agent        contrail-vrouter-agent  rhel-queens-2008-109  running  02fec20b79a7  Up 5 seconds ago
vrouter  nodemgr      contrail-nodemgr        rhel-queens-2008-109  running  c35d4f141861  Up 7 hours ago
vrouter  provisioner  contrail-provisioner    rhel-queens-2008-109  running  1020e076fc8b  Up 5 minutes ago

vrouter kernel module is PRESENT
== Contrail vrouter ==
nodemgr: active
agent: active
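A hedged sketch of scripting the workaround per node: the status check matches the output format shown above, and the docker name filter for the agent container is an assumption that may differ per deployment (the container ID also appears in the contrail-status Id column):

```shell
# Return success when contrail-status output shows the stale XMPP-down state.
xmpp_down() {
  printf '%s\n' "$1" | grep -q 'XMPP.*connection down'
}

# On each compute node: restart the vrouter agent container only if needed.
status="$(sudo contrail-status 2>/dev/null)"
if xmpp_down "$status"; then
  # 'name=agent' is an assumed filter; verify the container name on your nodes.
  cid="$(sudo docker ps --filter 'name=agent' --format '{{.ID}}' | head -n 1)"
  [ -n "$cid" ] && sudo docker restart "$cid"
fi
```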
CEM-16118 In high-scale fabric management scenarios with VMI scale above 15,000, certain Web UI interactions can experience high latency. In these high-scale clusters, it is recommended to disable SSL on the Contrail cluster API interfaces by setting the Enable_SSL: False flag at cluster provisioning time.
CEM-15809 Updating VLAN-ID on a VPG in an enterprise style fabric is not supported. As a workaround, delete and recreate the fabric.
CEM-15710 Contrail Insights alarms will not get generated for vRouter flows if SSL is enabled for vRouter introspect.
CEM-15873 Contrail Insights alarm creation for Contrail and OpenStack from Contrail Command does not work. As a workaround, use the Contrail Insights UI to create alarms for Contrail and OpenStack.
CEM-15764 In Octavia Load Balancer, traffic destined to the Floating IP of the load balancer VM does not get directed to the backend VMs. Traffic destined to the actual VM IP of the Load Balancer VM will work fine.
CEM-15874 While importing an SSL-enabled Juju cluster into Contrail Command, the SSL option for the telemetry and config endpoints does not get created. As a workaround, manually change the endpoints to https.
CEM-15599 In Contrail fabric manager deployments, an update of static routes in the InterfaceRouteTable (prefix list) of a RoutingPolicy is not reflected in the routed LR. To trigger the update, edit and save the routing policy that uses this interface route table.
CEM-15567 An Ansible error occurs when multiple virtual networks are attached to multiple logical routers.
CEM-14751 If quota is enabled, accessing the projects overview page (the default page after the user logs in to Horizon) does not work and the user gets logged out. This is due to OpenStack bug https://bugs.launchpad.net/horizon/+bug/1788631. As a workaround, disable quotas in Horizon: in the Horizon local_settings file, set 'enable_quotas': False under the OPENSTACK_NEUTRON_NETWORK group, and restart the Horizon container.
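A sketch of the local_settings change, assuming the stock 'enable_quotas': True entry is present; the file path and Horizon container name vary by deployment and are examples only:

```shell
# Flip enable_quotas to False inside the OPENSTACK_NEUTRON_NETWORK group.
disable_horizon_quotas() {
  sed -i "s/'enable_quotas': *True/'enable_quotas': False/" "$1"
}

# Example (path and container name are assumptions; adjust for your deployment):
#   disable_horizon_quotas /etc/openstack-dashboard/local_settings
#   sudo docker restart horizon
```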
CEM-14264 In release 2003, the Virtual Port Group create workflow will not pre-populate the VLAN-ID with the existing value that was defined with the first VPG for a given virtual network. The field is editable unlike in previous releases. This issue occurs in a fabric that was provisioned with the Fabric-wide VLAN-ID significance checkbox enabled.
CEM-15561, CEM-13976 vRouter offload with Mellanox NIC cards does not work. However, DPDK on Mellanox NICs without offload is supported.
CEM-13767 - Although Contrail fabric manager lets the user use custom image names for fabric devices, for vmhost-based platforms such as QFX10000-60C, while uploading the image to CFM, the image name should be chosen in
CEM-13685 - DPDK vRouter bring-up with Mellanox CX5 NICs takes about 10 minutes, and an lcore crash is seen. This happens once, during initial installation.
CEM-13380 - AppFormix Flows does not show up for multihomed devices on the fabric.
CEM-12861 - Flow to VN mapping using Contrail Insights Flows does not work for any traffic involving BMS traffic end points.
CEM-11163 On Fortville X710 NICs, performance degradation is observed with TX and RX buffers as mbufs get exhausted.
CEM-10929 - When Contrail Insights queries the LLDP table from a device through SNMP and the SNMP calls time out, Contrail Insights marks the device as invalidConfiguration and notifies the user. After verifying that snmpwalk works and there are no network issues, click Edit and reconfigure the device from Settings > Network Devices so that Contrail Insights retries LLDP discovery and adds the device again.
CEM-9979 During upgrade of DPDK computes deployed with OOO Heat templates in an RHOSP environment, vRouter core dumps are observed. This is due to the sequence in which services are started during the upgrade and has no impact on cluster operation.
CEM-8701 While bringing up a BMS using the Life Cycle Management workflow, on faster servers the re-image sometimes does not go through and the instance is not moved from the ironic VN to the tenant VN. This happens when the PXE boot request from the BMS is sent before the routes have converged between the BMS port and the TFTP service running on the Contrail nodes. As a workaround, reboot the servers or configure the BIOS on the servers for a delayed boot.
CEM-8701, CEM-8149 - Onboarding of multiple BMS in parallel on SP-style fabric does not work.
CEM-4370 - Additional links cannot be appended to service templates used to create PNF service chaining. If additional links are needed, the service template must be deleted and re-created.
CEM-4358 - In Contrail fabric deployments, configuring QFX5110 as spine (CRB-Gateway) does not work.
CEM-8149 BMS LCM with fabric set with enterprise_style=True is not supported. By default, enterprise_style is set to False. Avoid using enterprise_style=True if the fabric object onboards the BMS LCM instance.
CEM-7874 User-defined alarms might not be generated when the third stunnel/Redis service instance goes down after the first two instances were restarted.
CEM-5788 Installation fails if FQDN is used to deploy Contrail Cluster through Contrail Command with OpenStack orchestration.
CEM-5141 The UI workflow does not work for deleting compute nodes. Instead, update instances.yaml with "ENABLE_DESTROY: True" and an empty "roles:" section, and run the following playbooks.
ansible-playbook -i inventory/ -e orchestrator=openstack --tags nova playbooks/install_openstack.yml
ansible-playbook -i inventory/ -e orchestrator=openstack playbooks/install_contrail.yml
global_configuration:
  ENABLE_DESTROY: True
  ...
  ...
instances:
  ...
  ...
  srvr5:
    provider: bms
    ip: 19x.xxx.x.55
    roles:
  ...
  ...
CEM-5043 A VNI update on an LR does not update the route table. As a workaround, delete the LogicalRouter and create a new LogicalRouter with the new VNI.
CEM-3959 BMS movement across TORs is not supported. To move a BMS across TORs, the whole VPG must be moved. That means if more than one BMS is associated with one VPG and one of those BMSs needs to be moved, the whole VPG must be deleted and reconfigured per the new association.
JCB-187287 High Availability provisioning of Kubernetes master is not supported.
JCB-184776 When the vRouter receives the head fragment of an ICMPv6 packet, the head fragment is immediately enqueued to the assembler. The flow is created as a hold flow and then trapped to the agent. If fragments corresponding to this head fragment are already in the assembler, or if new fragments arrive immediately after the head fragment, the assembler releases them to the flow module. Fragments get enqueued in the hold queue if the agent has not written the flow action by the time the assembler releases fragments to the flow module. A maximum of three fragments are enqueued in the hold queue at a time; the remaining fragments are dropped from the assembler to the flow module.
As a workaround, the head fragment is enqueued to the assembler only after the flow action is written by the agent. If the flow is already present in a non-hold state, the head fragment is immediately enqueued to the assembler.
JCB-177787 In DPDK vRouter use cases that require netns, such as SNAT and LBaaS, jumbo MTU cannot be set. The maximum MTU allowed is 1500.