Known Issues

This section lists known limitations in this release. Bug numbers are listed and can be researched in Launchpad at https://bugs.launchpad.net/.

Storage:

  • 1497047 In Contrail Release 2.20 and earlier, if a Cassandra node is offline for one minute or longer and then brought back online, it might corrupt the database.

    In Contrail Release 2.21 and later, a Cassandra node can be offline for up to three hours and then brought back online without corrupting the database.

    If the Cassandra node is offline for more than three hours, you need to perform the following procedure:

    • After the Cassandra node joins the Cassandra cluster, you must use the nodetool repair command.
    • If the Cassandra node is offline for more than ten days, it should not be brought back online. Instead, you need to remove the Cassandra node using the nodetool removenode command and the associated procedure. The procedure can be accessed at: http://docs.datastax.com/en/cassandra/1.2/cassandra/operations/ops_remove_node_t.html
    • After the procedure is complete, you can add the node back as a new node.
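
    A minimal sketch of these recovery steps follows; it assumes the commands are run on the appropriate nodes, and the host ID placeholder would come from the nodetool status output:

    # Node offline less than ten days: run repair on the rejoined node.
    nodetool repair

    # Node offline more than ten days: from a live node, find the dead
    # node's Host ID, remove it, then re-add it as a new node.
    nodetool status
    nodetool removenode <host-id-of-offline-node>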

Contrail Networking:

  • 1525368 Schema is stuck in the 'initializing' state after an upgrade from r2.1-55.
  • 1524582 With continuous TCP session setup/teardown over a long time, a few flows get stuck in the Hold state.
  • 1524535 Provisioning of Ubuntu 12.04 fails due to upgrade_kernel and facter.
  • 1514312 [2.21-Build 102] Cannot create a virtual-network from the Web UI after updating the project-name.
  • 1496606 You use the fab install_new_contrail and fab join_cluster commands to add a new control node to a cluster that is already provisioned.

    The fab join_cluster command succeeds only if the newly added control node is “up” in the rabbitmqctl cluster_status command output. Also, before purging an existing control node, verify that the control node is displayed in the rabbitmqctl cluster_status command output.

    For example:

    root@a12c4s2:/opt/contrail/utils# rabbitmqctl cluster_status
    Cluster status of node 'rabbit@a12c4s2-ctrl' ...
    [{nodes,[{disc,['rabbit@a12c3s3-ctrl','rabbit@a12c3s4-ctrl',
                    'rabbit@a12c4s2-ctrl']}]},
     {running_nodes,['rabbit@a12c3s4-ctrl','rabbit@a12c3s3-ctrl',
                     'rabbit@a12c4s2-ctrl']},
     {cluster_name,<<"rabbit@a12c3s3">>},
     {partitions,[]}]
    
    root@a12c4s2:/opt/contrail/utils# mysql -uroot -p$(cat /etc/contrail/mysql.token) -e "show status like 'wsrep%'"
    | wsrep_cert_index_size | 41 |
    | wsrep_causal_reads | 145146 |
    | wsrep_incoming_addresses | 5.5.5.5:3306,5.5.5.6:3306,5.5.5.4:3306 |
    | wsrep_cluster_conf_id | 60 |
    | wsrep_cluster_size | 3 |
    | wsrep_cluster_state_uuid | 3c0286 
    

    Verify that the hostname of the new control node is listed in the rabbitmqctl cluster_status command output and that its IP address is listed in the wsrep_incoming_addresses field.
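
    A quick scripted check of both conditions might look like the following sketch; the hostname and IP address are placeholders taken from the sample output above:

    # Confirm the new control node is in the RabbitMQ cluster.
    rabbitmqctl cluster_status | grep a12c4s2-ctrl

    # Confirm its IP address is in the Galera cluster membership.
    mysql -uroot -p$(cat /etc/contrail/mysql.token) \
        -e "show status like 'wsrep_incoming_addresses'" | grep 5.5.5.4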

  • 1496605 When adding a new control node using the fab install_new_contrail command, the command expects the new control node to be added to the end of each role definition in the testbed.py file.

    For example, in the following testbed.py file example, host2 is the newly added control node.

    # Role definition of the hosts.
    env.roledefs = {
        'all': [host3, host4, host5, host1, host2],
        'cfgm': [host3, host5, host1, host2],
        'openstack': [host3, host5, host1, host2],
        'control': [host3, host5, host1, host2],
        'compute': [host4],
        'collector': [host3, host5, host1, host2],
        'webui': [host3, host5, host1, host2],
        'database': [host3, host5, host1, host2],
        'build': [host_build],
    }
    

    This constraint might be removed in a future release.

  • 1491644 When bare metal servers are behind an MX Series router and MX redundancy is provisioned in the network, if one bare metal server pings another, the first server's ARP cache entry for the second server is poisoned with the vRouter compute node's MAC address. This leads to connectivity failure between the two bare metal servers.

    The cause is that when the ARP request from BMS1 is flooded to a compute node by the MX Series router, the vRouter looks up the bare metal server's source IP address in the inet (IPv4) route table. This lookup results in the subnet route pointing to the ECMP next hop of two MX Series routers, which makes the vRouter respond with the virtual host's MAC address to force the packets to Layer 3 processing, even though the ARP request is not meant for any VM on that compute node.

  • 1496609 For a control node to participate in high availability properly, all the control nodes must have a unique priority. When adding a new control node to an already provisioned high-availability-enabled cluster, the uniqueness of the priority across the control nodes is not automatic.

    You need to adjust the values to ensure uniqueness as follows:

    1. Stop the keepalived process using the service keepalived stop command.
    2. Edit the /etc/keepalived/keepalived.conf file on all the control nodes and modify the priority under the vrrp_instance INTERNAL* and vrrp_instance EXTERNAL* configuration sections, so that all the control nodes have unique values.
    3. Start the keepalived process using the service keepalived start command.
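
    The following is a minimal sketch of the relevant keepalived.conf fragment on one control node; the instance names and priority values are illustrative, and each control node would carry a different priority:

    vrrp_instance INTERNAL_1 {
        ...
        priority 101    # unique per control node, for example 101, 102, 103
        ...
    }
    vrrp_instance EXTERNAL_1 {
        ...
        priority 101
        ...
    }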
  • 1495697 When you add a new control node to an already provisioned cluster using the fab install_new_contrail command, the command might fail due to a timing issue. Even though the command reports failure, it actually does everything as expected. You can proceed with the fab join_cluster command as the next step for adding the new control node, as in the sketch below.
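
    A sketch of the sequence follows; the exact task arguments are an assumption that depends on your deployment, and the node address is a placeholder:

    # May report failure due to the timing issue; this can be ignored.
    fab install_new_contrail:new_ctrl=root@<new-node-address>

    # Proceed with joining the node to the cluster.
    fab join_cluster:root@<new-node-address>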
  • 1404846 In Juno, VPC VM launch fails because the VPC API is not supported with Juno. Support is planned for a subsequent release.
  • 1465744 A Contrail/MX interoperability issue occurs when a VM uses SNAT to reach a bare metal server's floating IP address. This happens only when the SNAT instance and the destination floating IP address are on the same compute node.
  • 1466777 The api-server and schema transformer initialization times need improvement in scaled setups. On highly scaled setups, it takes up to 40 minutes for the API server and schema transformer to converge.
  • 1466731 A QFX Series switch does not handle transient duplicate VxLAN IDs for two different VNs. If a VN is deleted and added quickly, the TOR switch may go into a bad state.
  • 1468685 On a CentOS 6.5 Icehouse single node setup, config processes are killed after a node reboot. A single node CentOS installation runs into an API server exception.
  • 1484600 When a device is moved from one QFX Series switch to another QFX Series switch, the MAC address is not learned on the switch for a period of up to 12 minutes.
  • 1486387 If you configure compute and config services on the same node, you must run the fab setup_nova_aggregate command after the node is rebooted; otherwise, setup_nova_aggregate is never executed. See the sketch below.
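
    A minimal sketch follows; invoking the task without arguments is an assumption, and the command is run from the node that drives the fab deployment:

    # Hypothetical invocation after the combined compute/config node reboots.
    fab setup_nova_aggregate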
  • 1493861 When clearing the setup used for inter-VN communication, the compute node might crash.
  • 1414850 Interfaces created for logical routers and other constructs that are not on vRouters do not get accounted for in the dashboard.
  • 1403348 If you attach and then detach a security group, the transparent firewall service interface does not have an internal security group.
  • 1447401 When multiple VMs are launched in a Docker cluster, they invariably end up on a single compute node.
  • 1454813 Setup of a vCenter fails if the same dv_port or dv_switch name is part of multiple data centers.
  • 1455944 When creating nova instances in Docker containers, the user-data script is not executed.
  • 1457854 If you try to create an analyzer VM with contrail_flavor_small configured, the VM is not created but multiple instances are respawned and all are in an error state.
  • 1458794 The DNS configuration in a Docker container is wrong. A Docker instance does not learn the DNS address provided by the vRouter.
  • 1460241 If you create twelve virtual routers attached to a single logical router and then clear the router, Neutron experiences an error.
  • 1461791 When servers in a cluster are reimaged with an ESX ISO image, only one server is successfully reimaged; all other servers in that cluster reimage in a loop.
  • 1463622 If you create multiple compute nodes and multiple virtual machines, return traffic from server to client converges on a single label. Eventually, all the flows converge on one VM on each compute node.
  • 1463786 If you create thousands of logical interfaces and thousands of virtual machine interfaces, deleting all the interfaces using the Web user interface might result in the Too many pending updates to RabbitMQ: 4096 error.
  • 1465372 If a bare metal server and a SNAT instance are attached to a public network and a packet is sent from the network namespace (netns) instance to the bare metal server, it gets a Layer 3 lookup rather than a bridge table lookup.
  • 1468420 If you create thousands of virtual machine interfaces and logical interfaces with a thousand virtual networks, and then push the configuration using the device manager, the configuration might get repeatedly added and deleted on the MX Series router.
  • 1468474 TOR Agent Switchover: BUM/ARP traffic loss. Currently, a control node does not implement the graceful restart feature, so MAC routes are immediately withdrawn on the TOR agent during switchover, leading to traffic loss.
  • 1468886 Sometimes it takes more than half an hour for cmon to bring up MySQL during node failure scenarios.
  • 1469296 When an MX Series router is providing NAT service for a bare metal server using floating IP addresses and the bare metal server belongs to overlapping subnets, their respective NAT configurations will collide in the NAT pool section of the config and get rejected.
  • 1469312 When HAProxy is stopped on a virtual IP node, one out of three glance requests fail.
  • 1480050 If you assign the same FIP address to two virtual machines, only the VM with an active VRRP address should get the FIP traffic.
  • 1489610 If two DNS servers are configured and one is down, the DNS request should only be sent to the server that is up.
  • 1492979 Broadcast routes are always programmed with the EVPN as the next hop, so even if there is no MX Series router to flood the traffic, it is still programmed in the composite next hop.

    The vRouter replicates the traffic for the EVPN next hop and eventually the traffic is discarded. This causes the drop statistics count to increase.

  • 1469341 The vCenter setup does not use the svc-monitor. The contrail-svc-monitor status needs to be removed from the contrail-status command output.
  • 1493687 Fragment packets with partial TCP headers get dropped, but the flow still gets created and the next fragment gets forwarded to the receiver.

    When a packet fragment has a full TCP header and the next fragment's offset is 1, the vRouter forwards this fragment.

    When the head fragment of a packet is received after three or more other fragments, it sometimes leads to fragment loss.

  • 1485754 When a virtual network is extended to a physical router, the Device Manager allocates an IP address for the IRB interface. If the virtual network to physical router association is broken, the Device Manager tries to free the allocated IP address. This call fails. As a result, the IP address that was previously allocated is no longer available in the free pool.

Modified: 2015-12-11