Resolved Issues

This section lists the issues fixed in this release.

  • The /etc/passwd file is created during the first commit when a pristine jinstall image boots for the first time. If event-options is configured, the system tries to read the configuration from the available event scripts, which requires privileges obtained from the /etc/passwd file. This creates a circular dependency: the first commit cannot pass if the configuration includes event-options the first time a pristine image boots up, which is the case for an upgrade performed with the virsh create command. PR1220671
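
    As an illustration, a minimal event-options stanza of the kind that could trigger this condition on the first commit (the script filename is hypothetical; any configuration that references event scripts requires the /etc/passwd privileges described above):

      set event-options event-script file first-boot.slax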

  • In a vMX VM with Wind River Linux 6 RCPL 13, defect LIN6-10388 might be encountered, which can result in traffic disruption. The issue has been fixed in later RCPL versions. PR1351915

  • The default NIC adapter type changed from E1000 to VMXNET3. As a result, the default setting in the OVA file for the vFPC was not set to the correct NIC driver (VMXNET3), and the vFPC was loaded with the E1000 driver. PR1365337

  • While deploying a vMX instance without the virtual control plane (VCP) VM, the script still attempts to allocate vCPUs for the VCP. When there are not enough cores on the host, the script fails. PR1365921

  • On the MX150, an upgrade might fail because installation of the nfx-2-routing-data-plane-1.0-0.x86_64 RPM fails. This failure occurs when there is not enough space in the / filesystem on the host to install the RPM. PR1366324
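
    Available disk space can be reviewed from the Junos CLI before upgrading; show system storage is a standard operational command, though whether its output maps directly to the host / filesystem on the MX150 is an assumption:

      user@mx150> show system storage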

  • Due to an issue with the multicast function on 10-Gigabit Ethernet (xe) interfaces, Virtual Router Redundancy Protocol (VRRP) might not work properly if the network configuration includes two vMX VMs running VRRP on a LAN. In this case, both vMX VMs appear as the master (active) router. PR1371838
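
    For reference, a minimal VRRP configuration of the kind affected (the interface name, addresses, group number, and priority are illustrative assumptions; the second vMX would use its own address and a lower priority in the same group):

      set interfaces xe-0/0/0 unit 0 family inet address 192.0.2.1/24 vrrp-group 10 virtual-address 192.0.2.254
      set interfaces xe-0/0/0 unit 0 family inet address 192.0.2.1/24 vrrp-group 10 priority 200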

  • The vMX might experience QoS performance issues on an interface that has a low number of logical interfaces (IFLs). To address this, a new CLI option, maximum-l3-nodes, has been added to hierarchical-scheduler to allow you to configure the maximum number of level 3 scheduler nodes for a port. For core interfaces, this number is expected to remain small, which can enhance scheduler throughput because fewer scanning cycles are used. See New and Changed Features for details on class-of-service support for configuring a maximum number of level 3 scheduler nodes. PR1373999
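
    A minimal sketch of the new option (the interface name and node count are assumptions; the option itself is configured under hierarchical-scheduler as described above):

      set interfaces xe-0/0/0 hierarchical-scheduler maximum-l3-nodes 16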