Resolved Issues

This section provides information about issues that we resolved between releases 22.4 and 22.4.1.

Resolved Issues in Juniper Cloud-Native Router Release 22.4.1

  • JCNR-3416: CRPD logs are getting generated in /var/log/ and there is no option provided in the JCNR helm charts to change the default log path—cRPD logs were written to the /var/log/ folder, and the Juniper Cloud-Native Router Helm charts provided no option to change this default log path. This issue is now resolved. A new log_path key is provided in both the values.yaml and values_L3.yaml files (see the sketch following this list).

  • JCNR-3436: corefiles not saved on /var/crash on host—Core files were not saved to the /var/crash folder on the host by default. This issue is now resolved. A new coreFilePath key is provided in both the values.yaml and values_L3.yaml files (see the sketch following this list).

  • JCNR-3162: Inconsistencies observed with pods using JCNR kernel interfaces after the node (server) was rebooted—After a node (server) reboot, pods using JCNR kernel interfaces entered an error state if they attempted to transition to the Running state while the cRPD container was still initializing. This issue is now resolved.

  • JCNR-3269: JCNR Interface is created without Vlan-ID in vrouter-master POD—During pod deletion testing, the JCNR interface was created without the proper VLAN ID, causing packets to be dropped at the interface level. This issue is now resolved.

  • JCNR-3373: JCNR is dropping multicast packets (DHCPv6 solicit request) coming from the radio unit—During reboot testing, the Juniper Cloud-Native Router dropped DHCPv6 solicit requests coming from the radio unit. This issue is now resolved.

  • JCNR-3420: post restarting docker services, DU pods are not able to reach the gateway—After the Docker service was restarted, all pods restarted and recovered. However, after recovery, the gateway was not reachable from inside the DU pods. This issue is now resolved.
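
For reference, the new configuration keys introduced by JCNR-3416 and JCNR-3436 are set in the Helm values files. The following is a minimal sketch only; the key placement and the example paths are assumptions, so confirm the exact structure and default values against the values.yaml and values_L3.yaml files shipped with release 22.4.1:

    # Illustrative excerpt from values.yaml / values_L3.yaml (placement and paths are assumptions)
    log_path: "/var/log/jcnr/"        # JCNR-3416: directory where cRPD logs are written
    coreFilePath: "/var/crash/"       # JCNR-3436: directory where core files are saved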