Known Behavior

This section lists known behavior, system maximums, and limitations in hardware and software in Juniper Networks Cloud CPE Solution Release 3.1.4.

  • If the Kubernetes minion node in the central or regional microservices VM goes down, the pods on the minion node are moved to the Kubernetes master node. When you bring the minion node back up, the pods do not automatically rebalance across the nodes.

    To rebalance the pods back to the Kubernetes minion node that was down, do the following:

    1. Check the status of the kube-proxy process on the minion node by executing the kubectl get pods --namespace=kube-system command.

      A sample output is shown below.

      root@host:~# kubectl get pods --namespace=kube-system
      NAME                                    READY     STATUS    RESTARTS   AGE
      etcd-empty-dir-cleanup-192.0.2.1        1/1       Running   1          1d
      kube-addon-manager-192.0.2.1            1/1       Running   1          1d
      kube-apiserver-192.0.2.1                1/1       Running   1          1d
      kube-controller-manager-192.0.2.1       1/1       Running   1          1d
      kube-dns-v11-lcs1x                      4/4       Running   4          1d
      kube-proxy-192.0.2.1                    1/1       Running   0          1d
      kube-proxy-192.0.2.2                    1/1       Unknown   0          1d
      kube-scheduler-192.0.2.1                1/1       Running   1          1d
      kubernetes-dashboard-1579006691-1fvmk   1/1       Running   1          1d

    2. If the status of the kube-proxy process on the Kubernetes minion node is Unknown, execute the kubectl delete pod kube-proxy-minion-IP-address --namespace=kube-system --grace-period=0 --force command, where minion-IP-address is the IP address of the minion node that was down.
    3. Verify that the status of the kube-proxy process is Running.
    4. Execute the command to rebalance the nodes:
      • If you are running a trial HA setup, execute the kubectl delete pods --all --grace-period=0 command on the Kubernetes master node.
      • If you are running a production HA setup, execute the kubectl delete pods --all --grace-period=0 command on the Kubernetes master node and the Kubernetes minion node that did not go down.
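
    For reference, the complete recovery sequence might look like the following sketch, which uses the example minion IP address 192.0.2.2 from the sample output above; substitute the addresses of your own nodes:

      # On the Kubernetes master node:
      kubectl get pods --namespace=kube-system     # kube-proxy pod on the minion shows Unknown
      kubectl delete pod kube-proxy-192.0.2.2 --namespace=kube-system --grace-period=0 --force
      kubectl get pods --namespace=kube-system     # confirm kube-proxy is Running again
      # Trial HA: run the next command on the master node only.
      # Production HA: run it on the master node and on the minion node that did not go down.
      kubectl delete pods --all --grace-period=0
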
  • We recommend that you do not delete existing LAN segments from a site because this might impact firewall and SD-WAN policy deployments. [CXU-13683]
  • For a centralized deployment, use the following procedure to check that the JSM Heat resource is available in Contrail OpenStack on the Contrail Controller node.

    Note: This procedure must be performed on all the Contrail Controller nodes in your CSO installation.

    1. Log in to the Contrail Controller node as root.
    2. To check whether the JSM Heat resource is available, execute the heat resource-type-list | grep JSM command.

      If the search returns the text OS::JSM::Get Flavor, the file is available in Contrail OpenStack.

    3. If the file is missing, do the following:
      1. Use Secure Copy Protocol (SCP) to copy the jsm_contrail_3.pyc file to the following directory:
        • For Heat V1 APIs, the /usr/lib/python2.7/dist-packages/contrail_heat/resources directory on the Contrail Controller node.
        • For Heat V2 APIs, the /usr/lib/python2.7/dist-packages/vnc_api/gen/heat/resources directory on the Contrail Controller node.

        Note: The jsm_contrail_3.pyc file is located in the /root/Contrail_Service_Orchestration_3.1.4/deployments/central/file_root/contrail_openstack/ directory on the VM or server on which you installed CSO.

      2. Rename the file to jsm.pyc in the Heat resource directory to which you copied the file.
      3. Restart the Heat services by executing the service heat-api restart && service heat-api-cfn restart && service heat-engine restart command.
      4. After the services restart successfully, verify that the JSM Heat resource is available as explained in Step 2. If it is not available, repeat Step 3.
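
    For reference, a condensed version of this check and repair on a Contrail Controller node might look like the following sketch. The Heat V1 path is shown, cso-host is a placeholder for the VM or server on which you installed CSO, and the scp command copies and renames the file in one step:

      heat resource-type-list | grep JSM     # expect: OS::JSM::Get Flavor
      scp root@cso-host:/root/Contrail_Service_Orchestration_3.1.4/deployments/central/file_root/contrail_openstack/jsm_contrail_3.pyc \
          /usr/lib/python2.7/dist-packages/contrail_heat/resources/jsm.pyc
      service heat-api restart && service heat-api-cfn restart && service heat-engine restart
      heat resource-type-list | grep JSM     # verify that the resource is now listed
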
  • When a tenant object is created through Administration Portal or the API for a centralized deployment, Contrail OpenStack adds a default security group for the new tenant. This default security group denies inbound traffic, so you must manually update the security group in Contrail OpenStack to allow ingress traffic from different networks; otherwise, Contrail OpenStack might drop traffic.
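
    As an illustration only, an ingress rule could be added to a tenant's default security group by using the neutron CLI on the Contrail Controller node. The exact rule set depends on your security requirements; the subnet 198.51.100.0/24 and the tenant ID are placeholders:

      # Allow IPv4 ingress traffic from a specific network (placeholder values).
      neutron security-group-rule-create default --tenant-id <tenant-UUID> \
        --direction ingress --ethertype IPv4 --remote-ip-prefix 198.51.100.0/24
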
  • Contrail Service Orchestration does not offer a single RPC to get the device identifier for a specific site. You can use multiple API calls or the license installation tool to obtain the device identifier for a specific site.
  • You can use Administration Portal to upload licenses to Contrail Service Orchestration; however, you cannot use Administration Portal to install licenses on physical or virtual devices that Contrail Service Orchestration manages. You must use the APIs or the license installation tool to install licenses on devices.
  • Contrail Service Orchestration uses RSA key-based authentication when establishing an SSH connection to a managed CPE device. The authentication process requires that the device has a configured root password, and you can use Administration Portal to specify the root password in the device template.

    To specify a root password for the device:

    1. Log in to Administration Portal.
    2. Select Resources > Device Templates.
    3. Select the device template and click Edit.
    4. Specify the encrypted value for the root password in the ENC_ROOT_PASSWORD field.
    5. Click Save.
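
    The ENC_ROOT_PASSWORD field takes an encrypted (hashed) value, not the plain-text password. As a sketch, assuming the field accepts a standard SHA-512 crypt hash (the format that Junos OS accepts for an encrypted root password), you could generate a value as follows; MyRootPassword is a placeholder:

      # Generate a $6$ (SHA-512 crypt) hash and paste the output into the
      # ENC_ROOT_PASSWORD field. Assumes a Junos-compatible crypt hash is expected.
      python3 -c "import crypt; print(crypt.crypt('MyRootPassword', crypt.mksalt(crypt.METHOD_SHA512)))"
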
  • You can use the logs on an NFX250 device to review the status of the device’s activation.
  • In Cloud CPE Solution Release 3.1.4, intrusion prevention system (IPS) is not supported. Therefore, in the IPS report, the attack name from the IPS signatures is displayed as UNKNOWN.
  • In Cloud CPE Solution Release 3.1.4, the virtual machine (VM) on which the virtual route reflector (VRR) is installed supports only one management interface.
  • In Cloud CPE Solution Release 3.1.4, high availability for ArangoDB is not supported. Therefore, ensure that the central infrastructure VM, where ArangoDB is running, is not brought down or does not fail. If the VM is down, bring it up immediately for CSO to be operational.
  • In Cloud CPE Solution Release 3.1.4, when you try to deploy a LAN segment on an SRX Series spoke device, the CSO GUI allows you to select more than one port for a LAN segment. However, for SRX Series devices, only one port for a LAN segment can be deployed; multiple ports in a LAN segment can be deployed only on NFX Series devices.
  • On SRX Series devices, traffic for any of the following combinations does not flow, because the reverse route for traffic terminating through MPLS cannot be identified:
    • Site to site
    • Site to department or vice versa
    • Department to department

    To enable traffic, you must add a firewall rule permitting traffic from the corresponding department's zone (where the traffic was supposed to terminate) to the Trust zone on the destination site. However, we do not recommend doing this because the rule can conflict with existing firewall intents.
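
    If you nevertheless add such a rule, it would resemble the following Junos OS configuration on the destination site's device, where dept-finance is a placeholder for the department's zone name and trust is the destination Trust zone:

      set security policies from-zone dept-finance to-zone trust policy allow-dept-ingress match source-address any destination-address any application any
      set security policies from-zone dept-finance to-zone trust policy allow-dept-ingress then permit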

  • In Cloud CPE Solution Release 3.1.4, SSL proxy is not supported.
  • An SD-WAN policy deployment is successful even if there is no matching WAN link meeting the SLA. This is expected behavior and is done so that when a WAN link matching the SLA becomes available, traffic is routed through that link.
  • Tenant Administrator users cannot delete sites.
  • On a site with an NFX Series device, if you deploy a LAN segment without the VLAN ID specified, CSO uses an internal VLAN ID meant for internal operations and this VLAN ID is displayed in the UI. There is no impact on the functionality.
  • When you activate a CPE device with WAN interfaces configured for DHCP:
    • Ensure that all the WAN interfaces configured for DHCP have the IP address allocated from the DHCP server.
    • When multiple WAN links are configured for DHCP, in some cases all the DHCP servers advertise a default route to the CPE device. Traffic can then be routed through an undesired WAN interface, which can prevent the GRE and IPsec tunnels from being operational.

      To avoid this scenario, configure a static route through each WAN interface to reach the tunnel endpoint through the desired WAN interface.
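
      For example, assuming the tunnel endpoint is 203.0.113.10 and the gateway on the desired WAN interface is 198.51.100.1 (both placeholder addresses), the static route on a Junos OS device would be:

        set routing-options static route 203.0.113.10/32 next-hop 198.51.100.1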

  • When you create LAN segments, the LAN segment table does not display the DHCP settings even though the changes are saved successfully.
  • When you trigger the ZTP workflow on an NFX Series device, we recommend that you use the activation code for the device and initiate the activation by using the Activate Device link in the CSO GUI.
  • On the SLA Performance page (Monitor > Applications > SLA Performance), the scatter plot displays the SLA name as UNKNOWN for applications for which the SLA is not violated.
  • On an NFX Series device, if you try to install the signature database before installing the application identification license, the signature database installation fails.

    Ensure that you first install the application identification license and then install the signature database.

    To install the application identification license:

    Note: Ensure that you have the license ready before you begin this procedure.

    1. SSH to the vSRX gateway router running on the NFX Series device and log in as root.
    2. Access the Junos OS CLI and enter the operational mode.
    3. Execute the show system license command to view the existing license so that you can verify (in a subsequent step) that the license is added.

      A sample output is as follows:

      root@host> show system license
      License usage:
                                       Licenses     Licenses    Licenses    Expiry
        Feature name                       used    installed      needed
        Virtual Appliance                     1            1           0    55 days
        remote-access-ipsec-vpn-client        0            2           0    permanent
      
      Licenses installed:
        License identifier: XXXXXXXXXX
        License version: 4
        Software Serial Number: XXXXXXXX
        Customer ID: XXXXXXXXXXXXXXXX
        Features:
          Virtual Appliance - Virtual Appliance
            count-down, Original validity: 60 days
      
    4. Execute the request system license add terminal command.
    5. Copy the license, paste it into the terminal, and press Ctrl+D.

      If the license is added successfully, a confirmation message is displayed as shown in the following sample output:

      root@host> request system license add terminal
      [Type ^D at a new line to end input,
      enter blank line between each license key]
      add license complete (no errors)
    6. Execute the show system license command and compare the output with the one obtained in step 3 to verify that the license is added.

      A sample output is as follows:

      root@host> show system license
      License usage:
                                       Licenses     Licenses    Licenses    Expiry
        Feature name                       used    installed      needed
        Virtual Appliance                     1            1           0    55 days
        remote-access-ipsec-vpn-client        0            2           0    permanent
      
      Licenses installed:
        License identifier: XXXXXXXXXX
        License version: 4
        Software Serial Number: XXXXXXXX
        Customer ID: XXXXXXXXXXXXXXXX
        Features:
          Virtual Appliance - Virtual Appliance
            count-down, Original validity: 60 days
      
        License identifier: YYYYYYYYYYY
        License version: 4
        Software Serial Number: YYYYYYYYYYYYYY
        Customer ID: YYYYYYYYYYY
        Features:
          appid-sig        - APPID Signature
            date-based, 2016-04-05 00:00:00 UTC - 2017-04-06 00:00:00 UTC
          idp-sig          - IDP Signature
            date-based, 2016-04-05 00:00:00 UTC - 2017-04-06 00:00:00 UTC
      
    7. Exit the Junos OS CLI and log out of vSRX.
