Known Issues
This section lists known issues for the Juniper Networks Cloud CPE Solution Release 3.1.1.
- When a session transfers more than 2 GB of data, the throughput is incorrectly reported in terabits per second (Tbps) in the CSO GUI.
Workaround: Transfer less than 2 GB over a single session to ensure that the throughput is reported correctly. [CXU-11556]
- On an NFX Series device, application tracking is enabled for department security zones only when an SD-WAN APBR policy is pushed. When only a firewall policy is deployed without SD-WAN, no Application Visibility data is displayed for the NFX Series device.
Workaround: Deploy an SD-WAN APBR policy to enable application visibility on the NFX Series device. [CXU-12154]
- If the oam-and-data interface goes down, the IBGP session is lost and traffic stops flowing. Communication with CSO is lost and syslog messages are not sent, even though other WAN interfaces are up and running.
Workaround: None. [CXU-12346]
- By default, CSO uses Heat V2 APIs to bring up network services on Contrail. However, because of a bug in CSO, the configured policy does not exchange routes and therefore traffic does not flow through the service chain.
Workaround: Use Heat V1 API for passing traffic. Contact the CSO Customer Support Center for help in configuring CSO to use Heat V1 APIs. [CXU-12863]
- For a site, when DHCP is configured
on the WAN interface and the LAN segment, the device activation fails.
Workaround: None. [CXU-13432]
- If you modify the default configuration of an SRX340 device to obtain the IP address by using DHCP, the device activation fails.
Workaround: This issue occurs because DHCP server bindings exist on the device. Clear the DHCP bindings before loading the configuration, as shown in the example following this item. [CXU-13446]
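A minimal sketch of clearing the bindings from the Junos CLI, assuming the device runs the standard jdhcpd-based local DHCP server (verify command support on your Junos release):
user@host> show dhcp server binding
user@host> clear dhcp server binding all
After the bindings are cleared, load the modified configuration and commit.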
- During the failover of a link between an NFX Series
spoke device and a vSRX hub device, the BGP session goes down even
though the virtual route reflector (VRR) is reachable.
Workaround: None. [CXU-13517]
- On the Site Management page, the overlay links information for the cloud hub is incorrect.
Workaround: The correct overlay information is displayed in the WAN tab of the cloud hub. [CXU-13579]
- If you configure a static SD-WAN policy and a
link goes down, it may take approximately three minutes for the gr-0/0/0
interface to be removed from the MPLS/Internet routing table.
Workaround: None. [CXU-13528]
- If one of the central infrastructure virtual machines that does not run ArangoDB (central-infravm2 or central-infravm3) goes down, you cannot log in to the CSO GUI. This is because automatic failover or restart is not supported in Release 3.1.1.
Workaround: Restart the virtual machines that are down. After the virtual machines come up, restart the IAMSVC-NOAUTH docker, as shown in the example following this item. You can then log in to the CSO GUI. [CXU-13541]
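A minimal sketch of restarting the docker container, assuming the container name contains the string iamsvc (the exact name on your setup may differ; confirm it with docker ps):
root@host:~/# docker ps | grep -i iamsvc
root@host:~/# docker restart container-name
where container-name is the name (or ID) of the IAMSVC-NOAUTH container returned by the first command.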
- If you create an SD-WAN and firewall policy with
the source as a department and the department is not associated with
a site or a LAN segment, the job created to apply the SD-WAN and firewall
policies after ZTP fails.
Workaround: Associate the department with a site or a LAN segment before performing ZTP. [CXU-13542]
- For sites with device-initiated connections, by
default, all site traffic is source NATted at the hub. You cannot
apply a different source NAT rule to the hub because the default rule
overrides any user-configured source NAT rule.
Workaround: None. [CXU-13558]
- If you have a tenant with more than one site and
deploy a firewall policy to a single site, the policy is deployed
only to that site. However, jobs are created to push a dummy firewall
policy to other sites, which causes a performance issue on setups
with a large number of devices.
Workaround: None. [CXU-13562]
- If you configure DHCP on an NFX Series or an SRX Series spoke device, in some cases, the spoke might fail to establish a connection to CSO and might fail to send syslog messages to the Contrail Analytics node.
Workaround: None. [CXU-13567]
- BGP routes are withdrawn 10 to 15 minutes after a BGP session goes down, even though the hold-timer is set to 65,535. When this occurs, traffic flow is impacted.
Workaround:
- Do not bring down the BGP session as part of SLA violation tests.
- Do not induce packet loss over the OAM link when testing SLA violations, because doing so impacts BGP sessions.
[PR 1312702]
- On SRX 3xx Series devices, ZTP fails when you commit
the stage-1 configuration on the device.
Workaround: Manually remove the DHCP server configuration from the default SRX Series configuration, and then clear the DHCP server bindings before triggering the ZTP workflow, as shown in the example following this item. [CXU-13446]
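A minimal sketch of the cleanup from the Junos CLI, assuming the conflicting configuration is the factory-default dhcp-local-server hierarchy (adjust to match the default configuration on your device):
user@host# delete system services dhcp-local-server
user@host# commit
user@host> clear dhcp server binding all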
- On SRX 3xx Series devices, phone home activation might fail because of a conflict with the default configuration on the device.
Workaround: Copy the stage-1 configuration manually to the device and then trigger the phone home activation workflow. [PR 1312703]
- Sometimes the iBGP session with the VRR is broken temporarily and comes back up automatically. During that period, there may be issues related to link switching.
Workaround: None. [PR 1312863]
- When you onboard an NFX Series device by using the default configuration, in some cases there is a connectivity issue between Juniper Device Manager (JDM) and Junos Control Plane (JCP).
Workaround: Execute the workaround provided by Juniper Networks’ Support team. [CXU-12396]
- In some cases, one or more security management
microservices take more than 20 minutes to come up and are stuck in
the same state.
Workaround: Do the following:
Note: For the secmgt-appvisibility, secmgt-ecm, secmgt-seci, and secmgt-sm microservices, execute all the steps below; for the secmgt-jingest microservice, execute only the steps starting from Step 6.
- Log in to the central infrastructure virtual machine (VM) as root.
- Start the ZooKeeper CLI by executing the /usr/share/zookeeper/bin/zkCli.sh command.
- For each security management microservice that does not come up, execute the delete /secmgt/key-name command, where key-name is the name of the key for the corresponding microservice. For example, for the secmgt-sm microservice, the key name is sm_initialized.
- Exit the ZooKeeper CLI by executing the quit command.
- Log out of the central infrastructure VM.
- Log in to the central microservice VM as root.
- At the shell prompt, for each microservice that does not come up, run the kubectl get pods | grep microservice-name command to find out the name of the pod for the microservice, where microservice-name is the name of microservice. For example, the app visibility microservice is called secmgt-appvisibility.
- Restart each microservice pod by executing the kubectl delete pod pod-name command, where pod-name is the name of the microservice pod; for example, csp.secmgt-appvisibility-2435668850-kc7l2.
- Wait until all pods are in the 1/1 running state.
- Log out of the central microservice VM.
[CXU-13726]
- If you deploy a NAT policy with one or more rules
and then delete the policy without first deleting the rules, the configuration
on the device is not cleared.
Workaround: To delete a NAT policy, first delete all the rules associated with the policy and deploy the policy. After the deployment is successful, delete the NAT policy by using the CSO GUI. [CXU-13879]
- Reports are not generated in HA deployment scenarios. [CXU-14039]
- If you try to activate an SRX300 CPE
device using the redirect server, the phone home activation does not
start.
Workaround: Do one of the following:
- Initiate the activation by using the CSO GUI.
- Use the phone home activation without using the redirect
server as follows:
- After the factory default configuration is applied, ensure that the device can reach the regional Virtual IP (VIP) address, which is configured on the regional load balancer VMs.
- Copy the certificate file /etc/pki/tls/certs/ssl_cert.crt from the regional load balancer VM to the /root/ folder on the device.
- Specify the following configuration on the device (see the example following this item):
set system root-authentication
set system phone-home ca-certificate-file /root/ssl_cert.crt
set system phone-home server https://regional-vip
where regional-vip is the regional VIP address.
[CXU-14162]
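A sketch of the manual steps, using placeholder values: 203.0.113.10 for the regional VIP address and 198.51.100.5 for the device management IP address, and assuming root SSH access to the device is available (substitute your own values).
Copy the certificate from the regional load balancer VM to the device:
root@regional-lb:~/# scp /etc/pki/tls/certs/ssl_cert.crt root@198.51.100.5:/root/
Then configure and commit on the device:
user@host# set system phone-home ca-certificate-file /root/ssl_cert.crt
user@host# set system phone-home server https://203.0.113.10
user@host# commit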
- Link switching does not occur even though the throughput threshold configured in the SLA profile is crossed, because incorrect interface statistics are reported for the GRE interface.
Workaround: None. [CXU-14127]
- The reverse path taken by traffic on the hub is
different from the forward path.
Workaround: None. [CXU-14330]
- The device profile srx-deployment-option-1 assigns OAM
traffic to the fxp0 interface, which is not available on the SRX Series
Services Gateway.
Workaround: Edit the stage 1 configuration for the CPE device in Customer Portal to use an interface other than fxp0 for OAM traffic. [CXU-3779]
- The traffic rate value does not change when you monitor
a service on an SRX Series Services Gateway in Administration Portal.
Workaround: None.
[CXU-3822]
- The NFX250 device does not receive communications to block
unicast reverse path forwarding (uRPF) because two interfaces on the
NFX250 device communicate separately with one interface on the regional
microservices server.
Workaround: Disable the uRPF check in JDM on all interfaces for each NFX Series device.
[CXU-3854]
- You cannot edit the settings for a customer in Administration
Portal.
Workaround: Use Administration Portal to import a JSON file that contains the correct data for the customer. [CXU-4538]
- You can only view one network service at a time on a CPE
device in Customer Portal.
[CXU-4551]
- During activation, the NFX250 device reboots and requests that you enter the activation code twice, because there is no default value for the HugePages count in the Linux kernel on the device.
Workaround: Before you activate the NFX250 device, specify the HugePages count on the device and reboot it.
[CXU-5601, PR1254219]
- After you install CSO, the username and password that
CSO uses to access Contrail Analytics might not match the corresponding
username and password in Contrail Analytics.
Workaround: Complete the following actions:
- Log in to the CSO central infrastructure VM as root.
- View the username that CSO uses to access Contrail Analytics.
root@host:~/# etcdctl get /csp/infra/contrailanalytics/queryuser
<username>
<username> is the actual value that the query returns.
- View the password that CSO uses to access Contrail Analytics.
root@host:~/# etcdctl get /csp/infra/contrailanalytics/querypassword
<password>
<password> is the actual value that the query returns.
- Log in to the CSO regional infrastructure VM as root.
- Repeat Steps 2 and 3 to verify the username and password on the regional host.
If the username and password on both the central and regional infrastructure VMs match the values configured in Contrail Analytics, you do not need to take further action. The default username configured in Contrail Analytics is admin and the default password is contrail123.
If the username and password on the central or regional infrastructure VMs do not match the values in Contrail Analytics, update them as follows:
- Log in to the appropriate CSO infrastructure VM as root.
- Update the username and password with the values configured
in Contrail Analytics.
root@host:~/# etcdctl set /csp/infra/contrailanalytics/queryuser <contrail-analytics-username>
root@host:~/# etcdctl set /csp/infra/contrailanalytics/querypassword <contrail-analytics-password>
<contrail-analytics-username> is the actual username, and <contrail-analytics-password> is the actual password.
[CXU-5873]
- The configuration for a CPE device might not be removed
when you deactivate the device in Administration Portal.
Workaround: To deactivate the CPE device, first delete the configuration from the CPE device with Customer Portal, and then deactivate the device with Administration Portal. [CXU-6059]
- You cannot edit the Deployment Type (vCPE-Only or uCPE-Only)
in a request that you create with Network Service Designer.
Workaround: Create a new request. [CXU-6474]
- Performance metrics for the NFX Series device are collected
through the HTTP interface.
Workaround: None. [CXU-8710]
- In the detailed view of a site on the Sites page (Sites > Site Management), the Overlay Links tab
displays only GRE links and not GRE over IPsec links.
Workaround: None. [CXU-10170]
- In some cases, when multiple system log messages (syslogs)
or queries are being processed, the Contrail Analytics Node (CAN)
crashes (Docker restarts).
Workaround: Do the following:
- Log in to the CAN as root.
- Restart the analytics and controller Docker containers (a sample restart sequence follows this procedure).
- Log in to the analytics Docker container by using the docker exec -it analytics-docker-id bash command, where analytics-docker-id is the ID of the analytics Docker container.
- Verify that the following entries are present in the /etc/contrail/contrail-collector.conf file:
[STRUCTURED_SYSLOG_COLLECTOR]
# TCP & UDP port to listen on for receiving structured syslog messages
port=3514
- Verify that the following entry is present in the /etc/contrail/contrail-analytics-api.conf file:
aaa_mode=no-auth
- If either of the entries is not present, restart the services
by executing the following commands:
service contrail-collector restart
service contrail-analytics-api restart
- Log in to the controller Docker container by using the docker exec -it controller_docker_id bash command, where controller_docker_id is the ID of the controller Docker container.
- Verify that the following entry is present in the /usr/lib/python2.7/dist-packages/vnc_cfg_api_server/vnc_cfg_api_server.py file (under def __init__(self, args_str=None)):
self.aaa_mode = 'no-auth'
[CXU-10838]
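For the step that restarts the analytics and controller Docker containers, a sample sequence is shown below; analytics-docker-id and controller-docker-id are placeholders and should be taken from the docker ps output on your CAN:
root@host:~/# docker ps
root@host:~/# docker restart analytics-docker-id
root@host:~/# docker restart controller-docker-id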
- On the NAT Rules page, if you try to search or use the
column filters for departments named Internet or Corporate
Network, the search does not work.
Workaround: None. [CXU-10406]
- Traffic not classified by SD-WAN policies follows the
default routing behavior.
Workaround: Configure SD-WAN policies for all the traffic originating from the LAN. [CXU-10716]
- The Activate Device link is enabled even though
activation is in progress.
Workaround: After clicking the Activate Device link, wait for the activation process to complete; the Activate Device link is disabled after activation completes. [CXU-10760]
- In some cases, an operation might fail with the Authentication
required error message because of an expired token.
Workaround: Retry the operation. [CXU-10809]
- If one of the CSO infrastructure nodes (virtual machines)
fails (shuts down or restarts), the topology service repeatedly sends
out the error message “Failed to consume message
from queue: 'NoneType' object has no attribute 'itervalues'” to the logs.
Workaround: After the infrastructure node fails over to the backup node, wait for ten minutes before using any workflows. [CXU-11004]
- If a user who belongs to more than one tenant is deleted,
the user is deleted from all tenants.
Workaround: None. [CXU-11201]
- On the Site-Name page (Sites
> Site-Name), when you click the Device link (in the Connectivity & Devices box
on the Overview tab), you are navigated to the Devices
page (Resources > Devices) where all available devices
are displayed instead of only the device that you selected.
Workaround: None. [CXU-11339]
- If you modify a site group that is not used in any policy,
a GUI notification incorrectly indicates that policies need to be
deployed.
Workaround: Deploy the indicated policies on the device. However, because there are no changes to be deployed, the configuration is not deployed. [CXU-11395]
- If an infrastructure node (virtual machine) goes down,
the backup node takes over the tasks handled by the primary node.
However, after the node that was down recovers, it does not join the
cluster and is stuck in slave mode.
Workaround: Ensure that both nodes are up before you perform the following tasks:
- Log in to the infrastructure node as root.
- Open a shell prompt and access the redis CLI by executing the redis-cli -h node-IP -p 6379 -c command, where node-IP is the IP address of the infrastructure node that went down (a sample session follows this procedure).
- At the redis CLI prompt, execute the CLUSTER FAILOVER command.
- Execute the INFO command.
- In the output, under replication, ensure that the node is displayed as Master.
- Exit the redis CLI by executing the QUIT command.
- Log out of the infrastructure node.
[CXU-11711]
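A sample session for the preceding procedure, using 192.0.2.21 as a placeholder for the IP address of the node that went down (INFO replication limits the output to the replication section):
root@host:~/# redis-cli -h 192.0.2.21 -p 6379 -c
192.0.2.21:6379> CLUSTER FAILOVER
OK
192.0.2.21:6379> INFO replication
# Replication
role:master
...
192.0.2.21:6379> QUIT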
- If you create a LAN segment with the name LAN0, LAN1,
or LAN2, the deployment of the LAN segment fails.
Workaround: Do not use the names LAN0, LAN1, or LAN2 when you create a LAN segment. [CXU-11743]
- In the device template, if the ZTP_ENABLED and ACTIVATION_CODE_ENABLED flags are set to true, you cannot proceed
with device activation.
Workaround: Set the ZTP_ENABLED flag to true and the ACTIVATION_CODE_ENABLED flag to false before proceeding with device activation. [CXU-11794]
- When you remove a cloud hub from a tenant, the corresponding
router is removed from the All Tenants scope.
Workaround: None. [CXU-11796]
- When you upgrade the gateway router (GWR) by using the
CSO GUI, after the upgrade completes and the gateway router reboots,
the gateway router configuration reverts to the base configuration
and loses the IPsec configuration added during Zero Touch Provisioning
(ZTP).
Workaround: Before you upgrade the gateway router by using the CSO GUI, ensure that you do the following:
- Log in to the Juniper Device Manager (JDM) CLI of the NFX Series device.
- Execute the virsh list command to obtain the name of the gateway router (GWR_NAME).
- Execute the request virtual-network-functions GWR_NAME restart command, where GWR_NAME is the name of the gateway router obtained in the preceding step.
- Wait a few minutes for the gateway router to come back up.
- Log out of the JDM CLI.
- Proceed with the upgrade of the gateway router by using the CSO GUI.
[CXU-11823]
- On rare occasions, the Logspout microservice causes the docker daemon to consume excessive CPU on the microservices virtual machine.
Workaround: Restart the Logspout microservice by doing the following:
- Log in to the central microservices virtual machine as root.
- At the shell prompt, run the kubectl get pod command to find out the name of the Logspout pod.
- Restart the pod by executing the kubectl delete pod pod-name command, where pod-name is the name of the Logspout pod.
[CXU-11863]
- If the central infrastructure node that created the SAML 2.0 session goes down and you log out of the CSO GUI, the SAML 2.0 session logout fails.
Workaround:
- Reload the CSO login page.
- Enter the username and press the Tab key on your keyboard or click the mouse outside the username field.
You are automatically logged in to the CSO GUI.
- Click the Logout link in the banner to log
out.
You are logged out of the CSO GUI and the SAML 2.0 session.
[CXU-11867]
- When the deployment of a LAN segment on a device fails,
the device status is changed from PROVISIONED to PROVISION_FAILED. However, when you redeploy the LAN segment and the deployment is successful, the device status is not changed to PROVISIONED. Therefore, when you attempt to deploy an SD-WAN or a firewall policy
on the device, the deployment fails with the error message "[get_policy_info_list:
method execution failed]Site/ Device information not found".
Workaround: None. [CXU-11874]
- If you trigger a device activation on the Activate Device
page, the status of the activation is displayed based on the progress
of the activation steps completed. However, the device activation
process takes between 20 and 30 minutes, and if you click OK to close the Activate Device page, you cannot go back to the Activate
Device page to find out the status.
Workaround:
- Wait for 20 to 30 minutes after you trigger the activation.
- On the Sites page, hover over the device icon to see the
status:
- If the device status is PROVISIONED, it means that the activation was successful.
- If the device status is EXPECTED or PROVISION_FAILED, then the device activation has failed.
Contact your service provider for further assistance.
[CXU-11878]
- When a tunnel goes down, the event generated displays
different information for the NFX Series and SRX Series devices:
- When the GRE over IPsec tunnel goes down:
- The event generated for the vSRX device (running on the NFX Series device) has the description ['Tunnel-id ID is inactive'].
- The event generated for the SRX Series device has the description GRE over IPSEC is Down.
- When the GRE-only tunnel goes down:
- The event generated for the vSRX device (running on the NFX Series device) has the description tunnel-oam-down.
- The event generated for the SRX device has the description GRE tunnel down.
Workaround: None. [CXU-11895]
- If you try to delete one or more LAN segments, the confirmation
dialog box does not display the list of LAN segments selected for
deletion. However, when you click OK to confirm the deletion,
the LAN segments are deleted successfully.
Workaround: None. [CXU-11896]
- If the role of an existing user is changed from MSP Operator
to MSP Administrator and that user tries to switch the tenant by using
the scope switcher in the banner, the tenant switching fails.
Workaround: Delete the existing user and add an MSP user with the MSP Administrator role. The new user will be able to perform the tenant switch. [CXU-11898]
- In Cloud CPE Solution Release 3.1.1, editing a site is not
supported. When you try to edit a site, the message "unable to
retrieve the router info" is displayed.
Workaround: Delete the site and add the site again with the modified settings. [CXU-11912]
- If you edit an existing LAN segment that was previously
added during site creation, the Department field is changed
to Default.
Workaround: When you edit a LAN segment, ensure that you select the correct department before saving your changes. [CXU-11914]
- If you apply an APBR policy on a vSRX device or an SRX
Series device, in some cases, the APBR rule is not active on the device.
Workaround:
- Log in to the vSRX or SRX Series device in configuration mode.
- Deactivate the APBR policy by executing the delete apply-groups srx-gwr-apbr-policy-config command.
- Commit the configuration by executing the commit command.
- Activate the APBR policy by executing the set apply-groups srx-gwr-apbr-policy-config command.
- Commit the configuration by executing the commit command.
- Log out of the device.
[CXU-11920]
- In some cases, traffic fails to flow over an overlay tunnel.
Workaround: Reboot the vSRX or SRX Series device to ensure that the traffic flows normally. [CXU-11921]
- When you log in to CSO as a Tenant Administrator user,
the Configure Site workflow is not available.
Workaround: Log in as an MSP Administrator user and switch to the tenant for which you want to configure the site. The Configure Site workflow becomes available. [CXU-11922]
- ZTP activation of an SRX Series device by using the phone
home client (PHC) fails.
Workaround: If the activation fails with the error message Ztp-activation finished incomplete for ems-device and no other error messages are present in the activation logs, then the MSP Administrator (in the All Tenants scope) can retry the activation job by navigating to the Jobs page (Monitor > Jobs), selecting the failed job, and clicking the Retry Job button. [CXU-11926]
- On the Monitor > Overview page:
- The number of hub devices is reported as zero even though
a cloud hub exists.
Workaround: The expanded view displays the correct data.
- When you collapse and expand the map view, the number
of links reported is incorrect.
Workaround: Refresh the page to display the correct data.
[CXU-11931]
- For GRE-over-IPsec overlays, in some cases, the event-options configuration fails to re-enable the gr-0/0/0 interface. As a result,
the traffic between the spoke and hub overlay stops flowing even though
the overlay is up.
Workaround: Do the following:
- Find out which links are affected
by checking the status of the real-time performance monitoring (RPM)
probe in the UI:
- On the Site-Name page (Sites > Site Management > Site-Name), select the WAN tab.
- On the link between the spoke and the hub, expand the WAN links by clicking the number of WAN links displayed.
- For each WAN link, select the link and check the packet
loss value displayed on the right side.
Links for which the packet loss is 100% are down.
- Find out which gr-0/0/0 interfaces
are down on the spoke device:
- Log in to the spoke device as root.
- Execute the show interfaces gr-0/0/0.* terse command.
An example of the command and output follows:
user@host> show interfaces gr-0/0/0.* terse
Interface               Admin Link Proto    Local                 Remote
gr-0/0/0.1              up    up   inet     192.0.2.1/31
                                   mpls
gr-0/0/0.2              down  up   inet     192.0.2.2/31
                                   mpls
gr-0/0/0.3              up    up   inet     192.0.2.3/31
                                   mpls
gr-0/0/0.4              up    up   inet     192.0.2.4/31
                                   mpls
gr-0/0/0.5              down  up   inet     192.0.2.5/31
                                   mpls
gr-0/0/0.6              up    up   inet     192.0.2.6/31
                                   mpls
- For each disabled interface
on the spoke, execute the show configuration | display set |
match "disable" | match "gr-interface-name unit unit-number" command to find out whether any disabled
statements are present in the configuration, where gr-interface-name is the name of the interface and unit-number is the unit number that was obtained in Step 2.
An example of the command and output follows:
user@host> show configuration | display set | match "disable" | match "gr-0/0/0 unit 5"
set groups NFX-5-SPOKE_CPE1_WAN_2_GRE_IPSEC_0 interfaces gr-0/0/0 unit 5 disable
- Find out whether there is a problem by doing the following:
- Execute the show interfaces
st0* terse operational command to find out the status of the
IPsec tunnels and the IP addresses:
An example of the command and output follows:
user@host> show interfaces st0* terse
Interface               Admin Link Proto    Local                 Remote
st0                     up    up
st0.1                   up    down inet     192.0.2.7/31
st0.2                   up    up   inet     192.0.2.8/31      # [IP address match]
st0.3                   up    up   inet     192.0.2.9/31
- Execute the show
interfaces gr-down-interface-name command,
where gr-down-interface-name is the name of the
interface that was down (obtained in Step 2).
Note: You must execute this command for all the interfaces that were down.
An example of the command and output follows:
user@host> show interfaces gr-0/0/0.5
Logical interface gr-0/0/0.5 (Index 139) (SNMP ifIndex 579)
  Flags: Down Down Point-To-Point SNMP-Traps 0x4000
  IP-Header 192.0.2.10/31:192.0.2.8/31:47:df:64:0000000000000000   # [IP address match]
  Encapsulation: GRE-NULL
  Gre keepalives configured: Off, Gre keepalives adjacency state: down
  Input packets : 0
  Output packets: 0
  Security: Zone: trust
  Allowed host-inbound traffic : bootp bfd bgp dns dvmrp igmp ldp msdp nhrp ospf ospf3
  pgm pim rip ripng router-discovery rsvp sap vrrp dhcp finger ftp tftp ident-reset
  http https ike netconf ping reverse-telnet reverse-ssh rlogin rpm rsh snmp snmp-trap
  ssh telnet traceroute xnm-clear-text xnm-ssl lsping ntp sip dhcpv6 r2cp
  webapi-clear-text webapi-ssl tcp-encap
  Protocol inet, MTU: 9168
    Flags: Sendbcast-pkt-to-re
    Addresses, Flags: Dest-route-down Is-Preferred Is-Primary
      Destination: 192.0.2.11/31, Local: 192.0.2.5/31
  Protocol mpls, MTU: 9156, Maximum labels: 3
- For each IP address obtained in Step a, search the output from Step b for a match. (In the sample outputs, the IP address of the st0.2 interface (192.0.2.8/31) matches the IP address in the IP-Header parameter.)
- If the st0 interface corresponding to the matched IP address is up and the corresponding gr-0/0/0 interface is down, then there is a problem with the configuration.
- Modify the configuration
on the spoke as follows:
- Log in to the spoke device as root.
- Delete the disabled statements found in Step 3 by executing the delete command. An example is provided below:
user@host# delete groups NFX-5-SPOKE_CPE1_WAN_2_GRE_IPSEC_0 interfaces gr-0/0/0 unit 5 disable
- Commit the configuration by executing the commit command.
- Repeat Step 2 through Step 5 for the hub device.
- Verify that the links are up by following the procedure in Step 1.
[CXU-11996]
- If an SLA profile is defined with only the throughput
metric specified, in some cases, the SLA profile is assigned to a
link that is down.
Workaround: In addition to the throughput metric, ensure that you specify at least one more metric (for example, packet loss or latency) for the SLA profile. [CXU-11997]
- You see an import error message when you use the license tool, because the tssmclient package is missing from the license tool files.
Workaround: Complete the following procedure:
- Copy the tssmclient.tar.gz file from http://www.juniper.net/support/downloads/?p=cso#sw to the csoVersion/licenseutil/csoclients/ directory on the installer VM, where csoVersion is the name of the installer directory created when you extract the TAR file for the installation package.
- Access the csoVersion/licenseutil/csoclients/ directory and extract the tssmclient directory from the TAR file. For example, if the installer directory is called csoVersion:
root@host:~/# cd csoVersion/licenseutil/csoclients/
root@host:~/csoVersion/licenseutil/csoclients# tar xvzf tssmclient.tar.gz
This command replaces the tssmclient folder and its contents.
- Run the license tool by using the instructions in the Deployment Guide.
[CXU-12054]
- In a three-node setup, two nodes are clustered together,
but the third node is not part of the cluster. In addition, in some
cases, the RabbitMQ nodes are also not part of the cluster. This is
a rare scenario, which can occur just after the initial installation,
if a virtual machine reboots, or if a virtual machine is powered off
and then powered on.
Workaround: Do the following:
- Log in to the RabbitMQ dashboard for the central microservices VM (http://central-microservices-vip:15672) and the regional microservices VM (http://regional-microservices-vip:15672).
- Check the RabbitMQ overview in the dashboards to see if all the available infrastructure nodes are present in the cluster.
- If an infrastructure node is not present in the cluster,
do the following:
- Log in to the VM of that infrastructure node.
- Open a shell prompt and execute the following commands sequentially:
service rabbitmq-server stop
rabbitmqctl stop_app
rm -rf /var/lib/rabbitmq/mnesia/
service rabbitmq-server start
rabbitmqctl start_app
- In the RabbitMQ dashboards for the central and regional microservices VMs, confirm that all the available infrastructure nodes are present in the cluster. (You can also verify cluster membership from the CLI, as shown in the example following this procedure.)
[CXU-12107]
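A minimal sketch of verifying cluster membership from the CLI of any infrastructure node; the running_nodes list in the output should include all the available infrastructure nodes:
root@host:~/# rabbitmqctl cluster_status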
- Some variables in the CSO and NSC installer packages do
not have the correct values.
Workaround: After you extract the TAR file for the installation package, access the installer directory, which has the same name as the installer package, and execute the following sed command:
For example, if the name of the installer package is
csoVersion.tar.gz
:root@host:~/# cd csoVersion
root@host:~/csoVersion# sed -i s@gcr.io/google_containers@csp-installer:10000@g salt/file_root/kubeminion/files/manifests/kube-proxy.yaml;sed -i s@"timedatectl set-ntp"@ntpdate@g salt/file_root/ntp/init.sls;sed -i s@"add_admin_portal_documentation(server_dict\['ip'\]"@"#add_admin_portal_documentation(server_dict\['ip'\])"@g micro_services/core.py;
You can then use the files and tools in the installer directory to perform operations such as provisioning VMs, creating configuration files, and installing the solution.
[CXU-12113]
- When you try to install the Distributed_Cloud_CPE_Network_Service_Controller_3.1 package, the load services data module fails with an import error in the publish_data_to_design_tools function.
Workaround:
- Navigate to the untar-dir/micro_services/ directory, where untar-dir is the directory where you extracted the installation TAR file.
- Open the load_services_data.py file in an editor and comment out the call to the publish_data_to_design_tools function as follows:
'''
# publish data to design-tools
publish_data_to_design_tools(
    token,
    regions['central']['vip_ip'],
    regions['central']['vip_port'],
    data_str,
    http_protocol
)
'''
- Save the load_services_data.py file.
- Run the load_service_data.sh script to continue with the installation.
[CXU-12137]
- When you deploy a firewall policy, the deployment fails
with the message Fail to invoke mapper to create snapshot with
reason null.
Workaround: Do the following:
- Log in to the central infrastructure virtual machine (VM) as root.
- Start the ZooKeeper CLI by executing the /usr/share/zookeeper/bin/zkCli.sh command.
- Execute the delete /secmgt/sm_initialized command.
- Exit the ZooKeeper CLI by executing the quit command.
- Log out of the central infrastructure VM.
- Log in to the central microservice VM as root.
- At the shell prompt, execute the kubectl get pods | grep secmgt-sm command to find out the name of the security management pod.
- Restart the pod by executing the kubectl delete pod pod-name command, where pod-name is the name of the security management pod.
- Wait until the security management pod is in the 1/1 running state.
- Log out of the central microservice VM.
- Re-deploy the firewall policy.
[CXU-12151]