
Troubleshoot Paragon Automation Installation

 

This topic provides a general guide to troubleshooting some typical problems you may encounter during and after installation.

Configuration File Merge Conflicts

The init script command creates the template configuration files. If you are updating an existing installation using the same config-dir, the script merges the newly created template files with the existing configuration files. This merge can sometimes result in conflicts that you must resolve. The script prompts you to choose how to resolve each conflict. Select one of the following options:

  • C (default)—Retain the existing configuration file and discard the new template file.

  • n—Discard the existing configuration file and reinitialize it to the template file.

  • m—Merge the files manually. Conflicting sections are marked with lines starting with "<<<<<<<<", "||||||||", "========", and ">>>>>>>>". You must edit the file and remove the merge markers before continuing; see the example after this list.

  • d—Display the differences between the files before deciding how to resolve the conflict.
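
For example, after you select m, a conflicting section in a configuration file might look similar to the following (the parameter name and values are placeholders used only for illustration):

<<<<<<<<
ntp-servers:
  - 172.16.10.1
||||||||
ntp-servers:
  - 10.0.0.1
========
ntp-servers:
  - 10.0.0.2
>>>>>>>>

Edit the file so that only the lines you want to keep remain, and delete all four marker lines before you continue.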

Common Backup and Restore Issues

If you destroy an existing cluster and redeploy the software image on the same cluster nodes, restoring a configuration from a previously backed-up configuration folder might fail. The restore fails because the mount path of the backed-up configuration has changed. When you destroy a cluster, its persistent volume is deleted. When you redeploy the image, the persistent volume is re-created on whichever cluster node has space available, which is not necessarily the node that hosted it previously. As a result, the restore operation fails.

As a workaround:

  1. Determine the mount path of the new persistent volume.

  2. Copy the contents of the previous persistent volume's mount path to the new path.

  3. Retry the restore operation.

Viewing Installation Log Files

If the deploy script command fails, check the installation log files in the config-dir directory. By default, config-dir stores six zipped log files. The current log is saved as log, and the previous logs are saved as the log.1 through log.5 files. Each time you execute the script, the current log is saved and the oldest log is discarded.

Error messages are typically found at the end of a log file. View the error message and fix the configuration.
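
For example, to list the saved logs and view the end of the most recent log, run the following commands on the host where you executed the script, with config-dir as your configuration directory:

root@control-host:~# ls config-dir/log*
root@control-host:~# tail -50 config-dir/log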

Viewing Log Files in Kibana

System logs are stored in Elasticsearch and accessed through the Kibana application. To view logs in Kibana:

  1. Open a browser and enter the VIP of the ingress controller, https://<vip-of-ingress-controller-or-hostname-of-main-web-application>/kibana, in the URL field to log in to the Kibana application.
  2. If you are logging in for the first time, create an index pattern. Navigate to Management > Index Pattern.
  3. Enter logstash-* in the Index pattern field and click Next step.
    Figure 1: Kibana - Define Index Pattern
  4. Select @timestamp in the Time Filter field name list and click Create index pattern to create an index pattern.
    Figure 2: Kibana - Configure Settings
  5. Use Discover to browse the logs, and add or remove filters as needed.

Troubleshooting with Kubectl

The main interface to the Kubernetes cluster is the kubectl command, which is installed on the primary node. You can log in to the primary node and use kubectl to access the Kubernetes API, view node details, and perform basic troubleshooting actions. The admin.conf file is copied to the config-dir on the control host as part of the installation.

You can also access the Kubernetes API from any other node that has access to the cluster. To use a node other than the primary node, you must copy the admin.conf file to that node and set the KUBECONFIG environment variable; for example, use the export KUBECONFIG=config-dir/admin.conf command.
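
For example, to use kubectl from another node (the node names and the destination path are placeholders):

root@other-node:~# scp root@control-host:config-dir/admin.conf /root/admin.conf
root@other-node:~# export KUBECONFIG=/root/admin.conf
root@other-node:~# kubectl get no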

Use the following kubectl commands to troubleshoot and view installation details.

Display node status

Use the kubectl get no command to check the status of the cluster nodes. The status of each node must be Ready, and the role should be control-plane or <none>. For example:

root@primary-node:~# kubectl get no

If a node is not Ready, check whether the kubelet process is running on that node. You can also use the node's syslog to investigate.
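
For example, assuming the node runs systemd, you can check the kubelet service and its recent log messages on the affected node:

root@worker-node:~# systemctl status kubelet
root@worker-node:~# journalctl -u kubelet | tail -20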

Display pod status

Use the kubectl get po -n namespace | -A command to check the status of a pod. You can specify an individual namespace (healthbot, northstar, common, and so on) or use -A to check all namespaces. For example:

root@primary-node:~# kubectl get po -n northstar

The status of healthy pods must be displayed as Running or Completed, and the number of ready containers must match the total number of containers. If the status of a pod is not Running or the number of ready containers does not match the total, use the kubectl describe po command to troubleshoot the issue further.
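
For example, to list only the pods that are not in the Running or Completed state across all namespaces:

root@primary-node:~# kubectl get po -A | grep -vE 'Running|Completed'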

Display detailed information about a pod

Use the kubectl describe po -n namespace pod-name command to display detailed information about a specific pod. For example:

root@primary:~# kubectl describe po -n northstar bmp-854f8d4b58-4hwx4
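
The Events section at the end of the output is usually the best place to start. You can also query the events for a pod directly, for example:

root@primary-node:~# kubectl get events -n northstar --field-selector involvedObject.name=bmp-854f8d4b58-4hwx4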

Display the logs from a container in a pod

Use the kubectl logs -n namespace pod-name [-c container-name] command to view the logs of a particular pod. If a pod has multiple containers, you must specify the container from which you want to view the logs. For example:

root@primary:~# kubectl logs -n common atom-db-0 | tail -3
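
If a container has crashed and been restarted, you can also view the logs of its previous instance by adding the --previous flag, for example:

root@primary-node:~# kubectl logs -n common atom-db-0 --previous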

Execute a command on a container in a pod

Use the kubectl exec -ti -n namespace pod-name [-c container-name] -- command-line command to execute commands inside a pod. For example:

root@primary-node:~# kubectl exec -ti -n common atom-db-0 -- bash

In this example, you get a bash shell into the Postgres database server. You can access the shell inside the container and execute commands to connect to the database itself. Note that not all containers provide a bash shell; some provide only SSH access, and some do not have any shell.
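
For example, assuming the psql client is available inside the atom-db-0 container and the standard postgres role exists (an assumption that might differ between releases), you can list the databases directly:

root@primary-node:~# kubectl exec -ti -n common atom-db-0 -- psql -U postgres -l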

Display services

Use the kubectl get svc -n namespace | -A command to display the cluster services. You can specify an individual namespace (healthbot, northstar, common, and so on) or use -A to check all namespaces. For example:

root@primary-node:~# kubectl get svc -A --sort-by spec.type

In this example, the services are sorted by type, so services of the same type are grouped together. Look at the services of type LoadBalancer to see which services the cluster exposes and the external IP addresses that the load balancer selected to access those services.
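
For example, to list only the services of type LoadBalancer across all namespaces:

root@primary-node:~# kubectl get svc -A | grep LoadBalancer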

Frequently used kubectl commands

Some frequently used kubectl commands are:

  • List deployments and StatefulSets:

    # kubectl get -n namespace deploy
    # kubectl get -n namespace statefulset
  • Restart a component (see the verification example after this list):

    # kubectl rollout restart -n namespace deploy deployment-name
  • Edit a Kubernetes resource. You can edit a deployment or any other Kubernetes API object, and the changes are saved to the cluster. However, if you reinstall the cluster, these changes are not preserved.

    # kubectl edit -n namespace deploy deployment-name
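
After you restart a component, you can verify that the new pods are rolled out successfully, for example:

    # kubectl rollout status -n namespace deploy deployment-name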