Troubleshoot Paragon Automation Installation

SUMMARY This topic provides a general guide to troubleshooting some typical problems you might encounter during and after installation.

Resolve Merge Conflicts of the Configuration File

The init script creates the template configuration files. If you update an existing installation using the same config-dir directory that was used for the original installation, the script merges the new template files with the existing configuration files. This merge can result in conflicts that you must resolve. The script prompts you to choose how to resolve each conflict. When prompted, select one of the following options:

  • C—Retain the existing configuration file and discard the new template file. This is the default option.

  • n—Discard the existing configuration file and reinitialize the template file.

  • m—Merge the files manually. Conflicting sections are marked with lines starting with "<<<<<<<<", "||||||||", "========", and ">>>>>>>>" (see the example after this list). You must edit the file and remove the merge markers before you proceed with the update.

  • d—View the differences between the files before you decide how to resolve the conflict.
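
For example, in a typical three-way merge, a conflicting section might look like the following, where the lines between "<<<<<<<<" and "||||||||" contain your existing configuration, the lines between "||||||||" and "========" contain the original template content, and the lines between "========" and ">>>>>>>>" contain the new template content. The key and values shown here are illustrative only:

  <<<<<<<<
  ntp-server: 10.1.1.1
  ||||||||
  ntp-server: 10.1.1.2
  ========
  ntp-server: ntp.example.net
  >>>>>>>>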

Common Backup and Restore Issues

If you destroy an existing cluster and redeploy a software image on the same cluster nodes, an attempt to restore a configuration from a previously backed-up configuration folder might fail. The restore fails because the mount path of the backed-up configuration has changed. Destroying the cluster deletes the persistent volume, and redeploying the new image re-creates the persistent volume on whichever cluster node has space available, which is not necessarily the node that hosted it before.

As a workaround:

  1. Determine the mount path of the new persistent volume.

  2. Copy the contents of the previous persistent volume's mount path to the new path.

  3. Retry the restore operation.
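
The following commands sketch these steps. They assume that the persistent volume is backed by a path on a cluster node; the volume name and paths are placeholders:

  # Find the new persistent volume and its mount path
  $ kubectl get pv
  $ kubectl describe pv pvc-1234abcd-0000 | grep -i path

  # Copy the contents of the old mount path to the new one, then retry the restore
  $ cp -a /mnt/old-pv-path/. /mnt/new-pv-path/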

View Installation Log Files

If the deploy script fails, check the installation log files in the config-dir directory. By default, the config-dir directory stores six zipped log files. The current log file is saved as log, and the previous log files are saved as log.1 through log.5. Every time you run the deploy script, the current log is saved, and the oldest one is discarded.

Error messages are typically found at the end of a log file. View the error message, and fix the configuration.
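
For example, assuming config-dir is your configuration directory, you can list the rotated log files and inspect the end of the current log as follows:

  $ ls config-dir/log*
  config-dir/log    config-dir/log.1  config-dir/log.2
  config-dir/log.3  config-dir/log.4  config-dir/log.5

  $ tail -n 50 config-dir/log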

View Log Files in Kibana

System logs are stored in Elasticsearch and can be accessed through the Kibana application. To view logs in Kibana:

  1. Open a browser and enter the VIP of the ingress controller, https://vip-of-ingress-controller-or-hostname-of-main-web-application/kibana, in the URL field to log in to the Kibana application.
  2. If you are logging in for the first time, create an index pattern by navigating to Management > Index Pattern.
  3. Enter logstash-* in the Index pattern field, and then click Next step.
    Figure 1: Kibana - Define Index Pattern
  4. Select @timestamp from the Time Filter field name list, and then click Create index pattern.
    Figure 2: Kibana - Configure Settings
  5. Use the Discover page to browse the log files, and add or remove filters as required.

Troubleshoot Using the kubectl Interface

The main interface to the Kubernetes cluster is kubectl, which is installed on a primary node. You can log in to the primary node and use the kubectl interface to access the Kubernetes API, view node details, and perform basic troubleshooting actions. The admin.conf file is copied to the config-dir directory on the control host as part of the installation process.

You can also access the Kubernetes API from any other node that has access to the cluster. To use a node other than the primary node, copy the admin.conf file to that node and set the KUBECONFIG environment variable to point to it. For example, use the export KUBECONFIG=config-dir/admin.conf command.
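
A minimal sketch, assuming that the control host is reachable over SSH and that config-dir is the configuration directory on the control host (the username and hostname are placeholders):

  $ scp user@control-host:config-dir/admin.conf ~/admin.conf
  $ export KUBECONFIG=~/admin.conf
  $ kubectl get no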

SUMMARY Use the following sections to troubleshoot and view installation details using the kubectl interface.

View node status

Use the kubectl get no command to view the status of the cluster nodes. The status of the nodes must be Ready, and the roles should be either control-plane or none. For example:
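
Illustrative output for a healthy cluster; the node names, ages, and versions are placeholders:

  $ kubectl get no
  NAME       STATUS   ROLES                  AGE   VERSION
  primary1   Ready    control-plane,master   10d   v1.22.2
  worker1    Ready    <none>                 10d   v1.22.2
  worker2    Ready    <none>                 10d   v1.22.2
  worker3    Ready    <none>                 10d   v1.22.2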

If a node is not Ready, verify whether the kubelet process is running. You can also use the system log of the node to investigate the issue.

View pod status

Use the kubectl get po -n namespace | -A command to view the status of pods. You can specify an individual namespace (such as healthbot, northstar, or common), or you can use the -A option to view the status of pods in all namespaces. For example:
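
Illustrative output for a single namespace; the pod names, counts, and ages are placeholders:

  $ kubectl get po -n common
  NAME                          READY   STATUS      RESTARTS   AGE
  example-db-0                  1/1     Running     0          10d
  example-db-init-abc12         0/1     Completed   0          10d
  example-web-6d9f8c7b5-xyz9w   2/2     Running     1          10d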

Healthy pods show a status of Running or Completed, and the number of ready containers matches the total. If the status of a pod is not Running or Completed, or if the number of ready containers does not match the total, use the kubectl describe po command to troubleshoot the issue further.

View detailed information about a pod

Use the kubectl describe po -n namespace pod-name command to view detailed information about a specific pod. For example:
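
Truncated, illustrative output for a hypothetical pod named example-db-0 in the common namespace:

  $ kubectl describe po -n common example-db-0
  Name:         example-db-0
  Namespace:    common
  Node:         worker1/10.1.2.11
  Status:       Running
  Containers:
    postgres:
      State:    Running
      Ready:    True
  ...
  Events:       <none>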

View the logs for a container in a pod

Use the kubectl logs -n namespace pod-name [-c container-name] command to view the logs for a particular pod. If a pod has multiple containers, you must specify the container for which you want to view the logs. For example:
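
For example, to view the logs of a hypothetical postgres container in the example-db-0 pod (the pod name, container name, and log line are placeholders):

  $ kubectl logs -n common example-db-0 -c postgres
  ...
  2024-01-01 00:00:01 UTC [1] LOG:  database system is ready to accept connections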

Run a command on a container in a pod

Use the kubectl exec -ti -n namespace pod-name [-c container-name] -- command-line command to run commands on a container inside a pod. For example:
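
A minimal sketch, assuming a Postgres database pod named example-db-0 in the common namespace (the pod name is a placeholder):

  $ kubectl exec -ti -n common example-db-0 -- bash
  root@example-db-0:/# psql -U postgres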

After you run the exec command in this example, you get a Bash shell into the Postgres database server. You can then run commands inside the container's shell to connect to the database itself. Not all containers provide a Bash shell; some containers provide only SSH, and some do not have any shell.

View services

Use the kubectl get svc -n namespace | -A command to view the cluster services. You can specify an individual namespace (such as healthbot, northstar, or common), or you can use the -A option to view the services for all namespaces. For example:
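
Illustrative output, sorted by service type and filtered to services of type LoadBalancer; the service names and IP addresses are placeholders:

  $ kubectl get svc -A --sort-by=.spec.type | grep LoadBalancer
  common   example-ingress   LoadBalancer   10.233.4.21   192.0.2.10   443:31000/TCP   10d
  common   example-dns       LoadBalancer   10.233.9.87   192.0.2.11   53:32000/UDP    10d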

In this example, the services are sorted by type, and only services of type LoadBalancer are displayed. You can view the services that are provided by the cluster and the external IP addresses that are selected by the load balancer to access those services.

Frequently used kubectl commands

  • List the replication controllers:
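
    For example, to list replication controllers in all namespaces:

      $ kubectl get rc -A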

  • Restart a component:
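
    One common approach is to restart the component's deployment, which re-creates its pods; the namespace and deployment name are placeholders:

      $ kubectl rollout restart deployment example-deployment -n common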

  • Edit a Kubernetes resource: You can edit a deployment or any Kubernetes API object, and these changes are saved to the cluster. However, if you reinstall the cluster, these changes are not preserved.
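
    For example, to open a hypothetical deployment in your default editor (your changes are applied to the cluster when you save and exit):

      $ kubectl edit deployment example-deployment -n common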