Sample Sources and Sinks Configuration
Use the information in this topic to view sample source configurations and their corresponding sink configurations.
According to the scope and format, all sources can be categorized in the following ways:
- cluster or node—Specifies whether the scope of the collected data is cluster level or node level.
- metric or log—Specifies the format of the observability data that is collected.
Supported Node Sources
Paragon Automation supports the following node log and metric sources.
Syslog
Collect system logs from all primary and worker nodes within the Paragon Automation cluster.
Category: node, log
Sample source configuration:
root@node# set paragon monitoring source syslog node syslog
Sample sink configuration:
root@node# set paragon monitoring sink syslogvlog inputs syslog
root@node# set paragon monitoring sink syslogvlog elasticsearch mode bulk
root@node# set paragon monitoring sink syslogvlog elasticsearch healthcheck enabled false
root@node# set paragon monitoring sink syslogvlog elasticsearch api_version v8
root@node# set paragon monitoring sink syslogvlog elasticsearch compression gzip
root@node# set paragon monitoring sink syslogvlog elasticsearch endpoints http://monitoring_node:9428/insert/elasticsearch/
root@node# set paragon monitoring sink syslogvlog elasticsearch query "X-Powered-By#Vector#_msg_field#message#_time#timestamp#_stream_fields#appname,hostname,facility,procid,severity,source_type"
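The sink above pushes logs through the Elasticsearch bulk API (mode bulk) to the /insert/elasticsearch/ endpoint. As a rough sketch of what travels over that API, the following builds a minimal bulk payload; the exact documents the sink emits will carry more fields, and the sample document here is only illustrative:

```python
# Sketch: build a minimal Elasticsearch bulk payload (NDJSON, one action
# line per document). The field names "message" and "timestamp" follow
# the _msg_field/_time settings in the sample query string above; the
# document contents are illustrative assumptions.
import json

def bulk_payload(docs):
    """Serialize docs as Elasticsearch bulk NDJSON with a create action per doc."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"create": {}}))  # action line
        lines.append(json.dumps(doc))             # document line
    return "\n".join(lines) + "\n"

payload = bulk_payload([{"message": "test syslog line",
                         "timestamp": "2024-01-01T00:00:00Z"}])
print(payload)
```

A payload like this would be POSTed to the endpoint configured above, gzip-compressed when compression gzip is set.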
Docker Log
Collect logs from all the Docker containers on the primary and worker nodes within the Paragon Automation Kubernetes cluster.
Category: node, log
Sample source configuration:
root@node# set paragon monitoring source docker node docker_logs
(optional) root@node# set paragon monitoring source docker node docker_logs include_containers container_id_or_name
(optional) root@node# set paragon monitoring source docker node docker_logs exclude_containers container_id_or_name
Sample sink configuration:
root@node# set paragon monitoring sink dockervlog inputs docker
root@node# set paragon monitoring sink dockervlog elasticsearch mode bulk
root@node# set paragon monitoring sink dockervlog elasticsearch healthcheck enabled false
root@node# set paragon monitoring sink dockervlog elasticsearch api_version v8
root@node# set paragon monitoring sink dockervlog elasticsearch compression gzip
root@node# set paragon monitoring sink dockervlog elasticsearch endpoints http://monitoring_node:9428/insert/elasticsearch/
root@node# set paragon monitoring sink dockervlog elasticsearch query "X-Powered-By#Vector#_msg_field#message#_time#timestamp#_stream_fields#container_name,container_id,stream,image"
Paragon Shell cMGD Log
Collect the cMGD log from Paragon Shell.
Category: node, log
Sample source configuration:
root@node# set paragon monitoring source cmgd node cmgd_log
Sample sink configuration:
root@node# set paragon monitoring sink cmgdvlog inputs cmgd
root@node# set paragon monitoring sink cmgdvlog elasticsearch mode bulk
root@node# set paragon monitoring sink cmgdvlog elasticsearch healthcheck enabled false
root@node# set paragon monitoring sink cmgdvlog elasticsearch api_version v8
root@node# set paragon monitoring sink cmgdvlog elasticsearch compression gzip
root@node# set paragon monitoring sink cmgdvlog elasticsearch endpoints http://monitoring_node:9428/insert/elasticsearch
root@node# set paragon monitoring sink cmgdvlog elasticsearch query "X-Powered-By#Vector#_msg_field#message#_tim
Host Metric
Collect host resource usage from the Paragon Automation cluster nodes.
Category: node, metric
Sample source configuration:
root@node# set paragon monitoring source host node host_metrics scrape_interval_secs 60
Sample sink configuration:
root@node# set paragon monitoring sink vm inputs add-hostname
root@node# set paragon monitoring sink vm prometheus_remote_write endpoint http://monitoring_node:8428/api/v1/write
root@node# set paragon monitoring sink vm prometheus_remote_write compression zstd
root@node# set paragon monitoring sink vm prometheus_remote_write healthcheck enabled false
An implicit transform add-hostname is used internally to add the hostname field to the processed data. The source ID must be host and the input field for the corresponding sink must be add-hostname.
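To spot-check that host metrics are arriving at the sink, you can query its Prometheus-compatible read API; /api/v1/query is the standard Prometheus instant-query path. The metric name used below is illustrative, and monitoring_node stands in for your monitoring host:

```python
# Sketch: query the metrics store behind the prometheus_remote_write
# sink via the standard Prometheus instant-query API. The metric name
# and host are placeholders; a real check needs network access to the
# monitoring node.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def build_query_url(base: str, promql: str) -> str:
    """Build a Prometheus instant-query URL against the metrics store."""
    return f"{base}/api/v1/query?{urlencode({'query': promql})}"

def fetch_metric(base: str, promql: str) -> list:
    """Run the query and return the result series (requires network access)."""
    with urlopen(build_query_url(base, promql)) as resp:
        body = json.load(resp)
    return body.get("data", {}).get("result", [])

# Illustrative only; does not contact the server.
print(build_query_url("http://monitoring_node:8428", "up"))
```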
Supported Cluster Sources
Paragon Automation supports the following cluster log and metric sources.
Kubernetes Log
Collect logs from all Kubernetes pods.
Category: cluster, log
Sample source configuration:
root@node# set paragon monitoring source k8s cluster kubernetes_logs
Sample sink configuration:
root@node# set paragon monitoring sink kuberneteslog inputs k8s
root@node# set paragon monitoring sink kuberneteslog elasticsearch mode bulk
root@node# set paragon monitoring sink kuberneteslog elasticsearch healthcheck enabled false
root@node# set paragon monitoring sink kuberneteslog elasticsearch api_version v8
root@node# set paragon monitoring sink kuberneteslog elasticsearch compression gzip
root@node# set paragon monitoring sink kuberneteslog elasticsearch endpoints http://monitoring_node:9428/insert/elasticsearch/
root@node# set paragon monitoring sink kuberneteslog elasticsearch query "X-Powered-By#Vector#_msg_field#message#_time#timestamp#_stream_fields#kubernetes.pod_namespace,kubernetes.pod_name,kubernetes.pod_node_name"
Audit Log
Collect logs from the Paragon Automation audit log.
Category: cluster, log
Sample source configuration:
root@node# set paragon monitoring source audit cluster kafka bootstrap_servers kafka.common:9092
root@node# set paragon monitoring source audit cluster kafka group_id vector-kafka-consumer
root@node# set paragon monitoring source audit cluster kafka topics audits-dev
Sample sink configuration:
root@node# set paragon monitoring sink auditvlog inputs audit-parser
root@node# set paragon monitoring sink auditvlog elasticsearch mode bulk
root@node# set paragon monitoring sink auditvlog elasticsearch healthcheck enabled false
root@node# set paragon monitoring sink auditvlog elasticsearch api_version v8
root@node# set paragon monitoring sink auditvlog elasticsearch compression gzip
root@node# set paragon monitoring sink auditvlog elasticsearch endpoints http://monitoring_node:9428/insert/elasticsearch
root@node# set paragon monitoring sink auditvlog elasticsearch query "X-Powered-By#Vector#_msg_field#message#_time#timestamp#_stream_fields#org_id,site_id,admin_name,src_ip"

An implicit transform audit-parser is used internally and is required for this source. The source ID must be audit and the input field for the corresponding sink must be audit-parser.
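The sink's query string names four stream fields (org_id, site_id, admin_name, src_ip) that are expected on each audit record. A minimal sketch of extracting those fields from a JSON record follows; the record shape here is an assumption for illustration, not the actual audit schema:

```python
# Sketch: extract the stream fields named in the audit sink's query
# string from a JSON audit record. The sample record below is an
# illustrative assumption about the record shape.
import json

STREAM_FIELDS = ("org_id", "site_id", "admin_name", "src_ip")

def stream_labels(record_json: str) -> dict:
    """Return only the stream-field keys present in the record."""
    record = json.loads(record_json)
    return {f: record[f] for f in STREAM_FIELDS if f in record}

sample = '{"org_id": "o1", "admin_name": "alice", "src_ip": "10.0.0.5", "action": "login"}'
print(stream_labels(sample))
# → {'org_id': 'o1', 'admin_name': 'alice', 'src_ip': '10.0.0.5'}
```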
Kube State Metric
Collect Kubernetes resource usage from kube-state-metrics.
Category: cluster, metric
Sample source configuration:
root@node# set paragon monitoring source ksm cluster prometheus_scrape endpoints http://kube-state-metrics.kube-system:8080/metrics
root@node# set paragon monitoring source ksm cluster prometheus_scrape scrape_interval_secs 60
Sample sink configuration:
root@node# set paragon monitoring sink vm inputs add-hostname
root@node# set paragon monitoring sink vm prometheus_remote_write endpoint http://monitoring_node:8428/api/v1/write
root@node# set paragon monitoring sink vm prometheus_remote_write compression zstd
root@node# set paragon monitoring sink vm prometheus_remote_write healthcheck enabled false
An implicit transform add-hostname is used internally to add the hostname field to the processed data. The source ID must be ksm and the input field for the corresponding sink must be add-hostname.
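A prometheus_scrape source reads the Prometheus text exposition format from the endpoint it polls. To illustrate what that format looks like, here is a deliberately minimal parser; the sample metric is a real kube-state-metrics metric name, but the exposition text is illustrative, and production scrapers use a full parser:

```python
# Sketch: minimal parse of the Prometheus text exposition format that a
# prometheus_scrape source reads from kube-state-metrics. Handles only
# simple 'name{labels} value' samples; histograms, timestamps, and
# escaping are ignored for brevity.
def parse_samples(text: str):
    samples = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and HELP/TYPE comment lines
        metric, _, value = line.rpartition(" ")
        samples.append((metric, float(value)))
    return samples

exposition = """\
# HELP kube_pod_status_phase The pods current phase.
# TYPE kube_pod_status_phase gauge
kube_pod_status_phase{namespace="kube-system",phase="Running"} 1
"""
print(parse_samples(exposition))
```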
Kubernetes Container Metric
Collect container resource usage of Kubernetes pods in the Paragon Automation cluster.
Category: cluster, metric
Sample source configuration:
root@node# set paragon monitoring source k8s_container_metric cluster kubernetes_container_metrics
Sample sink configuration:
root@node# set paragon monitoring sink cadvisor inputs k8s_container_metric
root@node# set paragon monitoring sink cadvisor prometheus_remote_write endpoint http://monitoring_node:8428/api/v1/write
root@node# set paragon monitoring sink cadvisor prometheus_remote_write compression zstd
root@node# set paragon monitoring sink cadvisor prometheus_remote_write healthcheck enabled false
Supported Sinks
All sinks are also categorized as log or metric to identify the format of the observability data that the sink accepts. A log sink can accept input only from log sources, and a metric sink can accept input only from metric sources.
Paragon Automation supports the following cluster log and metric sinks.
Elasticsearch
Send data to a destination that supports the Elasticsearch format.
Category: log
The available options are:
root@node# set paragon monitoring sink id elasticsearch ?
Possible completions:
  api_version          The API version of Elasticsearch
+ apply-groups         Groups from which to inherit configuration data
+ apply-groups-except  Don't inherit configuration data from these groups
  compression          Data compression method. Default is none
+ endpoints            HTTP(S) endpoint of sources/sinks
> healthcheck          Whether or not to check the health of the sink when Vector starts up
  mode                 Elasticsearch indexing mode
  query                Custom parameters to add to the query string for each HTTP request sent to Elasticsearch. In the format of arg1_key#arg1_value#arg2_key#arg2_value... The number of hashtag-separated items has to be an even number.
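The query option's hashtag-separated format (key, value, key, value, with an even item count) can be sketched as a small parser to make the rule concrete:

```python
# Sketch: parse the hashtag-separated query format described above,
# e.g. "arg1_key#arg1_value#arg2_key#arg2_value", enforcing the
# even-item rule.
def parse_query(spec: str) -> dict:
    items = spec.split("#")
    if len(items) % 2 != 0:
        raise ValueError("number of hashtag-separated items must be even")
    # Pair even-indexed items (keys) with odd-indexed items (values).
    return dict(zip(items[::2], items[1::2]))

print(parse_query("X-Powered-By#Vector#_msg_field#message"))
# → {'X-Powered-By': 'Vector', '_msg_field': 'message'}
```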
Prometheus Remote Write
Deliver metric data to a Prometheus remote write endpoint.
Category: metric
The available options are:
root@node# set paragon monitoring sink id prometheus_remote_write ?
Possible completions:
+ apply-groups         Groups from which to inherit configuration data
+ apply-groups-except  Don't inherit configuration data from these groups
  compression          Data compression method. Default is snappy
  endpoint             HTTP(S) endpoint
> healthcheck          Whether or not to check the health of the sink when Vector starts up
For more information, see https://prometheus.io/docs/practices/remote_write/.
Default Sources and Sinks
When the Paragon Automation cluster is installed for the first time, the following three sources are automatically created:
- Kube State Metric—ksm
- Host—host
- Audit log—audit
root@node# show paragon monitoring source ?
Possible completions:
  <id>   ID of the source. Should be of pattern [a-z][a-z0-9_-]*
  audit  ID of the source. Should be of pattern [a-z][a-z0-9_-]*
  host   ID of the source. Should be of pattern [a-z][a-z0-9_-]*
  ksm    ID of the source. Should be of pattern [a-z][a-z0-9_-]*
You can modify the configuration of each default source, but the default sources must not be removed. You must set up and configure your own sinks on a network that the Paragon Automation cluster can access.