OpenShift
StackState Self-hosted v4.6.x
This page describes StackState version 4.6.
StackState Agent V2
To retrieve topology, events and metrics data from an OpenShift cluster, you will need to have the following installed in the cluster:
StackState Agent V2 on each node in the cluster
StackState Cluster Agent on one node
kube-state-metrics
The OpenShift integration collects topology data in an OpenShift cluster, as well as metrics and events. To achieve this, different types of StackState Agent are used:
Component | Pod name |
---|---|
StackState Agent V2 | `cluster-agent-agent` |
StackState Cluster Agent | `cluster-agent` |
StackState ClusterCheck Agent | |
To integrate with other services, a separate instance of the StackState Agent should be deployed on a standalone VM. It is not currently possible to configure a StackState Agent deployed on an OpenShift cluster with checks that integrate with other services.
StackState Cluster Agent is deployed as a Deployment. There is one instance for the entire cluster:
Topology and events data for all resources in the cluster are retrieved from the OpenShift API
Control plane metrics are retrieved from the OpenShift API
When cluster checks are enabled, cluster checks configured on the StackState Cluster Agent are run by the deployed StackState ClusterCheck Agent pod.
StackState Agent V2 is deployed as a DaemonSet with one instance on each node in the cluster:
Host information is retrieved from the OpenShift API.
Container information is collected from the container runtime (Docker, containerd or CRI-O).
Metrics are retrieved from kubelet running on the node and also from kube-state-metrics if this is deployed on the same node.
By default, metrics are also retrieved from kube-state-metrics if that is deployed on the same node as the StackState Agent pod. This can cause issues on a large OpenShift cluster. To avoid this, it is advisable to enable cluster checks so that metrics are gathered from kube-state-metrics by a dedicated StackState ClusterCheck Agent.
The StackState ClusterCheck Agent is an additional StackState Agent V2 pod that is deployed only when cluster checks are enabled in the Helm chart. When deployed, cluster checks configured on the StackState Cluster Agent will be run by the StackState ClusterCheck Agent pod.
On large OpenShift clusters, you can run the `kubernetes_state` check on the ClusterCheck Agent. This check gathers metrics from kube-state-metrics and sends them to StackState. The ClusterCheck Agent is also useful for running checks that do not need to run on a specific node and that monitor non-containerized workloads, such as:
Out-of-cluster datastores and endpoints (for example, RDS or CloudSQL).
Load-balanced cluster services (for example, Kubernetes services).
The AWS check can be configured to run as a cluster check.
StackState Agent v2.16.0 can be used to monitor the following versions of OpenShift:
OpenShift 4.3 - 4.8
Default networking
Container runtimes:
Docker
containerd
CRI-O
StackState Agent connects to the StackState Receiver API at the specified StackState Receiver API address. The correct address to use is specific to your installation of StackState.
The StackState Agent, Cluster Agent and kube-state-metrics can be installed together using the Cluster Agent Helm Chart:
If you do not already have it, you will need to add the StackState helm repository to the local helm client:
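As a sketch, assuming StackState's published chart repository URL (verify it against your StackPack instructions):

```shell
helm repo add stackstate https://helm.stackstate.io
helm repo update
```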
Deploy the StackState Agent, Cluster Agent and kube-state-metrics with the helm command provided in the StackState UI after you have installed the StackPack. Additional variables can be added to the standard helm command, for example:
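An illustrative sketch only; the authoritative command comes from the StackState UI, and the namespace, release name, cluster name, API key and Receiver API address below are placeholders:

```shell
helm upgrade --install \
  --namespace stackstate \
  --create-namespace \
  --set-string 'stackstate.apiKey'='<STACKSTATE_API_KEY>' \
  --set-string 'stackstate.cluster.name'='<CLUSTER_NAME>' \
  --set-string 'stackstate.cluster.authToken'='<CLUSTER_AUTH_TOKEN>' \
  --set-string 'stackstate.url'='<STACKSTATE_RECEIVER_API_ADDRESS>' \
  stackstate-cluster-agent stackstate/cluster-agent
```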
It is recommended to provide a `stackstate.cluster.authToken`.
For large OpenShift clusters, enable cluster checks to run the kubernetes_state check in a StackState ClusterCheck Agent pod.
If you use a custom socket path, set the `agent.containerRuntime.customSocketPath`.
Details of all available helm chart values can be found in the Cluster Agent Helm Chart documentation (github.com).
It is recommended to provide a `stackstate.cluster.authToken` in addition to the standard helm chart variables when the StackState Agent is deployed. This is an optional variable; however, if it is not provided, a new random value will be generated each time a helm upgrade is performed. This could leave some pods in the cluster with an incorrect configuration.
For example:
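One way to do this, shown as a sketch (the token generation approach and release name are illustrative; any fixed random string works, and the other required values from the standard helm command are omitted here for brevity):

```shell
# Generate a stable token once and reuse the same value on every helm upgrade
AUTH_TOKEN=$(openssl rand -hex 16)

helm upgrade --install \
  --set-string "stackstate.cluster.authToken=${AUTH_TOKEN}" \
  stackstate-cluster-agent stackstate/cluster-agent
```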
It is not necessary to configure this property if your cluster uses one of the default socket paths (`/var/run/docker.sock`, `/var/run/containerd/containerd.sock` or `/var/run/crio/crio.sock`).
If your cluster uses a custom socket path, you can provide it using the key `agent.containerRuntime.customSocketPath`. For example:
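A sketch with a hypothetical socket path (the other required values from the standard helm command are omitted for brevity):

```shell
helm upgrade --install \
  --set-string 'agent.containerRuntime.customSocketPath'='/var/run/custom/runtime.sock' \
  stackstate-cluster-agent stackstate/cluster-agent
```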
To upgrade the Agents running in your OpenShift cluster, run the helm upgrade command provided on the StackState UI StackPacks > Integrations > OpenShift screen. This is the same command used to deploy the StackState Agent and Cluster Agent.
Optionally, the chart can be configured to start an additional StackState Agent V2 pod as a StackState ClusterCheck Agent pod. Cluster checks that are configured on the StackState Cluster Agent will then be run by the deployed StackState ClusterCheck Agent pod.
To enable cluster checks and deploy the ClusterCheck Agent pod, create a `values.yaml` file to deploy the `cluster-agent` Helm chart and add the following YAML segment:
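A minimal sketch of the segment, assuming the chart exposes cluster checks under a `clusterChecks.enabled` key (check the chart's documented values to confirm):

```yaml
clusterChecks:
  enabled: true
```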
The following integrations have checks that can be configured to run as cluster checks:
Kubernetes integration - `kubernetes_state` check as a cluster check.
OpenShift integration - OpenShift `kubernetes_state` check as a cluster check.
AWS integration - AWS check as a cluster check.
StackState Agent V2 can be configured to reduce data production, tune the process blacklist, or turn off specific features when not needed. The required settings are described in detail on the Advanced Agent configuration page.
To integrate with other external services, a separate instance of the StackState Agent should be deployed on a standalone VM. It is not currently possible to configure a StackState Agent deployed on an OpenShift cluster with checks that integrate with other services.
To check the status of the OpenShift integration, check that the StackState Cluster Agent (`cluster-agent`) pod and all of the StackState Agent (`cluster-agent-agent`) pods have status `READY`.
To find the status of an Agent check:
Find the Agent pod that is running on the node where you would like to find a check status:
Run the command:
Look for the check name under the `Checks` section.
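The steps above can be sketched with `oc` commands (pod name, node name and namespace are placeholders; `agent status` is the Agent's built-in status subcommand):

```shell
# 1. Find the Agent pod running on the node you are interested in
oc get pods --namespace stackstate -o wide | grep <node-name>

# 2. Run the status command inside that pod
oc exec -it <agent-pod-name> --namespace stackstate -- agent status

# 3. Inspect the "Checks" section of the output for the check name
```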
To uninstall the StackState Cluster Agent and the StackState Agent from your OpenShift cluster, run a Helm uninstall:
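Assuming the release name and namespace used at install time (both are placeholders here):

```shell
helm uninstall stackstate-cluster-agent --namespace stackstate
```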