OpenShift

StackState core integration
This page describes StackState version 4.3.
The StackState 4.3 version range is End of Life (EOL) and no longer supported. We encourage customers still running the 4.3 version range to upgrade to a more recent release.


The OpenShift integration is used to create a near real-time synchronization of topology and associated internal services from an OpenShift cluster to StackState. This StackPack allows monitoring of the following:
  • Workloads
  • Nodes, pods, containers and services
  • Configmaps, secrets and volumes
Data flow
The OpenShift integration collects topology data in an OpenShift cluster as well as metrics and events.
  • Data is retrieved by the deployed StackState Agents and then pushed to StackState via the Agent StackPack and the OpenShift StackPack.
  • In StackState:
    • Topology data is translated into components and relations.
    • Tags defined in OpenShift are added to components and relations in StackState.
    • Relevant metrics data is mapped to associated components and relations in StackState. All retrieved metrics data is stored and accessible within StackState.
    • Events are available in the StackState Events Perspective and listed in the details pane of the StackState UI.

StackState Agents

The OpenShift integration collects topology data in an OpenShift cluster as well as metrics and events. To achieve this, different types of StackState Agent are used:
| Component | Required? | Pod name |
|:---|:---|:---|
| StackState Cluster Agent | ✅ | stackstate-cluster-agent |
| StackState Agent | ✅ | stackstate-cluster-agent-agent |
| StackState ClusterCheck Agent | - | stackstate-cluster-agent-clusterchecks |
To integrate with other services, a separate instance of the StackState Agent should be deployed on a standalone VM. It is not currently possible to configure a StackState Agent deployed on an OpenShift cluster with checks that integrate with other services.

StackState Cluster Agent

StackState Cluster Agent is deployed as a Deployment. There is one instance for the entire OpenShift cluster:
  • Topology and events data for all resources in the cluster are retrieved from the OpenShift API.
  • Control plane metrics are retrieved from the OpenShift API.
When cluster checks are enabled, cluster checks configured here are run by one of the deployed StackState ClusterCheck Agent pods.

StackState Agent

StackState Agent V2 is deployed as a DaemonSet with one instance on each node in the OpenShift cluster:
  • Host information is retrieved from the OpenShift API.
  • Container information is collected from the Docker daemon.
  • Metrics are retrieved from kubelet running on the node and also from kube-state-metrics if this is deployed on the same node.
By default, metrics are also retrieved from kube-state-metrics if that is deployed on the same node as the StackState Agent pod. This can cause issues on a large OpenShift cluster. To avoid this, it is advisable to enable cluster checks so that metrics are gathered from kube-state-metrics by a dedicated StackState ClusterCheck Agent.

StackState ClusterCheck Agent

The StackState ClusterCheck Agent is deployed only when clusterChecks.enabled is set to true in the values.yaml used to deploy the StackState Cluster Agent. By default, one instance runs per cluster. When enabled, cluster checks configured on the StackState Cluster Agent are run by one of the deployed StackState ClusterCheck Agent pods. This is useful for checks that do not need to run on a specific node and that monitor non-containerized workloads, such as:
  • Out-of-cluster datastores and endpoints (for example, RDS or CloudSQL).
  • Load-balanced cluster services (for example, Kubernetes services).



Prerequisites

The following prerequisites are required to install the OpenShift StackPack and deploy the StackState Agent and Cluster Agent:
  • An OpenShift cluster must be up and running.
  • A recent version of Helm 3.
  • A user with permissions to create privileged pods, ClusterRoles, ClusterRoleBindings and SCCs:
    • ClusterRole and ClusterRoleBinding are needed to grant StackState Agents permissions to access the OpenShift API.
    • StackState Agents need to run in a privileged pod to be able to gather information on network connections and host information.


Install the OpenShift StackPack

Install the OpenShift StackPack from the StackState UI StackPacks > Integrations screen. You will need to provide the following parameter:
  • OpenShift Cluster Name - A name to identify the cluster. This does not need to match the cluster name used in kubeconfig; however, that is usually a good candidate for a unique name.
If the Agent StackPack is not already installed, it will be installed automatically together with the OpenShift StackPack. It is required by the StackState Agent, which needs to be deployed on each node in the OpenShift cluster.

Deploy the StackState Agent and Cluster Agent

For the OpenShift integration to retrieve topology, events and metrics data, you will need to have the following installed on your OpenShift cluster:
  • A StackState Agent on each node in the cluster
  • StackState Cluster Agent on one node
  • kube-state-metrics
To integrate with other services, a separate instance of the StackState Agent should be deployed on a standalone VM. It is not currently possible to configure a StackState Agent deployed on an OpenShift cluster with checks that integrate with other services.
The StackState Agent, Cluster Agent and kube-state-metrics can be installed together using the Cluster Agent Helm Chart:
  1.
    If you do not already have it, you will need to add the StackState helm repository to the local helm client:
helm repo add stackstate https://helm.stackstate.io
    helm repo update
  2.
    Deploy the StackState Agent, Cluster Agent and kube-state-metrics with the helm command provided in the StackState UI after you have installed the StackPack. For large OpenShift clusters, consider enabling cluster checks to run the kubernetes_state check in a StackState ClusterCheck Agent pod.
In addition to the variables included in the provided helm command, it is also recommended to provide a stackstate.cluster.authToken. This variable is optional; however, if it is not provided, a new random value will be generated each time a helm upgrade is performed. This could leave some pods in the cluster with an incorrect configuration.
For example:
helm upgrade --install \
--namespace stackstate \
--create-namespace \
--set-string 'stackstate.apiKey'='<your-api-key>' \
--set-string 'stackstate.cluster.name'='<your-cluster-name>' \
--set-string 'stackstate.cluster.authToken'='<your-cluster-token>' \
--set-string 'stackstate.url'='<your-stackstate-url>/receiver/stsAgent' \
--set 'agent.scc.enabled'=true \
--set 'kube-state-metrics.securityContext.enabled'=false \
stackstate-cluster-agent stackstate/cluster-agent
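One way to provide a stable stackstate.cluster.authToken is to generate a random value once and reuse it on every upgrade. A minimal sketch, assuming `openssl` is available (the variable name is illustrative):

```shell
# Generate a random 32-character hex string once and pass it as
# stackstate.cluster.authToken on every helm upgrade, so the value
# stays the same across upgrades instead of being regenerated.
CLUSTER_AUTH_TOKEN=$(openssl rand -hex 16)
echo "$CLUSTER_AUTH_TOKEN"
```

Store the generated value somewhere safe (for example, in the values.yaml used for upgrades) and pass it via `--set-string 'stackstate.cluster.authToken'="$CLUSTER_AUTH_TOKEN"`.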
Full details of the available values can be found in the Cluster Agent Helm Chart documentation.


Status

To check the status of the OpenShift integration, verify that the StackState Cluster Agent (cluster-agent) pod and all of the StackState Agent (cluster-agent-agent) pods have status ready.
❯ kubectl get deployment,daemonset --namespace stackstate
NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/stackstate-cluster-agent   1/1     1            1           5h14m

NAME                                             DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/stackstate-cluster-agent-agent    10        10        10      10           10          <none>          5h14m
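If you want to script this check, the DESIRED and READY counts can be compared directly in the output. A sketch (the helper name is invented here; it reads `kubectl get daemonset --no-headers` output on stdin and succeeds when the counts match):

```shell
# daemonset_ready: succeeds when the DaemonSet's DESIRED count (column 2)
# equals its READY count (column 4) in `kubectl get daemonset` output.
daemonset_ready() {
  awk '/^daemonset/ { exit !($2 == $4) }'
}

# Usage against a live cluster:
#   kubectl get daemonset --namespace stackstate --no-headers | daemonset_ready
```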


Cluster checks

Optionally, the chart can be configured to start additional StackState Agent V2 pods (1 by default) as StackState ClusterCheck Agent pods that run cluster checks. Cluster checks configured on the StackState Cluster Agent are then run by one of the deployed StackState ClusterCheck Agent pods.

Enable cluster checks

To enable cluster checks and the ClusterCheck Agent pods, create a values.yaml file to deploy the cluster-agent Helm chart and add the following YAML segment:
clusterChecks:
  enabled: true

kubernetes_state check

The kubernetes_state check is responsible for gathering metrics from kube-state-metrics and sending them to StackState. It is configured on the StackState Cluster Agent and runs in the StackState Agent pod that is on the same node as the kube-state-metrics pod.

Run as a cluster check

In a default deployment, the pod running the StackState Cluster Agent and every deployed StackState Agent need to be able to run the check. In a large OpenShift cluster, this can consume a lot of memory as every pod must be configured with sufficient CPU and memory requests and limits. Since only one of those Agent pods will actually run the check, a lot of CPU and memory resources will be allocated, but will not be used.
To remedy that situation, the kubernetes_state check can be configured to run as a cluster check. The YAML segment below shows how to do that in the values.yaml file used to deploy the cluster-agent chart:
clusterChecks:
  # clusterChecks.enabled -- Enables the cluster checks functionality _and_ the clustercheck pods.
  enabled: true
agent:
  config:
    override:
      # agent.config.override -- Disables kubernetes_state check on regular agent pods.
      - name: auto_conf.yaml
        path: /etc/stackstate-agent/conf.d/kubernetes_state.d
        data: |
clusterAgent:
  config:
    override:
      # clusterAgent.config.override -- Defines kubernetes_state check for clusterchecks agents. Auto-discovery
      # with ad_identifiers does not work here. Use a specific URL instead.
      - name: conf.yaml
        path: /etc/stackstate-agent/conf.d/kubernetes_state.d
        data: |
          cluster_check: true
          init_config:
          instances:
            - kube_state_url: http://YOUR_KUBE_STATE_METRICS_SERVICE_NAME:8080/metrics

Integration details

Data retrieved

The OpenShift integration retrieves the following data:


Events

The OpenShift integration retrieves all events from the OpenShift cluster. The table below shows which event category will be assigned to each event type in StackState:
| StackState event category | OpenShift events |
|:---|:---|
| Activities | BackOff, ContainerGCFailed, ExceededGracePeriod, FileSystemResizeSuccessful, ImageGCFailed, Killing, NodeAllocatableEnforced, NodeNotReady, NodeSchedulable, Preempting, Pulling, Pulled, Rebooted, Scheduled, Starting, Started, SuccessfulAttachVolume, SuccessfulDetachVolume, SuccessfulMountVolume, SuccessfulUnMountVolume, VolumeResizeSuccessful |
| Changes | Created (created container), NodeReady, SandboxChanged, SuccesfulCreate |
| Others | All other events |


Metrics

The OpenShift integration makes all metrics from the OpenShift cluster available in StackState. Relevant metrics are automatically mapped to the associated components.
All retrieved metrics can be browsed or added to a component as a telemetry stream. Select the data source StackState Metrics and type openshift in the Select box to get a full list of all available metrics.


Topology

The OpenShift integration retrieves components and relations for the OpenShift cluster.
The following OpenShift topology data is available in StackState as components:
Persistent Volume
The following relations between components are retrieved:
  • Container → PersistentVolume, Volume
  • CronJob → Job
  • DaemonSet → Pod
  • Deployment → ReplicaSet
  • Job → Pod
  • Ingress → Service
  • Namespace → CronJob, DaemonSet, Deployment, Job, ReplicaSet, Service, StatefulSet
  • Node → Cluster
  • Pod → ConfigMap, Container, Deployment, Node, PersistentVolume, Secret, Volume
  • ReplicaSet → Pod
  • Service → ExternalService, Pod
  • StatefulSet → Pod
  • Direct communication between processes
  • Process → Process communication via OpenShift service
  • Process → Process communication via headless OpenShift service


Traces

The OpenShift integration does not retrieve any traces data.


Tags

All tags defined in OpenShift will be retrieved and added to the associated components and relations in StackState.

REST API endpoints

The StackState Agent talks to the kubelet and kube-state-metrics API.
The StackState Agent and Cluster Agent connect to the OpenShift API to retrieve cluster-wide information and OpenShift events. The following API endpoints are used:
| Resource type | REST API endpoint |
|:---|:---|
| Metadata > ComponentStatus | GET /api/v1/componentstatuses |
| Metadata > ConfigMap | GET /api/v1/namespaces/{namespace}/configmaps |
| Metadata > Event | GET /apis/ |
| Metadata > Namespace | GET /api/v1/namespaces |
| Network > Endpoints | GET /api/v1/namespaces/{namespace}/endpoints |
| Network > Ingress | GET /apis/{namespace}/ingresses |
| Network > Service | GET /api/v1/namespaces/{namespace}/services |
| Node > Node | GET /api/v1/nodes |
| Security > Secret | GET /api/v1/secrets |
| Storage > PersistentVolumeClaimSpec | GET /api/v1/namespaces/{namespace}/persistentvolumeclaims |
| Storage > VolumeAttachment | GET /apis/ |
| Workloads > CronJob | GET /apis/batch/v1beta1/namespaces/{namespace}/cronjobs |
| Workloads > DaemonSet | GET /apis/apps/v1/namespaces/{namespace}/daemonsets |
| Workloads > Deployment | GET /apis/apps/v1/namespaces/{namespace}/deployments |
| Workloads > Job | GET /apis/batch/v1/namespaces/{namespace}/jobs |
| Workloads > PersistentVolume | GET /api/v1/persistentvolumes |
| Workloads > Pod | GET /api/v1/namespaces/{namespace}/pods |
| Workloads > ReplicaSet | GET /apis/apps/v1/namespaces/{namespace}/replicasets |
| Workloads > StatefulSet | GET /apis/apps/v1/namespaces/{namespace}/statefulsets |
For further details, refer to the OpenShift API documentation.
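Most of the namespaced endpoints in the table above follow the same /api/v1/namespaces/{namespace}/{resource} pattern. As a small illustration of the substitution (the helper function is invented for this example):

```shell
# core_endpoint: builds the core-API path for a namespaced resource,
# substituting {namespace} and the resource name as in the table above.
core_endpoint() {
  printf '/api/v1/namespaces/%s/%s' "$1" "$2"
}

core_endpoint stackstate pods   # → /api/v1/namespaces/stackstate/pods
```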

Component actions

A number of actions are added to StackState when the OpenShift StackPack is installed. They are available from the Actions section on the right of the screen when an OpenShift component is selected, or from the component context menu, displayed when you hover over an OpenShift component in the Topology Perspective.
| Action | Available for component types | Description |
|:---|:---|:---|
| Show configuration and storage | pods, containers | Displays the selected pod or container with its configmaps, secrets and volumes. |
| Show dependencies (deep) | deployment, replicaset, replicationcontroller, statefulset, daemonset, job, cronjob, pod | Displays all dependencies (up to 6 levels deep) of the selected pod or workload. |
| Show pods | deployment, replicaset, replicationcontroller, statefulset, daemonset, job, cronjob | Displays the pods for the selected workload. |
| Show pods & services | namespace | Opens a view for the pods/services in the selected namespace. |
| Show services | namespace | Opens a view for the service and ingress components in the selected namespace. |
| Show workloads | namespace | Shows the workloads in the selected namespace. |
Details of installed actions can be found in the StackState UI Settings > Actions > Component Actions screen.

OpenShift views in StackState

When the OpenShift integration is enabled, the following OpenShift views are available in StackState for each cluster:
  • OpenShift - Applications
  • OpenShift - Infrastructure
  • OpenShift - Namespaces
  • OpenShift - Workload Controllers

Open source

The code for the StackState Agent OpenShift check is open source and available on GitHub.


Troubleshooting

Troubleshooting steps for any known issues can be found in the StackState support Knowledge base.


Uninstall

To uninstall the OpenShift StackPack, go to the StackState UI StackPacks > Integrations > OpenShift screen and click UNINSTALL. All OpenShift StackPack specific configuration will be removed from StackState.
To uninstall the StackState Cluster Agent and the StackState Agent from your OpenShift cluster, run a Helm uninstall:
helm uninstall <release_name> --namespace <namespace>
# If you used the standard install command provided when you installed the StackPack
helm uninstall stackstate-cluster-agent --namespace stackstate

Release notes

OpenShift StackPack v3.7.1 (2021-04-02)
  • Improvement: Enable auto grouping on generated views.
  • Improvement: Update documentation.
  • Improvement: Common bumped from 2.4.3 to 2.5.0
  • Improvement: StackState min version bumped to 4.3.0
OpenShift StackPack v3.6.0 (2021-03-08)
  • Feature: Namespaces are now a component in StackState with a namespaces view for each cluster
  • Feature: New component actions for quick navigation on workloads, pods and namespaces
  • Feature: Added a "Pod Scheduled" metric stream to pods
  • Feature: Secrets are now a component in StackState
  • Improvement: The "Desired vs Ready" checks on workloads now use the "Ready replicas" stream instead of the replicas stream.
  • Improvement: Use standard (blue) Kubernetes icons
  • Bug Fix: Fixed Kubernetes synchronization when a component had no labels but only tags
OpenShift StackPack v3.5.2 (2020-08-18)
  • Feature: Introduced the Release notes pop-up for customers
OpenShift StackPack v3.5.1 (2020-08-10)
  • Feature: Introduced Kubernetes specific component identifiers
OpenShift StackPack v3.5.0 (2020-08-04)
  • Improvement: Deprecated stackpack specific layers and introduced a new common layer structure.
OpenShift StackPack v3.4.0 (2020-06-19)
  • Improvement: Set the stream priorities on all streams.

See also