The OpenShift integration provides near real-time synchronization of topology and associated internal services from an OpenShift cluster to StackState. This StackPack allows monitoring of the following:
Nodes, pods, containers and services
ConfigMaps, secrets and volumes
The OpenShift integration collects topology data in an OpenShift cluster as well as metrics and events.
Data is retrieved by the deployed StackState Agents and then pushed to StackState via the Agent StackPack and the OpenShift StackPack.
Topology data is translated into components and relations.
Tags defined in OpenShift are added to components and relations in StackState.
Relevant metrics data is mapped to associated components and relations in StackState. All retrieved metrics data is stored and accessible within StackState.
Events are available in the StackState Events Perspective and listed in the details pane of the StackState UI.
The OpenShift integration collects topology data in an OpenShift cluster as well as metrics and events. To achieve this, different types of StackState Agent are used:
StackState Cluster Agent is deployed as a Deployment. There is one instance for the entire OpenShift cluster:
Topology and events data for all resources in the cluster are retrieved from the OpenShift API
Control plane metrics are retrieved from the OpenShift API
When cluster checks are enabled, the cluster checks configured on the StackState Cluster Agent are run by one of the deployed StackState ClusterCheck Agent pods.
StackState Agent V2 is deployed as a DaemonSet with one instance on each node in the OpenShift cluster:
Host information is retrieved from the OpenShift API.
Container information is collected from the Docker daemon.
Metrics are retrieved from the kubelet running on the node.
By default, metrics are also retrieved from kube-state-metrics if that is deployed on the same node as the StackState Agent pod. This can cause issues on a large OpenShift cluster. To avoid this, it is advisable to enable cluster checks so that metrics are gathered from kube-state-metrics by a dedicated StackState ClusterCheck Agent.
Deployed only when clusterChecks.enabled is set to true in the values.yaml used to deploy the StackState Cluster Agent. When deployed, the default is one instance per cluster. When enabled, cluster checks configured on the StackState Cluster Agent are run by one of the deployed StackState ClusterCheck Agent pods. This is useful for checks that do not need to run on a specific node and for monitoring non-containerized workloads, such as:
Out-of-cluster datastores and endpoints (for example, RDS or CloudSQL).
Load-balanced cluster services (for example, Kubernetes services).
Read how to enable cluster checks.
The following prerequisites are required to install the OpenShift StackPack and deploy the StackState Agent and Cluster Agent:
An OpenShift cluster must be up and running.
A recent version of Helm 3.
A user with permissions to create privileged pods, ClusterRoles, ClusterRoleBindings and SCCs:
ClusterRole and ClusterRoleBinding are needed to grant StackState Agents permissions to access the OpenShift API.
StackState Agents need to run in a privileged pod to be able to gather information on network connections and host information.
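As a quick sanity check (illustrative commands, not part of the official instructions), you can verify that your user is allowed to create the cluster-scoped resources listed above:

```shell
# Check RBAC permissions for the resources the Agent deployment will create.
# Each command prints "yes" or "no".
kubectl auth can-i create clusterroles
kubectl auth can-i create clusterrolebindings
# SecurityContextConstraints are OpenShift-specific
kubectl auth can-i create securitycontextconstraints
```

All three commands should print `yes` before you continue with the installation.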
Install the OpenShift StackPack from the StackState UI StackPacks > Integrations screen. You will need to provide the following parameters:
OpenShift Cluster Name - A name to identify the cluster. This does not need to match the cluster name used in kubeconfig; however, that is usually a good candidate for a unique name.
If the Agent StackPack is not already installed, it will be installed automatically together with the OpenShift StackPack. It is required to work with the StackState Agent, which will need to be deployed on each node in the OpenShift cluster.
For the OpenShift integration to retrieve topology, events and metrics data, you will need to have the following installed on your OpenShift cluster:
A StackState Agent on each node in the cluster
StackState Cluster Agent on one node
The StackState Agent, Cluster Agent and kube-state-metrics can be installed together using the Cluster Agent Helm Chart:
If you do not already have it, you will need to add the StackState helm repository to the local helm client:
helm repo add stackstate https://helm.stackstate.io
helm repo update
Deploy the StackState Agent, Cluster Agent and kube-state-metrics with the helm command provided in the StackState UI after you have installed the StackPack. For large OpenShift clusters, consider enabling cluster checks to run the kubernetes_state check in a StackState ClusterCheck Agent pod.
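The exact command, including your StackState receiver URL and API key, is shown in the StackState UI after installing the StackPack. As an illustration only (the release name, namespace and value keys below are assumptions based on the chart, not the authoritative command):

```shell
# Hypothetical deployment of the cluster-agent chart; substitute the
# placeholders with the values shown in the StackState UI.
helm upgrade --install stackstate-cluster-agent stackstate/cluster-agent \
  --namespace stackstate --create-namespace \
  --set stackstate.apiKey='<STACKSTATE_API_KEY>' \
  --set stackstate.cluster.name='<OPENSHIFT_CLUSTER_NAME>' \
  --set stackstate.url='<STACKSTATE_RECEIVER_URL>'
```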
Full details of the available values can be found in the Cluster Agent Helm Chart documentation (github.com).
To check the status of the OpenShift integration, check that the StackState Cluster Agent (cluster-agent) pod and all of the StackState Agent (cluster-agent-agent) pods have status ready.
❯ kubectl get deployment,daemonset --namespace stackstate
NAME                                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/stackstate-cluster-agent   1/1     1            1           5h14m

NAME                                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/stackstate-cluster-agent-agent   10        10        10      10           10          <none>          5h14m
Optionally, the chart can be configured to start additional StackState Agent V2 pods (1 by default) as StackState ClusterCheck Agent pods that run cluster checks. Cluster checks configured on the StackState Cluster Agent are then run by one of the deployed StackState ClusterCheck Agent pods.
To enable cluster checks and the cluster check Agent pods, create a values.yaml file to deploy the cluster-agent Helm chart and add the following YAML segment:
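For example, a minimal segment that switches on cluster checks (mirroring the clusterChecks section of the larger values.yaml example further down this page):

```yaml
clusterChecks:
  # Enables the cluster checks functionality and the StackState ClusterCheck Agent pods
  enabled: true
```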
The kubernetes_state check is responsible for gathering metrics from kube-state-metrics and sending them to StackState. It is configured on the StackState Cluster Agent and runs in the StackState Agent pod that is on the same node as the kube-state-metrics pod.
In a default deployment, the pod running the StackState Cluster Agent and every deployed StackState Agent need to be able to run the check. In a large OpenShift cluster, this can consume a lot of memory as every pod must be configured with sufficient CPU and memory requests and limits. Since only one of those Agent pods will actually run the check, a lot of CPU and memory resources will be allocated, but will not be used.
To remedy that situation, the kubernetes_state check can be configured to run as a cluster check. The YAML segment below shows how to do that in the values.yaml file used to deploy the cluster-agent Helm chart:
clusterChecks:
  # clusterChecks.enabled -- Enables the cluster checks functionality _and_ the clustercheck pods.
  enabled: true
agent:
  config:
    override:
      # agent.config.override -- Disables kubernetes_state check on regular agent pods.
      - name: auto_conf.yaml
        path: /etc/stackstate-agent/conf.d/kubernetes_state.d
        data: |
clusterAgent:
  config:
    override:
      # clusterAgent.config.override -- Defines kubernetes_state check for clusterchecks agents. Auto-discovery
      # with ad_identifiers does not work here. Use a specific URL instead.
      - name: conf.yaml
        path: /etc/stackstate-agent/conf.d/kubernetes_state.d
        data: |
          cluster_check: true
          init_config:
          instances:
            - kube_state_url: http://YOUR_KUBE_STATE_METRICS_SERVICE_NAME:8080/metrics
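With that values.yaml in place, redeploy the chart so the change takes effect (release and namespace names here match the ones used elsewhere on this page; replace YOUR_KUBE_STATE_METRICS_SERVICE_NAME in the file first):

```shell
# Re-run the Helm deployment with the cluster checks configuration applied
helm upgrade --install stackstate-cluster-agent stackstate/cluster-agent \
  --namespace stackstate \
  --values values.yaml
```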
The OpenShift integration retrieves the following data:
The OpenShift integration retrieves all events from the OpenShift cluster. Each OpenShift event type is assigned a StackState event category; all other events receive a default category.
The OpenShift integration makes all metrics from the OpenShift cluster available in StackState. Relevant metrics are automatically mapped to the associated components.
All retrieved metrics can be browsed or added to a component as a telemetry stream. Select the data source StackState Metrics and type
openshift in the Select box to get a full list of all available metrics.
The OpenShift integration retrieves components and relations for the OpenShift cluster.
The following OpenShift topology data is available in StackState as components:
The following relations between components are retrieved:
Container → PersistentVolume, Volume
CronJob → Job
DaemonSet → Pod
Deployment → ReplicaSet
Job → Pod
Ingress → Service
Namespace → CronJob, DaemonSet, Deployment, Job, ReplicaSet, Service, StatefulSet
Node → Cluster
Pod → ConfigMap, Container, Deployment, Node, PersistentVolume, Secret, Volume
ReplicaSet → Pod
Service → ExternalService, Pod
StatefulSet → Pod
Direct communication between processes
Process → Process communication via OpenShift service
Process → Process communication via headless OpenShift service
The OpenShift integration does not retrieve any trace data.
All tags defined in OpenShift will be retrieved and added to the associated components and relations in StackState.
The StackState Agent talks to the kubelet and kube-state-metrics API.
The StackState Agent and Cluster Agent connect to the OpenShift API to retrieve cluster-wide information and OpenShift events. The following API endpoints are used:
REST API endpoint
Metadata > ComponentStatus
Metadata > ConfigMap
Metadata > Event
Metadata > Namespace
Network > Endpoints
Network > Ingress
Network > Service
Node > Node
Security > Secret
Storage > PersistentVolumeClaim
Storage > VolumeAttachment
Workloads > CronJob
Workloads > DaemonSet
Workloads > Deployment
Workloads > Job
Workloads > PersistentVolume
Workloads > Pod
Workloads > ReplicaSet
Workloads > StatefulSet
For further details, refer to the OpenShift API documentation (openshift.com).
A number of actions are added to StackState when the OpenShift StackPack is installed. They are available from the Actions section on the right of the screen when an OpenShift component is selected, or from the component context menu displayed when you hover over an OpenShift component in the Topology Perspective.
Show configuration and storage (pod, container) - Displays the selected pod or container with its ConfigMaps, Secrets and Volumes.
Show dependencies (deep) (deployment, replicaset, replicationcontroller, statefulset, daemonset, job, cronjob, pod) - Displays all dependencies (up to 6 levels deep) of the selected pod or workload.
Show pods (deployment, replicaset, replicationcontroller, statefulset, daemonset, job, cronjob) - Displays the pods for the selected workload.
Show pods & services (namespace) - Opens a view for the pods and services in the selected namespace.
Show services (namespace) - Opens a view for the service and ingress components in the selected namespace.
Show workloads (namespace) - Shows the workloads in the selected namespace.
Details of installed actions can be found in the StackState UI Settings > Actions > Component Actions screen.
When the OpenShift integration is enabled, the following OpenShift views are available in StackState for each cluster:
OpenShift - Applications
OpenShift - Infrastructure
OpenShift - Namespaces
OpenShift - Workload Controllers
The code for the StackState Agent OpenShift check is open source and available on GitHub.
Troubleshooting steps for any known issues can be found in the StackState support Knowledge base.
To uninstall the OpenShift StackPack, go to the StackState UI StackPacks > Integrations > OpenShift screen and click UNINSTALL. All OpenShift StackPack specific configuration will be removed from StackState.
To uninstall the StackState Cluster Agent and the StackState Agent from your OpenShift cluster, run a Helm uninstall:
helm uninstall <release_name> --namespace <namespace>

# If you used the standard install command provided when you installed the StackPack
helm uninstall stackstate-cluster-agent --namespace stackstate
OpenShift StackPack v3.7.1 (2021-04-02)
Improvement: Enable auto grouping on generated views.
Improvement: Update documentation.
Improvement: Common bumped from 2.4.3 to 2.5.0.
Improvement: StackState minimum version bumped to 4.3.0.
OpenShift StackPack v3.6.0 (2021-03-08)
Feature: Namespaces are now a component in StackState with a namespaces view for each cluster
Feature: New component actions for quick navigation on workloads, pods and namespaces
Feature: Added a "Pod Scheduled" metric stream to pods
Feature: Secrets are now a component in StackState
Improvement: The "Desired vs Ready" checks on workloads now use the "Ready replicas" stream instead of the replicas stream.
Improvement: Use standard (blue) Kubernetes icons
Bug Fix: Fixed Kubernetes synchronization when a component had no labels but only tags
OpenShift StackPack v3.5.2 (2020-08-18)
Feature: Introduced the release notes pop-up for customers.
OpenShift StackPack v3.5.1 (2020-08-10)
Feature: Introduced Kubernetes specific component identifiers
OpenShift StackPack v3.5.0 (2020-08-04)
Improvement: Deprecated StackPack-specific layers and introduced a new common layer structure.
OpenShift StackPack v3.4.0 (2020-06-19)
Improvement: Set the stream priorities on all streams.