Required Permissions
Overview
All of StackState's own components can run without any extra permissions. However, to install StackState successfully, you either need some additional privileges or must make sure that the requirements described on this page are met.
Autonomous Anomaly Detector (AAD)
To run the Autonomous Anomaly Detector, or to prepare your cluster to run it, StackState needs to create a ClusterRole and two ClusterRoleBinding resources. Creating these cluster-wide resources is often prohibited for users who aren't a Kubernetes/OpenShift administrator.
Disable automatic creation of cluster-wide resources
The automatic creation of cluster-wide resources during installation of StackState can be disabled by adding the following section to the values.yaml file used to install StackState:
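For example, a minimal sketch of what that section could look like. The exact key is defined by the StackState Helm chart and may differ per chart version; the anomaly-detection.clusterRole.enabled key used here is an assumption, so check the chart's default values (for example with helm show values) for the real name:

```yaml
# Assumed key names -- verify against the StackState chart's default values.yaml.
anomaly-detection:
  clusterRole:
    enabled: false
```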
Note that if the automatic creation of cluster-wide resources is disabled, the Autonomous Anomaly Detector will NOT be able to authenticate against the running StackState installation unless you manually create the cluster-wide resources.
Manually create cluster-wide resources
If you need to manually create the cluster-wide resources, ask your Kubernetes/OpenShift administrator to create the 3 resources below in the cluster. Ensure that you specify the correct namespace for the bound ServiceAccount in both of the ClusterRoleBinding resources.
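The actual manifests (names, rules and the ServiceAccount to bind) are generated by the StackState Helm chart and aren't reproduced here; one way to obtain them is to render the chart with helm template and extract the ClusterRole and ClusterRoleBinding documents. Purely as an illustration of their shape (every name and rule below is a placeholder), note where the namespace of the bound ServiceAccount is set:

```yaml
# Placeholder shapes only; not the actual resources from the StackState chart.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: example-stackstate-clusterrole          # placeholder
rules:
  - apiGroups: [""]
    resources: ["namespaces"]                   # placeholder rule
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: example-stackstate-clusterrolebinding   # placeholder
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: example-stackstate-clusterrole
subjects:
  - kind: ServiceAccount
    name: example-serviceaccount                # placeholder
    # Must be the namespace StackState is installed in.
    namespace: stackstate
```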
Elasticsearch
StackState uses Elasticsearch to store its indices. There are some additional requirements for the nodes that Elasticsearch runs on.
As the vm.max_map_count Linux system setting is usually lower than required for Elasticsearch to start, an init container is used that runs in privileged mode and as the root user. The init container is enabled by default to allow the vm.max_map_count system setting to be changed.
Disable the privileged Elasticsearch init container
If you or your Kubernetes/OpenShift administrators don't want the privileged Elasticsearch init container to be enabled by default, you can disable this behavior in the values.yaml file used to install StackState:
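For example, assuming the Elasticsearch dependency is configured under the elasticsearch key and follows the bundled Elasticsearch chart's sysctlInitContainer value (verify against the chart version you install):

```yaml
elasticsearch:
  sysctlInitContainer:
    # Don't run the privileged init container that raises vm.max_map_count.
    enabled: false
```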
If this is disabled, you will need to ensure that the vm.max_map_count setting is changed from its common default value of 65530 to 262144. If this isn't done, Elasticsearch will fail to start up and its pods will be in a restart loop.
To inspect the current vm.max_map_count setting, run the following command. Note that it runs a privileged pod:
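One way to do this, assuming your cluster's admission policies allow privileged pods and your kubectl version supports the --privileged flag of kubectl run:

```sh
# Runs a short-lived privileged busybox pod on a node and prints the current value.
kubectl run -it --rm vm-max-map-count-check \
  --image=busybox \
  --restart=Never \
  --privileged \
  -- sysctl vm.max_map_count
```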
If the current vm.max_map_count setting isn't at least 262144, it will need to be increased in a different way or Elasticsearch will fail to start up and its pods will be in a restart loop. The logs will contain an error message like this:
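For Elasticsearch this surfaces as a failed bootstrap check, which typically looks like:

```
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
```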
Increase Linux system settings for Elasticsearch
If your Kubernetes/OpenShift administrators prefer, the vm.max_map_count setting can be set to a higher default on all nodes. To do this, either change the default node configuration (for example, via init scripts) or have a DaemonSet apply it straight after node startup. The former option depends heavily on your cluster setup, so there is no general solution for it.
Below is an example that can be used as a starting point for a DaemonSet that changes the vm.max_map_count setting:
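A minimal sketch of such a DaemonSet; the names, labels and images are illustrative. A privileged init container raises the setting once per node, after which a small pause container keeps the pod alive:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: set-vm-max-map-count
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: set-vm-max-map-count
  template:
    metadata:
      labels:
        app: set-vm-max-map-count
    spec:
      # Optional: only run on nodes labeled for Elasticsearch (see below).
      nodeSelector:
        stackstate.io/elasticsearch: "true"   # illustrative label
      initContainers:
        - name: sysctl
          image: busybox:1.36
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
            runAsUser: 0
      containers:
        # Keeps the pod running after the init container has applied the setting.
        - name: pause
          image: registry.k8s.io/pause:3.9
          resources:
            requests:
              cpu: 1m
              memory: 8Mi
```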
To limit the number of nodes that this is applied to, nodes can be labeled. Node selectors on both this DaemonSet (as shown in the example) and the Elasticsearch deployment can then be set so that they run only on nodes with the specific label. For Elasticsearch, the node selector can be specified via the values:
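For example, reusing the illustrative node label from the DaemonSet sketch above (the nodeSelector key follows the bundled Elasticsearch chart's values):

```yaml
elasticsearch:
  nodeSelector:
    stackstate.io/elasticsearch: "true"   # illustrative label
```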