Required Permissions

Overview

All of StackState's own components can run without any extra permissions. However, to install StackState successfully, you either need some additional privileges or must ensure that the requirements described on this page are met.

Autonomous Anomaly Detector (AAD)

In order to run the Autonomous Anomaly Detector, or to prepare your Kubernetes cluster to run it, StackState needs to create a ClusterRole and two ClusterRoleBinding resources. Creating these cluster-wide resources is often prohibited for users who are not Kubernetes administrators.

Disable automatic creation of cluster-wide resources

The automatic creation of cluster-wide resources during installation of StackState can be disabled by adding the following section to the values.yaml file used to install StackState:
values.yaml

```yaml
cluster-role:
  enabled: false
```
Note that if automatic creation of cluster-wide resources is disabled, the Autonomous Anomaly Detector will NOT be able to authenticate against the running StackState installation unless you manually create the cluster-wide resources.

Manually create cluster-wide resources

If you need to manually create the cluster-wide resources, ask your Kubernetes administrator to create the three resources below in the Kubernetes cluster.
Ensure that you specify the correct namespace for the bound ServiceAccount in both ClusterRoleBinding resources.
clusterrole-authorization.yaml

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: stackstate-authorization
rules:
- apiGroups:
  - rbac.authorization.k8s.io
  resources:
  - rolebindings
  verbs:
  - list
```
clusterrolebinding-authentication.yaml

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: stackstate-authentication
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: stackstate-api
  namespace: stackstate
```
clusterrolebinding-authorization.yaml

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: stackstate-authorization
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: stackstate-authorization
subjects:
- kind: ServiceAccount
  name: stackstate-api
  namespace: stackstate
```
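Once the namespace has been adjusted, a cluster administrator can apply the three manifests in one step. This is a sketch assuming the files were saved under the filenames shown above:

```shell
# Create the ClusterRole and both ClusterRoleBindings:
kubectl apply \
  -f clusterrole-authorization.yaml \
  -f clusterrolebinding-authentication.yaml \
  -f clusterrolebinding-authorization.yaml

# Verify that the resources were created:
kubectl get clusterrole stackstate-authorization
kubectl get clusterrolebinding stackstate-authentication stackstate-authorization
```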

Elasticsearch

StackState uses Elasticsearch to store its indices. There are some additional requirements for the nodes that Elasticsearch runs on.
As the vm.max_map_count Linux system setting is usually lower than Elasticsearch requires in order to start, an init container that runs in privileged mode and as the root user is used to raise it. This init container is enabled by default so that the vm.max_map_count system setting can be changed.

Disable the privileged Elasticsearch init container

If you or your Kubernetes administrators do not want the privileged Elasticsearch init container to run, you can disable it in the values.yaml file used to install StackState:
values.yaml

```yaml
elasticsearch:
  sysctlInitContainer:
    enabled: false
```
If this is disabled, you will need to ensure that the vm.max_map_count setting is changed from its common default value of 65530 to at least 262144. If this is not done, Elasticsearch will fail to start up and its pods will enter a restart loop.
To inspect the current vm.max_map_count setting, run the following command. Note that it runs a privileged pod:

```shell
kubectl run -i --tty sysctl-check-max-map-count --privileged=true --image=busybox --restart=Never --rm=true -- sysctl vm.max_map_count
```
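If you have shell access to a node, the same value can also be read directly from procfs without starting a privileged pod. A minimal sketch, to be run on each node that may host Elasticsearch:

```shell
# Read the node's current vm.max_map_count from procfs and compare it
# against the minimum value that Elasticsearch requires.
current=$(cat /proc/sys/vm/max_map_count)
echo "vm.max_map_count=${current}"
if [ "${current}" -ge 262144 ]; then
  echo "sufficient for Elasticsearch"
else
  echo "too low - must be raised to at least 262144"
fi
```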
If the current vm.max_map_count setting is less than 262144, it will need to be increased in a different way; otherwise Elasticsearch will fail to start up and its pods will be in a restart loop. The logs will contain an error message like this:

```
bootstrap checks failed
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
```

Increase Linux system settings for Elasticsearch

Depending on what your Kubernetes administrators prefer, vm.max_map_count can be set to a higher default on all nodes either by changing the default node configuration (for example via init scripts) or by having a DaemonSet do this right after node startup. The former is very dependent on your Kubernetes cluster setup, so there is no general solution for it.
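As a sketch of the init-script approach, assuming root access on each node that will run Elasticsearch, the setting can be applied immediately and persisted across reboots:

```shell
# Apply the new value immediately (requires root):
sysctl -w vm.max_map_count=262144

# Persist it across reboots via a sysctl configuration drop-in
# (the file name is an arbitrary example):
echo 'vm.max_map_count=262144' > /etc/sysctl.d/99-elasticsearch.conf
```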
Below is an example that can be used as a starting point for a DaemonSet to change the vm.max_map_count setting:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: set-vm-max-map-count
  labels:
    k8s-app: set-vm-max-map-count
spec:
  selector:
    matchLabels:
      name: set-vm-max-map-count
  template:
    metadata:
      labels:
        name: set-vm-max-map-count
    spec:
      # Make sure the setting always gets changed as soon as possible:
      tolerations:
      - effect: NoSchedule
        operator: Exists
      - effect: NoExecute
        key: node.kubernetes.io/not-ready
        operator: Exists
      # Optional node selector (assumes nodes for Elasticsearch are labeled `elasticsearch: "yes"`):
      # nodeSelector:
      #   elasticsearch: "yes"
      initContainers:
      - name: set-vm-max-map-count
        image: busybox
        securityContext:
          runAsUser: 0
          privileged: true
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
      # A pause container is needed to prevent a restart loop of the pods in the DaemonSet.
      # See also this Kubernetes issue: https://github.com/kubernetes/kubernetes/issues/36601
      containers:
      - name: pause
        image: google/pause
        resources:
          limits:
            cpu: 50m
            memory: 50Mi
          requests:
            cpu: 50m
            memory: 50Mi
```
To limit the number of nodes that this is applied to, nodes can be labeled. Node selectors on both this DaemonSet (as shown in the example) and the Elasticsearch deployment can then be set so that both run only on nodes with the specific label. For Elasticsearch, the node selector can be specified via the values:
```yaml
elasticsearch:
  nodeSelector:
    elasticsearch: "yes"
  sysctlInitContainer:
    enabled: false
```
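To attach that label to the nodes that should run Elasticsearch, a cluster administrator can use kubectl. The node names below are placeholders for your own:

```shell
# Label the nodes that should run Elasticsearch; "worker-1" and "worker-2"
# are placeholder node names.
kubectl label nodes worker-1 worker-2 elasticsearch="yes"

# Confirm which nodes carry the label:
kubectl get nodes -l elasticsearch=yes
```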
