Data retention
Configure the data retention parameters
This page describes StackState v4.4.x.
The StackState 4.4 version range is End of Life (EOL) and no longer supported. We encourage customers still running the 4.4 version range to upgrade to a more recent release.
Overview
StackState imposes data retention limits to save storage space and improve performance. You can configure the data retention period to provide a balance between the amount of data stored, StackState performance, and data availability.
Retention of topology graph data
By default, topology graph data is retained for 8 days. The latest state of the topology graph is always retained; only history older than 8 days is removed. You can check and alter the configured retention period using the StackState CLI.
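The sketch below assumes the CLI exposes a graph retention subcommand for this; the exact command names may differ per CLI version, so verify them with the CLI's built-in help.

```
# Assumed subcommand name - verify with your CLI version's help output.
# Print the currently configured retention window (in milliseconds):
sts graph retention get-window
```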
In some cases, it may be useful to keep historical data for more than 8 days. For example, to extend the retention window to 10 days:
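A sketch under the same naming assumption as above:

```
# Assumed subcommand and option names - verify with your CLI version's help output.
# Set the retention window to 10 days (864000000 ms):
sts graph retention set-window --window 864000000
```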
(Note that the time value is provided in milliseconds - 10 days equals 864000000 milliseconds.)
Note that extending the data retention period will increase the amount of data stored and require more storage space. It may also affect the performance of Views.
After changing the retention period to a smaller window, some data may already have expired; it will remain until the next scheduled cleanup. To schedule an additional removal of expired data, use the following command:
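A sketch, again assuming the graph retention subcommand naming used above:

```
# Assumed subcommand name - verify with your CLI version's help output.
# Schedule an additional removal of data that has already expired:
sts graph retention schedule-removal-expired-data
```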
Note that this may take some time to have an effect.
However, if you would like to remove expired data right away, without waiting for the scheduled cleanup, you can add the --immediately argument:
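For example, under the same naming assumption:

```
# Assumed subcommand name; the --immediately flag removes expired data right away.
sts graph retention schedule-removal-expired-data --immediately
```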
Retention of events, metrics and traces
StackState data store
If you are using the event/metrics/traces store provided with StackState, your data will by default be retained for 30 days. In most cases, the default settings will be sufficient to store all indices for this amount of time.
Configure disk space for Elasticsearch
In some circumstances, it may be necessary to adjust the disk space available to Elasticsearch and how it is allocated to each index group, for example if you anticipate that a lot of data will arrive for a specific index.
The settings can be adjusted by using environment variables to override the default configuration of the parameters described below.
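As an illustration only: the sketch below assumes an installation where configuration values can be overridden through CONFIG_FORCE_-style environment variables; the exact variable name, and any additional configuration namespace prefix, depend on your setup and should be treated as placeholders.

```
# Hypothetical override - the variable naming is an assumption, not a confirmed setting name.
# It would correspond to the configuration path kafkaGenericEventsToES.elasticsearch.index.maxIndicesRetained.
CONFIG_FORCE_kafkaGenericEventsToES_elasticsearch_index_maxIndicesRetained=20
```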
Note that elasticsearchDiskSpaceMB will scale automatically based on the disk space available to Elasticsearch in Kubernetes.
| Parameter | Default | Description |
| --- | --- | --- |
| elasticsearchDiskSpaceMB | 400000 | The total disk space assigned to Elasticsearch in MB. The default setting is the recommended disk space for a StackState production setup (400GB). |
| splittingStrategy | "days" | The frequency of creating new indices. Can be one of "none", "hours", "days", "months" or "years". If "none" is specified, only one index will be used. |
| maxIndicesRetained | 30 | The number of indices that will be retained in each index group. Together with the splittingStrategy, governs how long historical data will be kept in Elasticsearch. |
| diskSpaceWeight | Varies per index group | Defines the share of disk space an index will get based on the total elasticsearchDiskSpaceMB. If set to 0, no disk space will be allocated to the index. See the disk space weight examples below. |
| replicas | Linux: 0, Kubernetes: 1 | The number of nodes that a single piece of data should be available on. Use for redundancy/high availability when more than one Elasticsearch node is available. |
Disk space weight examples
Use the diskSpaceWeight configuration parameter to adjust how available disk space is allocated across Elasticsearch index groups. This is helpful if, for example, you expect a lot of data to arrive in a single index. Below are some examples of disk space weight configuration.
Allocate no disk space to an index group
Setting diskSpaceWeight to 0 will result in no disk space being allocated to an index group. For example, if you are not going to use traces, then you can stop reserving disk space for this index group and make it available to other index groups with the setting:
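Mirroring the index configuration format used in the disk space weight examples below, the trace index group could be configured as follows:

```
kafkaTraceToES.elasticsearch.index {
  # No disk space will be reserved for the trace index group
  diskSpaceWeight = 0
}
```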
Distribute disk space unevenly across index groups
The available disk space (the configured elasticsearchDiskSpaceMB) will be allocated to index groups proportionally, based on their configured diskSpaceWeight. Disk space is allocated to each index group according to the formula below and is then shared between the indices in the index group:
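The formula, reconstructed from the worked example that follows:

```
index group disk space (MB) = elasticsearchDiskSpaceMB * diskSpaceWeight / (sum of diskSpaceWeight over all index groups)
```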
For example, with elasticsearchDiskSpaceMB = 300000 and the sum of all configured weights equal to 15, disk space would be allocated to the index groups and their indices as follows:
| Example configuration | Disk space allocated |
| --- | --- |
| kafkaMetricsToES.elasticsearch.index { diskSpaceWeight = 0, maxIndicesRetained = 20 } | 0 MB |
| kafkaMultiMetricsToES.elasticsearch.index { diskSpaceWeight = 1, maxIndicesRetained = 20 } | 20,000 MB (300,000 × 1/15) |
| kafkaGenericEventsToES.elasticsearch.index { diskSpaceWeight = 2, maxIndicesRetained = 20 } | 40,000 MB (300,000 × 2/15) |
| kafkaTopologyEventsToES.elasticsearch.index { diskSpaceWeight = 3, maxIndicesRetained = 20 } | 60,000 MB (300,000 × 3/15) |
| kafkaStateEventsToES.elasticsearch.index { diskSpaceWeight = 4, maxIndicesRetained = 20 } | 80,000 MB (300,000 × 4/15) |
| kafkaStsEventsToES.elasticsearch.index { diskSpaceWeight = 5, maxIndicesRetained = 20 } | 100,000 MB (300,000 × 5/15) |
| kafkaTraceToES.elasticsearch.index { diskSpaceWeight = 0, maxIndicesRetained = 20 } | 0 MB |
External data store
If you have configured your own data source to be accessed by StackState, the retention policy is determined by the metric/event store that you have connected.