Data retention
Configure the data retention parameters
This page describes StackState version 4.3.
The StackState 4.3 version range is End of Life (EOL) and no longer supported. We encourage customers still running the 4.3 version range to upgrade to a more recent release.
Overview
StackState imposes data retention limits to save storage space and improve performance. You can configure the data retention period to provide a balance between the amount of data stored, StackState performance, and data availability.
Retention of topology graph data
By default, topology graph data will be retained for 8 days. The latest state of the topology graph is always retained; only history older than 8 days is removed. You can check and alter the configured retention period using the StackState CLI.
In some cases, it may be useful to keep historical data for more than eight days. Note that the retention period is specified in milliseconds; for example, 10 days equals 864000000 milliseconds.
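For example, checking the current window and extending it to 10 days could look as follows. This is a sketch only: the sts subcommand names are an assumption, so consult the CLI reference for your StackState version for the exact syntax.

```
# Assumed subcommand names - check the StackState CLI reference for your version.
# Show the currently configured topology retention window:
sts graph retention get-window

# Extend the retention window to 10 days (864000000 milliseconds):
sts graph retention set-window --window 864000000
```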
Please note that extending the data retention period also increases the amount of data stored, and with it the required storage space. It may also affect the performance of Views.
After the new retention window is applied, you can schedule a removal of data that falls outside it. This is particularly relevant after shrinking the retention window, as data that has already expired would otherwise only be removed by the next scheduled cleanup. To schedule an additional removal of expired data, use the following command:
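The sketch below assumes the retention commands live under a graph retention group in the sts CLI; the exact subcommand name may differ per CLI version, so check the CLI reference.

```
# Assumed subcommand - schedules an (additional) cleanup of topology data
# that has already fallen outside the retention window.
sts graph retention remove-expired-data
```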
Please note that it may take some time before this takes effect.
However, if you would like to delete the expired data immediately, without waiting for an additional scheduled cleanup, you can use the --immediately argument:
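Again a sketch with an assumed subcommand name; only the --immediately argument itself is documented above.

```
# Assumed subcommand name - deletes expired topology data right away
# instead of waiting for the next scheduled cleanup.
sts graph retention remove-expired-data --immediately
```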
Retention of events, metrics and traces
StackState data store
If you are using the event/metrics/traces store provided with StackState, your data will by default be retained for 30 days. In most cases, the default settings will be sufficient to store all indices for this amount of time.
Configure disk space for Elasticsearch
In some circumstances it may be necessary to adjust the disk space available to Elasticsearch and how it is allocated to each index group, for example if you anticipate a lot of data arriving for a specific index.
The settings can be adjusted by using environment variables to override the default configuration of the parameters described below.
Note that elasticsearchDiskSpaceMB will scale automatically based on the disk space available to Elasticsearch in Kubernetes.
| Parameter | Default | Description |
| --- | --- | --- |
| elasticsearchDiskSpaceMB | 400000 | The total disk space assigned to Elasticsearch in MB. The default setting is the recommended disk space for a StackState production setup (400GB). |
| splittingStrategy | "days" | The frequency of creating new indices. Can be one of "none", "hours", "days", "months" or "years". If "none" is specified, only one index will be used. |
| maxIndicesRetained | 30 | The number of indices that will be retained in each index group. Together with the splittingStrategy, this governs how long historical data will be kept in Elasticsearch. |
| diskSpaceWeight | Varies per index group | Defines the share of disk space an index will get based on the total elasticsearchDiskSpaceMB. If set to 0, no disk space will be allocated to the index. See the disk space weight examples below. |
| replicas | Linux: 0, Kubernetes: 1 | The number of nodes that a single piece of data should be available on. Use for redundancy/high availability when more than one Elasticsearch node is available. |
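As an illustration, a set of overrides could look like the snippet below. The parameter paths mirror the index group examples further down this page; where these overrides live (Helm values on Kubernetes, the StackState configuration file or environment variables on Linux) depends on your installation, so treat this as a sketch rather than a literal configuration file.

```
# Illustrative overrides only - parameter names are taken from the table above,
# the surrounding file/section depends on your installation.
elasticsearchDiskSpaceMB = 300000

# Reserve no disk space for traces and retain 20 indices of generic events:
kafkaTraceToES.elasticsearch.index.diskSpaceWeight = 0
kafkaGenericEventsToES.elasticsearch.index {
  diskSpaceWeight    = 2
  maxIndicesRetained = 20
}
```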
Disk space weight examples
Use the diskSpaceWeight configuration parameter to adjust how available disk space is allocated across Elasticsearch index groups. This is helpful if, for example, you expect a lot of data to arrive in a single index. Below are some examples of disk space weight configuration.
Allocate no disk space to an index group
Setting diskSpaceWeight to 0 will result in no disk space being allocated to an index group. For example, if you are not going to use traces, then you can stop reserving disk space for this index group and make it available to other index groups by setting kafkaTraceToES.elasticsearch.index.diskSpaceWeight = 0.
Distribute disk space unevenly across index groups
The available disk space (the configured elasticsearchDiskSpaceMB) will be allocated to index groups proportionally, based on their configured diskSpaceWeight. Disk space will be allocated to each index group according to the formula below and then shared equally between the indices in the index group (the configured maxIndicesRetained):
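Expressed as a formula (consistent with the worked examples in the table below):

```
disk space for index group = elasticsearchDiskSpaceMB * diskSpaceWeight
                             / (sum of diskSpaceWeight over all index groups)

disk space per index       = disk space for index group / maxIndicesRetained
```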
For example, with elasticsearchDiskSpaceMB = 300000, disk space would be allocated to the index groups and their indices as follows:
| Index group configuration | Disk space for index group | Disk space per index |
| --- | --- | --- |
| kafkaMetricsToES.elasticsearch.index { diskSpaceWeight = 0, maxIndicesRetained = 20 } | 0MB | 0MB |
| kafkaMultiMetricsToES.elasticsearch.index { diskSpaceWeight = 1, maxIndicesRetained = 20 } | 20000MB (or 300000*1/15) | 1000MB (or 20000/20) |
| kafkaGenericEventsToES.elasticsearch.index { diskSpaceWeight = 2, maxIndicesRetained = 20 } | 40000MB (or 300000*2/15) | 2000MB (or 40000/20) |
| kafkaTopologyEventsToES.elasticsearch.index { diskSpaceWeight = 3, maxIndicesRetained = 20 } | 60000MB (or 300000*3/15) | 3000MB (or 60000/20) |
| kafkaStateEventsToES.elasticsearch.index { diskSpaceWeight = 4, maxIndicesRetained = 20 } | 80000MB (or 300000*4/15) | 4000MB (or 80000/20) |
| kafkaStsEventsToES.elasticsearch.index { diskSpaceWeight = 5, maxIndicesRetained = 20 } | 100000MB (or 300000*5/15) | 5000MB (or 100000/20) |
| kafkaTraceToES.elasticsearch.index { diskSpaceWeight = 0, maxIndicesRetained = 20 } | 0MB | 0MB |
External data store
If you have configured your own data source to be accessed by StackState, the retention policy is determined by the metric/event store that you have connected.