Migrate from Linux install

Overview

This document describes how to migrate data from the Linux install of StackState to the Kubernetes install.
To execute this procedure, the Kubernetes installation of StackState must be v4.2.5 or higher.

High level steps

To migrate from the Linux install to the Kubernetes install of StackState, the following high-level steps need to be performed:

1. Install StackState on Kubernetes.
2. Migrate StackState configuration and topology data (StackGraph) from the Linux install to the Kubernetes install.
3. Migrate telemetry data (Elasticsearch) from the Linux install to the Kubernetes install.

Incoming data from agents (Kafka) and node synchronisation data (Zookeeper) will not be copied.

After the migration:

1. Run both instances of StackState side by side for a number of days to ensure that the new instance runs correctly.
2. Stop the Linux install of StackState.
3. Remove the Linux install of StackState.

Step 2 - Migrate StackState configuration and topology data (StackGraph)

Prerequisites

Before you start the migration procedure, make sure you have the following information and tools available:

Export StackGraph data

To export the StackGraph data, execute the regular StackState Linux backup procedure as described below.

1. Ensure that the StackGraph node is up and running.
2. Log in to the StackState node as user root.
3. Stop the StackState service:

   ```shell
   systemctl stop stackstate.service
   ```

4. Create a directory to store the exported data:

   ```shell
   sudo -u stackstate mkdir -p /opt/stackstate/migration
   ```

5. Export the StackGraph data by creating a backup:

   ```shell
   sudo -u stackstate /opt/stackstate/bin/sts-standalone.sh export \
       --file /opt/stackstate/migration/sts-export.graph --graph default
   ```

6. Copy the file /opt/stackstate/migration/sts-export.graph to a safe location.
7. Start the StackState service:

   ```shell
   systemctl start stackstate.service
   ```
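For convenience, the export steps above can be wrapped in a small script. This is only a sketch: the commands are exactly those from the procedure, collected into a function to be run as root on the Linux StackState node.

```shell
#!/bin/sh
# Sketch: run the StackGraph export steps in one go, as root on the
# Linux StackState node. Call export_stackgraph to execute.
set -e

export_stackgraph() {
  # Stop StackState so the graph data is consistent during the export
  systemctl stop stackstate.service
  # Create the export directory as the stackstate user
  sudo -u stackstate mkdir -p /opt/stackstate/migration
  # Write the backup file
  sudo -u stackstate /opt/stackstate/bin/sts-standalone.sh export \
      --file /opt/stackstate/migration/sts-export.graph --graph default
  # Bring StackState back up
  systemctl start stackstate.service
}
```

Afterwards, copy /opt/stackstate/migration/sts-export.graph to a safe location, as in step 6.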

Import StackGraph data

To import the StackGraph data into the Kubernetes installation, use the same MinIO (min.io) component that is used for the backup/restore functionality.
Note that StackState's automatic Kubernetes backup functionality should not be enabled until after the migration procedure has completed.
1. Enable the MinIO component by adding the following YAML fragment to the values.yaml file that is used to install StackState:

   ```yaml
   backup:
     enabled: true
     stackGraph:
       scheduled:
         enabled: false
     elasticsearch:
       restore:
         enabled: false
       scheduled:
         enabled: false
   minio:
     accessKey: MINIO_ACCESS_KEY
     secretKey: MINIO_SECRET_KEY
     persistence:
       enabled: true
   ```

   Include the credentials to access the MinIO instance:
   * Replace MINIO_ACCESS_KEY with 5 to 20 alphanumerical characters.
   * Replace MINIO_SECRET_KEY with 8 to 40 alphanumerical characters.

   The Helm values `backup.stackGraph.scheduled.enabled`, `backup.elasticsearch.restore.enabled` and `backup.elasticsearch.scheduled.enabled` are set to `false` to prevent scheduled backups from overwriting the backups that will be uploaded to MinIO.
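One way to generate credentials that satisfy these length constraints is with `openssl rand`; this helper is an illustration, not part of the official procedure. `openssl rand -hex N` prints 2×N hexadecimal characters, so the two calls below produce 20- and 40-character alphanumeric keys.

```shell
# Generate alphanumeric MinIO credentials of valid length.
# -hex 10 prints 20 characters (within 5-20 for the access key);
# -hex 20 prints 40 characters (within 8-40 for the secret key).
MINIO_ACCESS_KEY=$(openssl rand -hex 10)
MINIO_SECRET_KEY=$(openssl rand -hex 20)
echo "accessKey: ${MINIO_ACCESS_KEY}"
echo "secretKey: ${MINIO_SECRET_KEY}"
```

Paste the generated values into the values.yaml fragment in place of MINIO_ACCESS_KEY and MINIO_SECRET_KEY.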
2. Run the appropriate helm upgrade command for your installation to enable MinIO.
3. Start a port-forward to the MinIO service in your StackState instance:

   ```shell
   kubectl port-forward service/stackstate-minio 9000:9000
   ```

4. In a new terminal window, configure the MinIO client to connect to that MinIO service, using the access key and secret key that were configured in the values.yaml file:

   ```shell
   mc alias set minio-backup http://localhost:9000 ke9Dm7eFhk9kP53rXlUI mNOWCpoYrhwati7QcOrEwnI7Mtcf0jxg2JzNOMk6
   ```

5. Verify that access has been configured correctly:

   ```shell
   mc ls minio-backup
   ```

   The output should be empty, as no buckets have been created yet.
   If the output is not empty, the automatic backup functionality has been enabled. Disable the automatic backup functionality and configure MinIO as described above (i.e. not as a gateway to AWS S3 or Azure Blob Storage and without any local storage).
6. Create the bucket that is used to store StackGraph backups:

   ```shell
   mc mb minio-backup/sts-stackgraph-backup
   ```

   The output should look like this:

   ```text
   Bucket created successfully `minio-backup/sts-stackgraph-backup`.
   ```

7. Upload the backup file that was created when the StackGraph data was exported from the Linux install:

   ```shell
   mc cp sts-export.graph minio-backup/sts-stackgraph-backup/
   ```

   The output should look like this:

   ```text
   sts-export.graph: 15.22 KiB / 15.22 KiB ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓ 42.61 KiB/s 0s
   ```

8. Verify that the backup file was uploaded to the correct location:

   ```shell
   ./restore/list-stackgraph-backups.sh
   ```

   The output should look like this:

   ```text
   job.batch/stackgraph-list-backups-20210222t122522 created
   Waiting for job to start...
   Waiting for job to start...
   === Listing StackGraph backups in bucket "sts-stackgraph-backup"...
   sts-export.graph
   ===
   job.batch "stackgraph-list-backups-20210222t122522" deleted
   ```

   Most importantly, the backup file uploaded in the previous step should be listed here.
9. Restore the backup:

   ```shell
   ./restore/restore-stackgraph-backup.sh sts-export.graph
   ```

   The output should look like this:

   ```text
   job.batch/stackgraph-restore-20210222t171035 created
   Waiting for job to start...
   Waiting for job to start...
   === Downloading StackGraph backup "sts-export.graph" from bucket "sts-stackgraph-backup"...
   download: s3://sts-stackgraph-backup/sts-export.graph to ../../tmp/sts-export.graph
   === Importing StackGraph data from "sts-export.graph"...
   WARNING: An illegal reflective access operation has occurred
   WARNING: Illegal reflective access by org.codehaus.groovy.vmplugin.v7.Java7$1 (file:/opt/docker/lib/org.codehaus.groovy.groovy-2.5.4.jar) to constructor java.lang.invoke.MethodHandles$Lookup(java.lang.Class,int)
   WARNING: Please consider reporting this to the maintainers of org.codehaus.groovy.vmplugin.v7.Java7$1
   WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
   WARNING: All illegal access operations will be denied in a future release
   ===
   job.batch "stackgraph-restore-20210222t171035" deleted
   ```

10. Remove the YAML snippet added in step 1 and run the appropriate helm upgrade command for your installation to disable MinIO.

Step 3 - Migrate telemetry data (Elasticsearch)

To migrate Elasticsearch data from the Linux install to the Kubernetes install, use the reindex from remote (elastic.co) functionality.

Notes:

* To access the Elasticsearch instance that runs as part of the Kubernetes installation of StackState, execute the following command:

  ```shell
  kubectl port-forward service/stackstate-elasticsearch-master 9200:9200
  ```

  and access it on http://localhost:9200.
* To modify the elasticsearch.yml configuration file, use the Helm chart value stackstate.elasticsearch.esConfig. For example:

  ```yaml
  stackstate:
    elasticsearch:
      esConfig:
        elasticsearch.yml: |
          reindex.remote.whitelist: oldhost:9200
  ```
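With the whitelist in place and the port-forward running, each index can be copied with a reindex-from-remote request against the Kubernetes Elasticsearch. The sketch below shows the shape of such a request; the index name `sts_internal_events` is purely illustrative, and `oldhost:9200` must match the `reindex.remote.whitelist` entry.

```shell
# Body of a reindex-from-remote request. The source host must appear in
# reindex.remote.whitelist; the index name here is an example only.
REINDEX_BODY='{
  "source": {
    "remote": { "host": "http://oldhost:9200" },
    "index": "sts_internal_events"
  },
  "dest": { "index": "sts_internal_events" }
}'

# Issue the request via the port-forward (uncomment to run):
# curl -s -X POST "http://localhost:9200/_reindex" \
#   -H 'Content-Type: application/json' -d "$REINDEX_BODY"
```

Repeat per index to migrate, and verify document counts on the destination afterwards.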
