Kaspersky Unified Monitoring and Analysis Platform

Migrating the KUMA Core to a new Kubernetes cluster

April 8, 2024

ID 244734

To migrate KUMA Core to a new Kubernetes cluster:

  1. Prepare the k0s.inventory.yml inventory file.

    The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 or when a new installation was performed. In the inventory file, set the deploy_to_k8s, need_transfer, and airgap parameters to true. The deploy_example_services parameter must be set to false.

    A typical layout is an inventory file with 3 dedicated controllers, 2 worker nodes, and 1 balancer; a sketch of such a file follows these steps.

  2. Follow the steps for distributed installation using your prepared k0s.inventory.yml inventory file.
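
For orientation, here is a minimal sketch of such an inventory file for the layout mentioned in step 1. The host names are placeholders, and the group names under kuma_k0s are assumptions; the exact structure must follow the inventory template shipped with your KUMA version.

all:
  vars:
    ansible_connection: ssh
    ansible_user: root
    deploy_to_k8s: true
    need_transfer: true
    airgap: true
    deploy_example_services: false
  children:
    kuma:
      children:
        kuma_core:
          hosts:
            kuma-core-1.example.com:        # same host as in the 2.1.3 -> 3.0.3 upgrade
        kuma_collector:
          hosts:
            kuma-collector-1.example.com:
        kuma_correlator:
          hosts:
            kuma-correlator-1.example.com:
        kuma_storage:
          hosts:
            kuma-storage-1.example.com:
    kuma_k0s:
      children:
        kuma_lb:                            # 1 balancer (group name is an assumption)
          hosts:
            kuma-lb-1.example.com:
        kuma_control_plane:                 # 3 dedicated controllers
          hosts:
            kuma-cp-1.example.com:
            kuma-cp-2.example.com:
            kuma-cp-3.example.com:
        kuma_worker:                        # 2 worker nodes
          hosts:
            kuma-worker-1.example.com:
            kuma-worker-2.example.com:

The installation itself is then started with the installer from the distribution kit, for example: sudo ./install.sh k0s.inventory.yml (see the distributed installation procedure for the exact command).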


When started with this inventory file, the installer looks for an installed KUMA Core on every host where a worker node of the cluster is to be deployed. If a Core is found, it is moved from its host into the newly created Kubernetes cluster.

If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating any resources. In that case, existing components must be manually rebuilt with the new Core in the KUMA web interface.

Certificates for collectors, correlators, and storages are reissued based on the inventory file so that these components can communicate with the Core inside the cluster. The Core URL for the components does not change.

On the Core host, the installer does the following:

  • Removes the following systemd services from the host: kuma-core, kuma-mongodb, kuma-victoria-metrics, kuma-vmalert, and kuma-grafana.
  • Deletes the internal certificate of the Core.
  • Deletes the certificate files of all other components and deletes their records from MongoDB.
  • Deletes the following directories:
    • /opt/kaspersky/kuma/core/bin
    • /opt/kaspersky/kuma/core/certificates
    • /opt/kaspersky/kuma/core/log
    • /opt/kaspersky/kuma/core/logs
    • /opt/kaspersky/kuma/grafana/bin
    • /opt/kaspersky/kuma/mongodb/bin
    • /opt/kaspersky/kuma/mongodb/log
    • /opt/kaspersky/kuma/victoria-metrics/bin
  • Migrates data from the Core and its dependencies to a network drive within the Kubernetes cluster.
  • Renames the following directories on the Core host:
    • /opt/kaspersky/kuma/core → /opt/kaspersky/kuma/core.moved
    • /opt/kaspersky/kuma/grafana → /opt/kaspersky/kuma/grafana.moved
    • /opt/kaspersky/kuma/mongodb → /opt/kaspersky/kuma/mongodb.moved
    • /opt/kaspersky/kuma/victoria-metrics → /opt/kaspersky/kuma/victoria-metrics.moved

After you have verified that the Core was correctly migrated to the cluster, these directories can be deleted.
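
As a sketch, assuming kubectl access to the new cluster (the kuma namespace is named in this article), the check and cleanup could look like this:

# Confirm that the Core is running inside the cluster.
kubectl -n kuma get pods

# Only after confirming that the Core works, delete the leftover directories on the old Core host.
sudo rm -rf /opt/kaspersky/kuma/core.moved \
            /opt/kaspersky/kuma/grafana.moved \
            /opt/kaspersky/kuma/mongodb.moved \
            /opt/kaspersky/kuma/victoria-metrics.moved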

If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
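
For example, assuming the migration task runs as a Kubernetes job named core-transfer (the resource type is an assumption; the task name and namespace come from this article), the logs could be read with:

kubectl -n kuma get jobs
kubectl -n kuma logs job/core-transfer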

If you need to perform the migration again, rename the /opt/kaspersky/kuma/*.moved directories back to their original names.
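
This reverses the moves listed above, for example:

sudo mv /opt/kaspersky/kuma/core.moved /opt/kaspersky/kuma/core
sudo mv /opt/kaspersky/kuma/grafana.moved /opt/kaspersky/kuma/grafana
sudo mv /opt/kaspersky/kuma/mongodb.moved /opt/kaspersky/kuma/mongodb
sudo mv /opt/kaspersky/kuma/victoria-metrics.moved /opt/kaspersky/kuma/victoria-metrics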

If the /etc/hosts file on the Core host contained lines unrelated to addresses in the 127.X.X.X range, those contents are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
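
To see what was entered, the CoreDNS ConfigMap can be inspected with standard Kubernetes tooling; the coredns ConfigMap in the kube-system namespace is the Kubernetes default and is an assumption here:

kubectl -n kube-system get configmap coredns -o yaml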

See also:

Distributed installation in a high availability configuration
