Migrating the KUMA Core to a new Kubernetes cluster
To migrate KUMA Core to a new Kubernetes cluster:
- Prepare the k0s.inventory.yml inventory file.
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set the deploy_to_k8s and need_transfer parameters to true, and set the deploy_example_services parameter to false (see the example fragment after these steps).
- Follow the steps for distributed installation using your prepared k0s.inventory.yml inventory file.
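For reference, below is a minimal sketch of the relevant fragment of the k0s.inventory.yml file. The host names are hypothetical, the cluster node sections are omitted, and the exact layout can differ between KUMA versions, so check it against the inventory template supplied with your installer:
all:
  vars:
    deploy_to_k8s: true
    need_transfer: true
    deploy_example_services: false
  children:
    kuma_core:
      hosts:
        kuma-core.example.com:
    kuma_collector:
      hosts:
        kuma-collector.example.com:
    kuma_correlator:
      hosts:
        kuma-correlator.example.com:
    kuma_storage:
      hosts:
        kuma-storage.example.com:
With the file prepared, start the distributed installation as usual, for example:
sudo ./install.sh k0s.inventory.yml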
When the installer is started with this inventory file, it looks for an installed KUMA Core on all hosts where the worker nodes of the cluster are to be deployed. If a Core is found, it is moved from its host into the newly created Kubernetes cluster.
Troubleshooting the KUMA Core migration error
Migration of the KUMA Core from a host to a new Kubernetes cluster may be interrupted by a timeout at the Deploy Core transfer job step. When this happens, the following error message is recorded in the log of the core-transfer migration task:
cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory
This error occurs because the template looks for the .lic and .tenantsEPS files directly in /mnt/kuma-source/core/, while they are located in the {{ core_uid }} subdirectory. To prevent this error, before you start migrating the KUMA Core:
- Go to the directory with the extracted installer and open the roles/k0s_prepare/templates/core-transfer-job.yaml.j2 file for editing.
- In the core-transfer-job.yaml.j2 file, find the following lines:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
- Edit these lines as follows, making sure you keep the indentation (number of spaces); a non-interactive alternative is sketched after these steps:
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
- Save the changes to the file.
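If you prefer to make this edit non-interactively, a substitution along the following lines should produce the same result (a sketch; review the file afterwards to make sure the indentation is intact):
# Prepend {{ core_uid }}/ to the two copied paths in the transfer job template
sed -i 's|core/\.lic|core/{{ core_uid }}/.lic|; s|core/\.tenantsEPS|core/{{ core_uid }}/.tenantsEPS|' roles/k0s_prepare/templates/core-transfer-job.yaml.j2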
You can then start the distributed installation using the prepared k0s.inventory.yml inventory file. The migration of the KUMA Core from the host to the new Kubernetes cluster will then complete successfully.
If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed, follow the steps below to troubleshoot the error.
To troubleshoot the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:
- On any controller of the cluster, delete the ingress DaemonSet by running the following command:
sudo k0s kubectl delete daemonset/ingress -n ingress
- Check if the migration job exists in the cluster:
sudo k0s kubectl get jobs -n kuma
- If the migration job exists, delete it:
sudo k0s kubectl delete job core-transfer -n kuma
- Go to the console of a host from the kuma_core group.
- Start the KUMA Core services by running the following commands:
sudo systemctl start kuma-mongodb
sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000
- Make sure that the kuma-core-00000000-0000-0000-0000-000000000000 service has started successfully:
sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000
- Make sure that the KUMA web interface is accessible at the FQDN of the host from the kuma_core group; a quick check is sketched after this step.
The other hosts may remain stopped.
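To quickly confirm that the web interface responds, you can query it from the command line. The check below assumes the default KUMA web console port 7220 and uses a hypothetical FQDN; the -k option skips certificate verification because the Core uses its internal certificate:
curl -k https://kuma-core.example.com:7220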
- Go to the directory with the extracted installer and open the roles/k0s_prepare/templates/core-transfer-job.yaml.j2 file for editing.
- In the core-transfer-job.yaml.j2 file, find the following lines:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
- Edit these lines as follows, making sure you keep the indentation (number of spaces):
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
- Save the changes to the file.
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. The migration of the KUMA Core from the host to the new Kubernetes cluster will then complete successfully.
If no installed KUMA Core is detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. In that case, existing components must be rebuilt manually with the new Core in the KUMA web interface.
Certificates for collectors, correlators, and storages are re-issued based on the inventory file for communication with the Core inside the cluster. The Core URL for the components does not change.
On the Core host, the installer does the following:
- Removes the following systemd services from the host: kuma-core, kuma-mongodb, kuma-victoria-metrics, kuma-vmalert, and kuma-grafana.
- Deletes the internal certificate of the Core.
- Deletes the certificate files of all other components and deletes their records from MongoDB.
- Deletes the following directories:
- /opt/kaspersky/kuma/core/bin
- /opt/kaspersky/kuma/core/certificates
- /opt/kaspersky/kuma/core/log
- /opt/kaspersky/kuma/core/logs
- /opt/kaspersky/kuma/grafana/bin
- /opt/kaspersky/kuma/mongodb/bin
- /opt/kaspersky/kuma/mongodb/log
- /opt/kaspersky/kuma/victoria-metrics/bin
- Migrates data from the Core and its dependencies to a network drive within the Kubernetes cluster.
- On the Core host, it renames the following directories:
- /opt/kaspersky/kuma/core → /opt/kaspersky/kuma/core.moved
- /opt/kaspersky/kuma/grafana → /opt/kaspersky/kuma/grafana.moved
- /opt/kaspersky/kuma/mongodb → /opt/kaspersky/kuma/mongodb.moved
- /opt/kaspersky/kuma/victoria-metrics → /opt/kaspersky/kuma/victoria-metrics.moved
After you have verified that the Core was correctly migrated to the cluster, these directories can be deleted.
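For example, after verification the leftover directories could be removed as follows (a sketch; run this only once you are sure the migrated Core works correctly):
sudo rm -rf /opt/kaspersky/kuma/core.moved
sudo rm -rf /opt/kaspersky/kuma/grafana.moved
sudo rm -rf /opt/kaspersky/kuma/mongodb.moved
sudo rm -rf /opt/kaspersky/kuma/victoria-metrics.moved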
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
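For example, on a cluster controller the log of the migration task can be read with a command along these lines (the job name and namespace match those used earlier in this section):
sudo k0s kubectl logs job/core-transfer -n kuma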
If you need to perform the migration again, you must rename the /opt/kaspersky/kuma/*.moved directories back to their original names, as sketched below.
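A sketch of renaming the directories back before repeating the migration:
sudo mv /opt/kaspersky/kuma/core.moved /opt/kaspersky/kuma/core
sudo mv /opt/kaspersky/kuma/grafana.moved /opt/kaspersky/kuma/grafana
sudo mv /opt/kaspersky/kuma/mongodb.moved /opt/kaspersky/kuma/mongodb
sudo mv /opt/kaspersky/kuma/victoria-metrics.moved /opt/kaspersky/kuma/victoria-metrics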
If the /etc/hosts file on the Core host contained lines unrelated to addresses in the 127.X.X.X range, the contents of that file are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap instead.
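To review what was written to the ConfigMap, you can print it on a cluster controller; the command below assumes the default CoreDNS ConfigMap name and namespace used by k0s:
sudo k0s kubectl get configmap coredns -n kube-system -o yaml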