The update is performed the same way on all hosts using the installer and inventory file.
Version upgrade scheme:
2.0.x → 2.1.3 → 3.0.3 → 3.2.x
2.1.x → 2.1.3 → 3.0.3 → 3.2.x
2.1.3 → 3.0.3 → 3.2.x
3.0.x → 3.0.3 → 3.2.x
Upgrading from version 2.0.x to 2.1.3
To install KUMA version 2.1.3 over version 2.0.x, complete the preliminary steps and then update.
Preliminary steps
KUMA backups created in versions 2.0 and earlier cannot be restored in version 2.1.3. This means that you cannot install KUMA 2.1.3 from scratch and restore a KUMA 2.0 backup in it.
Create a backup copy immediately after upgrading KUMA to version 2.1.3.
Before the upgrade, make sure that the featureCompatibilityVersion value of the MongoDB component is 4.4. To check the value, run the following commands on the KUMA Core host:
cd /opt/kaspersky/kuma/mongodb/bin/
./mongo
use kuma
db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1})
If the returned featureCompatibilityVersion value is different from 4.4, set it to 4.4 using the following command:
db.adminCommand({ setFeatureCompatibilityVersion: "4.4" })
In the inventory file, set the following parameters:
deploy_to_k8s: false
need_transfer: false
deploy_example_services: false
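For reference, the relevant fragment of such an inventory file might look like this. This is a minimal illustrative sketch only: the host names are placeholders, and your inventory template may use different group nesting and contain additional parameters.
all:
  vars:
    deploy_to_k8s: false
    need_transfer: false
    deploy_example_services: false
  children:
    kuma_core:
      hosts:
        kuma-core-1.example.com:        # placeholder host name
    kuma_collector:
      hosts:
        kuma-collector-1.example.com:
    kuma_correlator:
      hosts:
        kuma-correlator-1.example.com:
    kuma_storage:
      hosts:
        kuma-storage-1.example.com: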
When the installer uses this inventory file, all KUMA components are upgraded to version 2.1.3. The available services and storage resources on hosts from the kuma_storage group are also reconfigured.
Updating KUMA
If an inventory file is not available for the current version, use the provided inventory file template and fill in the corresponding settings. To view a list of hosts and host roles in the current KUMA system, go to the Resources → Active services section in the web interface.
The upgrade process completely reproduces the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster.
To migrate KUMA Core to a new Kubernetes cluster:
The kuma_core, kuma_collector, kuma_correlator, kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set the deploy_to_k8s and need_transfer parameters to true. The deploy_example_services parameter must be set to false.
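For reference, the variable section of such a k0s.inventory.yml file might look like this. This is a minimal illustrative sketch only: the host groups (kuma_core, kuma_collector, kuma_correlator, kuma_storage and the Kubernetes cluster node groups) are omitted, and your template may contain additional parameters.
all:
  vars:
    deploy_to_k8s: true        # deploy the KUMA Core in a Kubernetes cluster
    need_transfer: true        # migrate the existing Core from its host into the cluster
    deploy_example_services: false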
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with this inventory file, it looks for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host into the newly created Kubernetes cluster.
Troubleshooting the KUMA Core migration error
Migration of the KUMA Core from a host to a new Kubernetes cluster may be interrupted due to a timeout at the Deploy Core transfer job step. In this case, the following error message is recorded in the log of the core-transfer migration tasks:
cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory
To prevent this error, before you start migrating the KUMA Core, find the following lines in the core-transfer job template used by the installer:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
and replace them with the following lines, which copy the same files from the {{ core_uid }} subdirectory:
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migrating the KUMA Core from a host to a new Kubernetes cluster will succeed.
If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed, follow the steps below to troubleshoot the error.
To troubleshoot the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:
1. Delete the ingress daemonset:
sudo k0s kubectl delete daemonset/ingress -n ingress
2. View the jobs in the kuma namespace and make sure that the core-transfer job is present:
sudo k0s kubectl get jobs -n kuma
3. Delete the core-transfer job:
sudo k0s kubectl delete job core-transfer -n kuma
4. On the Core host, start the MongoDB service and the KUMA Core service (substitute the ID of your Core service for the zeros):
sudo systemctl start kuma-mongodb
sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000
5. Make sure that the KUMA Core service has started successfully:
sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000
Other hosts may be stopped.
6. In the core-transfer job template, replace the following lines:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
with the following lines:
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
7. Restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migration of the KUMA Core from the host to the new Kubernetes cluster will then succeed.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually rebuilt with the new Core in the KUMA web interface.
Certificates for collectors, correlators and storages will be re-issued from the inventory file for communication with the Core within the cluster. This does not change the Core URL for components.
On the Core host, the installer renames the directories of the migrated components by adding the .moved suffix to their names (/opt/kaspersky/kuma/*.moved).
After you have verified that the Core was correctly migrated to the cluster, these directories can be deleted.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must convert the names of the /opt/kaspersky/kuma/*.moved directories back to their original format.
If an /etc/hosts file with entries not related to addresses in the 127.X.X.X range was used on the Core host, the contents of that /etc/hosts file are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
The final stage of preparing KUMA for work
KUMA update completed successfully.
Upgrading from version 2.1.x to 2.1.3
To install KUMA version 2.1.3 over version 2.1.x, complete the preliminary steps and then update.
Preliminary steps
KUMA backups created in versions earlier than 2.1.3 cannot be restored in version 2.1.3. This means that you cannot install KUMA 2.1.3 from scratch and restore a KUMA 2.1.x backup in it.
Create a backup copy immediately after upgrading KUMA to version 2.1.3.
Updating KUMA
If an inventory file is not available for the current version, use the provided inventory file template and fill in the corresponding settings. To view a list of hosts and host roles in the current KUMA system, go to the Resources → Active services section in the web interface.
The upgrade process completely reproduces the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster.
To migrate KUMA Core to a new Kubernetes cluster:
The kuma_core, kuma_collector, kuma_correlator, kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set the deploy_to_k8s and need_transfer parameters to true. The deploy_example_services parameter must be set to false.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with this inventory file, it looks for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host into the newly created Kubernetes cluster.
Troubleshooting the KUMA Core migration error
Migration of the KUMA Core from a host to a new Kubernetes cluster may be interrupted due to a timeout at the Deploy Core transfer job step. In this case, the following error message is recorded in the log of the core-transfer migration tasks:
cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory
To prevent this error, before you start migrating the KUMA Core, find the following lines in the core-transfer job template used by the installer:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
and replace them with the following lines, which copy the same files from the {{ core_uid }} subdirectory:
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migrating the KUMA Core from a host to a new Kubernetes cluster will succeed.
If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed, follow the steps below to troubleshoot the error.
To troubleshoot the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:
1. Delete the ingress daemonset:
sudo k0s kubectl delete daemonset/ingress -n ingress
2. View the jobs in the kuma namespace and make sure that the core-transfer job is present:
sudo k0s kubectl get jobs -n kuma
3. Delete the core-transfer job:
sudo k0s kubectl delete job core-transfer -n kuma
4. On the Core host, start the MongoDB service and the KUMA Core service (substitute the ID of your Core service for the zeros):
sudo systemctl start kuma-mongodb
sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000
5. Make sure that the KUMA Core service has started successfully:
sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000
Other hosts may be stopped.
6. In the core-transfer job template, replace the following lines:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
with the following lines:
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
7. Restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migration of the KUMA Core from the host to the new Kubernetes cluster will then succeed.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually rebuilt with the new Core in the KUMA web interface.
Certificates for collectors, correlators and storages will be re-issued from the inventory file for communication with the Core within the cluster. This does not change the Core URL for components.
On the Core host, the installer renames the directories of the migrated components by adding the .moved suffix to their names (/opt/kaspersky/kuma/*.moved).
After you have verified that the Core was correctly migrated to the cluster, these directories can be deleted.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must convert the names of the /opt/kaspersky/kuma/*.moved directories back to their original format.
If an /etc/hosts file with entries not related to addresses in the 127.X.X.X range was used on the Core host, the contents of that /etc/hosts file are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
The final stage of preparing KUMA for work
KUMA update completed successfully.
Upgrading from version 2.1.3 to 3.0.3
To install KUMA version 3.0.3 over version 2.1.3, complete the preliminary steps and then perform the upgrade.
Preliminary steps
KUMA backups created in versions 2.1.3 and earlier cannot be restored in version 3.0.3. This means that you cannot install KUMA 3.0.3 from scratch and restore a KUMA 2.1.3 backup in it.
Create a backup copy immediately after upgrading KUMA to version 3.0.3.
Updating KUMA
Depending on the KUMA deployment scheme you are using, do one of the following:
If an inventory file is not available for the current version, use the provided inventory file template and fill in the corresponding settings. To view a list of hosts and host roles in the current KUMA system, go to the Resources → Active services section in the web interface.
The upgrade process completely reproduces the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster.
To migrate KUMA Core to a new Kubernetes cluster:
The kuma_core, kuma_collector, kuma_correlator, kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set the deploy_to_k8s and need_transfer parameters to true. The deploy_example_services parameter must be set to false.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with this inventory file, it looks for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host into the newly created Kubernetes cluster.
Troubleshooting the KUMA Core migration error
Migration of the KUMA Core from a host to a new Kubernetes cluster may be interrupted due to a timeout at the Deploy Core transfer job step. In this case, the following error message is recorded in the log of the core-transfer migration tasks:
cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory
To prevent this error, before you start migrating the KUMA Core, find the following lines in the core-transfer job template used by the installer:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
and replace them with the following lines, which copy the same files from the {{ core_uid }} subdirectory:
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migrating the KUMA Core from a host to a new Kubernetes cluster will succeed.
If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed, follow the steps below to troubleshoot the error.
To troubleshoot the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:
1. Delete the ingress daemonset:
sudo k0s kubectl delete daemonset/ingress -n ingress
2. View the jobs in the kuma namespace and make sure that the core-transfer job is present:
sudo k0s kubectl get jobs -n kuma
3. Delete the core-transfer job:
sudo k0s kubectl delete job core-transfer -n kuma
4. On the Core host, start the MongoDB service and the KUMA Core service (substitute the ID of your Core service for the zeros):
sudo systemctl start kuma-mongodb
sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000
5. Make sure that the KUMA Core service has started successfully:
sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000
Other hosts may be stopped.
6. In the core-transfer job template, replace the following lines:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
with the following lines:
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
7. Restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migration of the KUMA Core from the host to the new Kubernetes cluster will then succeed.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually rebuilt with the new Core in the KUMA web interface.
Certificates for collectors, correlators and storages will be re-issued from the inventory file for communication with the Core within the cluster. This does not change the Core URL for components.
On the Core host, the installer renames the directories of the migrated components by adding the .moved suffix to their names (/opt/kaspersky/kuma/*.moved).
After you have verified that the Core was correctly migrated to the cluster, these directories can be deleted.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must convert the names of the /opt/kaspersky/kuma/*.moved directories back to their original format.
If an /etc/hosts file with entries not related to addresses in the 127.X.X.X range was used on the Core host, the contents of that /etc/hosts file are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
The final stage of preparing KUMA for work
KUMA update completed successfully.
Known limitations
For existing users, upgrading to version 3.0.3 does not update the universal dashboard layout.
Possible solution: restart the Core service (kuma-core.service), and the data refresh interval configured for the layout will be used.
Upgrading from version 3.0.x to 3.0.3
To install KUMA version 3.0.3 over version 3.0.x, complete the preliminary steps and then perform the upgrade.
Preliminary steps
KUMA backups created in versions earlier than 3.0.3 cannot be restored in version 3.0.3. This means that you cannot install KUMA 3.0.3 from scratch and restore a KUMA 3.0.x backup in it.
Create a backup copy immediately after upgrading KUMA to version 3.0.3.
Updating KUMA
Depending on the KUMA deployment scheme you are using, do one of the following:
If an inventory file is not available for the current version, use the provided inventory file template and fill in the corresponding settings. To view a list of hosts and host roles in the current KUMA system, go to the Resources → Active services section in the web interface.
The upgrade process completely reproduces the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster.
To migrate KUMA Core to a new Kubernetes cluster:
The kuma_core, kuma_collector, kuma_correlator, kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set the deploy_to_k8s and need_transfer parameters to true. The deploy_example_services parameter must be set to false.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with this inventory file, it looks for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host into the newly created Kubernetes cluster.
Troubleshooting the KUMA Core migration error
Migration of the KUMA Core from a host to a new Kubernetes cluster may be interrupted due to a timeout at the Deploy Core transfer job step. In this case, the following error message is recorded in the log of the core-transfer migration tasks:
cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory
To prevent this error, before you start migrating the KUMA Core, find the following lines in the core-transfer job template used by the installer:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
and replace them with the following lines, which copy the same files from the {{ core_uid }} subdirectory:
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migrating the KUMA Core from a host to a new Kubernetes cluster will succeed.
If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed, follow the steps below to troubleshoot the error.
To troubleshoot the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:
1. Delete the ingress daemonset:
sudo k0s kubectl delete daemonset/ingress -n ingress
2. View the jobs in the kuma namespace and make sure that the core-transfer job is present:
sudo k0s kubectl get jobs -n kuma
3. Delete the core-transfer job:
sudo k0s kubectl delete job core-transfer -n kuma
4. On the Core host, start the MongoDB service and the KUMA Core service (substitute the ID of your Core service for the zeros):
sudo systemctl start kuma-mongodb
sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000
5. Make sure that the KUMA Core service has started successfully:
sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000
Other hosts may be stopped.
6. In the core-transfer job template, replace the following lines:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
with the following lines:
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
7. Restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migration of the KUMA Core from the host to the new Kubernetes cluster will then succeed.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually rebuilt with the new Core in the KUMA web interface.
Certificates for collectors, correlators and storages will be re-issued from the inventory file for communication with the Core within the cluster. This does not change the Core URL for components.
On the Core host, the installer renames the directories of the migrated components by adding the .moved suffix to their names (/opt/kaspersky/kuma/*.moved).
After you have verified that the Core was correctly migrated to the cluster, these directories can be deleted.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must convert the names of the /opt/kaspersky/kuma/*.moved directories back to their original format.
If an /etc/hosts file with entries not related to addresses in the 127.X.X.X range was used on the Core host, the contents of that /etc/hosts file are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
The final stage of preparing KUMA for work
KUMA update completed successfully.
Known limitations
For existing users, upgrading from 3.0.x to 3.0.3 does not update the universal dashboard layout.
Possible solution: restart the Core service (kuma-core.service), and the data refresh interval configured for the layout will be used.
Upgrading from version 3.0.3 to 3.2.x
To install KUMA version 3.2.x over version 3.0.3, complete the preliminary steps and then perform the upgrade.
Preliminary steps
KUMA backups created in versions 3.0.3 and earlier cannot be restored in version 3.2.x. This means that you cannot install KUMA 3.2.x from scratch and restore a KUMA 3.0.3 backup in it.
Create a backup copy immediately after upgrading KUMA to version 3.2.x.
Updating KUMA
Depending on the KUMA deployment scheme you are using, do one of the following:
If an inventory file is not available for the current version, use the provided inventory file template and fill in the corresponding settings. To view a list of hosts and host roles in the current KUMA system, go to the Resources → Active services section in the web interface.
The upgrade process completely reproduces the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster. For further updates, use the k0s.inventory.yml inventory file with the need_transfer parameter set to false, because the KUMA Core has already been migrated to the Kubernetes cluster and does not need to be migrated again.
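For reference, for such subsequent updates the variable section of the k0s.inventory.yml file might look like this. This is a minimal illustrative sketch only: the host groups and cluster node groups from your existing file are omitted here and stay unchanged.
all:
  vars:
    deploy_to_k8s: true        # the Core continues to run in the Kubernetes cluster
    need_transfer: false       # the Core has already been migrated, so no transfer is needed
    deploy_example_services: false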
To migrate KUMA Core to a new Kubernetes cluster:
The kuma_core, kuma_collector, kuma_correlator, kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set the deploy_to_k8s and need_transfer parameters to true. The deploy_example_services parameter must be set to false.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with this inventory file, it looks for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host into the newly created Kubernetes cluster.
Troubleshooting the KUMA Core migration error
Migration of the KUMA Core from a host to a new Kubernetes cluster may be interrupted due to a timeout at the Deploy Core transfer job step. In this case, the following error message is recorded in the log of the core-transfer migration tasks:
cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory
To prevent this error, before you start migrating the KUMA Core, find the following lines in the core-transfer job template used by the installer:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
and replace them with the following lines, which copy the same files from the {{ core_uid }} subdirectory:
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migrating the KUMA Core from a host to a new Kubernetes cluster will succeed.
If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed, follow the steps below to troubleshoot the error.
To troubleshoot the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:
1. Delete the ingress daemonset:
sudo k0s kubectl delete daemonset/ingress -n ingress
2. View the jobs in the kuma namespace and make sure that the core-transfer job is present:
sudo k0s kubectl get jobs -n kuma
3. Delete the core-transfer job:
sudo k0s kubectl delete job core-transfer -n kuma
4. On the Core host, start the MongoDB service and the KUMA Core service (substitute the ID of your Core service for the zeros):
sudo systemctl start kuma-mongodb
sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000
5. Make sure that the KUMA Core service has started successfully:
sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000
Other hosts may be stopped.
6. In the core-transfer job template, replace the following lines:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
with the following lines:
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
7. Restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migration of the KUMA Core from the host to the new Kubernetes cluster will then succeed.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually rebuilt with the new Core in the KUMA web interface.
Certificates for collectors, correlators and storages will be re-issued from the inventory file for communication with the Core within the cluster. This does not change the Core URL for components.
On the Core host, the installer renames the directories of the migrated components by adding the .moved suffix to their names (/opt/kaspersky/kuma/*.moved).
After you have verified that the Core was correctly migrated to the cluster, these directories can be deleted.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must convert the names of the /opt/kaspersky/kuma/*.moved directories back to their original format.
If an /etc/hosts file with entries not related to addresses in the 127.X.X.X range was used on the Core host, the contents of that /etc/hosts file are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
The final stage of preparing KUMA for work
KUMA update completed successfully.
Known limitations
Possible solution: restart the Core service (kuma-core.service), and the data refresh interval configured for the layout will be used.
sudo systemctl reset-failed
After running the command, the old service is no longer displayed, and the new service starts successfully.
If you want to upgrade a distributed installation of KUMA to the latest version of KUMA in a fault tolerant configuration, first upgrade your distributed installation to the latest version and then migrate the KUMA Core to a Kubernetes cluster. For further updates, use the k0s.inventory.yml inventory file with the need_transfer parameter set to false, because the KUMA Core has already been migrated to the Kubernetes cluster and does not need to be migrated again.
To migrate KUMA Core to a new Kubernetes cluster:
The kuma_core, kuma_collector, kuma_correlator, kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set the deploy_to_k8s and need_transfer parameters to true. The deploy_example_services parameter must be set to false.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with this inventory file, it looks for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host into the newly created Kubernetes cluster.
Troubleshooting the KUMA Core migration error
Migration of the KUMA Core from a host to a new Kubernetes cluster may be interrupted due to a timeout at the Deploy Core transfer job step. In this case, the following error message is recorded in the log of the core-transfer migration tasks:
cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory
To prevent this error, before you start migrating the KUMA Core, find the following lines in the core-transfer job template used by the installer:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
and replace them with the following lines, which copy the same files from the {{ core_uid }} subdirectory:
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migrating the KUMA Core from a host to a new Kubernetes cluster will succeed.
If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed, follow the steps below to troubleshoot the error.
To troubleshoot the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:
1. Delete the ingress daemonset:
sudo k0s kubectl delete daemonset/ingress -n ingress
2. View the jobs in the kuma namespace and make sure that the core-transfer job is present:
sudo k0s kubectl get jobs -n kuma
3. Delete the core-transfer job:
sudo k0s kubectl delete job core-transfer -n kuma
4. On the Core host, start the MongoDB service and the KUMA Core service (substitute the ID of your Core service for the zeros):
sudo systemctl start kuma-mongodb
sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000
5. Make sure that the KUMA Core service has started successfully:
sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000
Other hosts may be stopped.
6. In the core-transfer job template, replace the following lines:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
with the following lines:
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
7. Restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migration of the KUMA Core from the host to the new Kubernetes cluster will then succeed.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually rebuilt with the new Core in the KUMA web interface.
Certificates for collectors, correlators and storages will be re-issued from the inventory file for communication with the Core within the cluster. This does not change the Core URL for components.
On the Core host, the installer renames the directories of the migrated components by adding the .moved suffix to their names (/opt/kaspersky/kuma/*.moved).
After you have verified that the Core was correctly migrated to the cluster, these directories can be deleted.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must convert the names of the /opt/kaspersky/kuma/*.moved directories back to their original format.
If an /etc/hosts file with entries not related to addresses in the 127.X.X.X range was used on the Core host, the contents of that /etc/hosts file are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.