The upgrade procedure is the same for all hosts and involves using the installer and inventory file.
Version upgrade scheme:
2.0.x → 2.1.3 → 3.0.3 → 3.2.x
2.1.x → 2.1.3 → 3.0.3 → 3.2.x
2.1.3 → 3.0.3 → 3.2.x
3.0.x → 3.0.3 → 3.2.x
Upgrading from version 2.0.x to 2.1.3
To install KUMA version 2.1.3 over version 2.0.x, complete the preliminary steps and then perform the upgrade.
Preliminary steps
KUMA backups created in versions 2.0 and earlier cannot be restored in version 2.1.3. This means that you cannot install KUMA 2.1.3 from scratch and restore a KUMA 2.0 backup in it.
Create a backup copy immediately after upgrading KUMA to version 2.1.3.
Make sure that the MongoDB component version is 4.4. To check it, run the following commands on the host where the KUMA Core is installed:
cd /opt/kaspersky/kuma/mongodb/bin/
./mongo
use kuma
db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1})
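For reference, a minimal sketch of what the version check typically returns when the compatibility version is already correct (the exact formatting depends on the MongoDB shell build and deployment type):
{ "featureCompatibilityVersion" : { "version" : "4.4" }, "ok" : 1 }
If the returned version is 4.4, no further action with MongoDB is needed.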
If the component version is different from 4.4, set the version to 4.4 using the following command:
db.adminCommand({ setFeatureCompatibilityVersion: "4.4" })
In the inventory file that you use for the upgrade, set the following parameters:
deploy_to_k8s: false
need_transfer: false
deploy_example_services: false
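For illustration only, a fragment of such an inventory file might look like the following; the host names are placeholders, and a real file contains additional groups and variables:
all:
  vars:
    deploy_to_k8s: false
    need_transfer: false
    deploy_example_services: false
  children:
    kuma:
      children:
        kuma_core:
          hosts:
            kuma-core.example.com:
        kuma_storage:
          hosts:
            kuma-storage-1.example.com:
            kuma-storage-2.example.com: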
When the installer uses this inventory file, all KUMA components are upgraded to version 2.1.3. The available services and storage resources on hosts from the kuma_storage group are also reconfigured.
Upgrading KUMA
If an inventory file is not available for the current version, use the provided inventory file template and edit it as necessary. To view the list of hosts and host roles in your current KUMA system, go to the Resources → Active services section of the KUMA web interface.
The upgrade process mirrors the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, you must first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster.
To migrate KUMA Core to a new Kubernetes cluster:
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set deploy_to_k8s: true and need_transfer: true. Set deploy_example_services: false.
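A minimal, illustrative fragment of such a k0s.inventory.yml file is shown below; the host names are placeholders, and the sections that describe the Kubernetes cluster nodes themselves are omitted:
all:
  vars:
    deploy_to_k8s: true
    need_transfer: true
    deploy_example_services: false
  children:
    kuma:
      children:
        kuma_core:
          hosts:
            kuma-core.example.com:
        kuma_collector:
          hosts:
            kuma-collector.example.com:
        kuma_correlator:
          hosts:
            kuma-correlator.example.com:
        kuma_storage:
          hosts:
            kuma-storage-1.example.com: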
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with an inventory file, the installer looks for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host to the newly created Kubernetes cluster.
Resolving the KUMA Core migration error
Migration of the KUMA Core from a host to a new Kubernetes cluster may be aborted due to a timeout at the Deploy Core transfer job step. In this case, the following error message is recorded in the log of core-transfer migration tasks:
cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory
To prevent this error, before you start migrating the KUMA Core:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migrating the KUMA Core from a host to a new Kubernetes cluster will succeed.
If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed with an error, follow the steps below to fix the error.
To fix the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:
sudo k0s kubectl delete daemonset/ingress -n ingress
sudo k0s kubectl get jobs -n kuma
sudo k0s kubectl delete job core-transfer -n kuma
sudo systemctl start kuma-mongodb
sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000
sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000
Other hosts do not need to be running.
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. The migration of the KUMA Core from a host to a new Kubernetes cluster will succeed.
If the KUMA Core is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually recreated with the new Core in the KUMA web interface.
For collectors, correlators and storages from the inventory file, certificates for communication with the Core inside the cluster will be reissued. This does not change the URL of the Core for components.
On the Core host, the installer renames the KUMA data directories in /opt/kaspersky/kuma/ by adding the .moved suffix; the original data therefore remains on the host.
After you have verified that the Core was correctly migrated to the cluster, you can delete these directories.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must restore the original names of the /opt/kaspersky/kuma/*.moved directories.
If the /etc/hosts file on the Core host contained lines that were not related to addresses in the 127.X.X.X range, the contents of the /etc/hosts file from the Core host are entered into the coredns ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
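If you need to review what was written to the ConfigMap, you can inspect it with kubectl. This is an optional check; the command below assumes the CoreDNS ConfigMap keeps its default name and namespace in the k0s cluster:
sudo k0s kubectl get configmap coredns -n kube-system -o yaml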
The final stage of preparing KUMA for work
KUMA is successfully upgraded.
Upgrading from version 2.1.x to 2.1.3
To install KUMA version 2.1.3 over version 2.1.x, complete the preliminary steps and then perform the upgrade.
Preliminary steps
KUMA backups created in versions earlier than 2.1.3 cannot be restored in version 2.1.3. This means that you cannot install KUMA 2.1.3 from scratch and restore a KUMA 2.1.x backup in it.
Create a backup copy immediately after upgrading KUMA to version 2.1.3.
Upgrading KUMA
If an inventory file is not available for the current version, use the provided inventory file template and edit it as necessary. To view the list of hosts and host roles in your current KUMA system, go to the Resources → Active services section of the KUMA web interface.
The upgrade process mirrors the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, you must first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster.
To migrate KUMA Core to a new Kubernetes cluster:
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set deploy_to_k8s: true and need_transfer: true. Set deploy_example_services: false.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with an inventory file, the installer looks for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host to the newly created Kubernetes cluster.
Resolving the KUMA Core migration error
Migration of the KUMA Core from a host to a new Kubernetes cluster may be aborted due to a timeout at the Deploy Core transfer job step. In this case, the following error message is recorded in the log of core-transfer migration tasks:
cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory
To prevent this error, before you start migrating the KUMA Core:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migrating the KUMA Core from a host to a new Kubernetes cluster will succeed.
If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed with an error, follow the steps below to fix the error.
To fix the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:
sudo k0s kubectl delete daemonset/ingress -n ingress
sudo k0s kubectl get jobs -n kuma
sudo k0s kubectl delete job core-transfer -n kuma
sudo systemctl start kuma-mongodb
sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000
sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000
Other hosts do not need to be running.
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. The migration of the KUMA Core from a host to a new Kubernetes cluster will succeed.
If the KUMA Core is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually recreated with the new Core in the KUMA web interface.
For collectors, correlators and storages from the inventory file, certificates for communication with the Core inside the cluster will be reissued. This does not change the URL of the Core for components.
On the Core host, the installer renames the KUMA data directories in /opt/kaspersky/kuma/ by adding the .moved suffix; the original data therefore remains on the host.
After you have verified that the Core was correctly migrated to the cluster, you can delete these directories.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must restore the original names of the /opt/kaspersky/kuma/*.moved directories.
If the /etc/hosts file on the Core host contained lines that were not related to addresses in the 127.X.X.X range, the contents of the /etc/hosts file from the Core host are entered into the coredns ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
The final stage of preparing KUMA for work
KUMA update completed successfully.
Upgrading from version 2.1.3 to 3.0.3
To install KUMA version 3.0.3 over version 2.1.3, complete the preliminary steps and then perform the upgrade.
Preliminary steps
KUMA backups created in versions 2.1.3 and earlier cannot be restored in version 3.0.3. This means that you cannot install KUMA 3.0.3 from scratch and restore a KUMA 2.1.3 backup in it.
Create a backup copy immediately after upgrading KUMA to version 3.0.3.
Upgrading KUMA
Depending on the KUMA deployment scheme that you are using, do one of the following:
If an inventory file is not available for the current version, use the provided inventory file template and edit it as necessary. To view the list of hosts and host roles in your current KUMA system, go to the Resources → Active services section of the KUMA web interface.
The upgrade process mirrors the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, you must first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster.
To migrate KUMA Core to a new Kubernetes cluster:
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set deploy_to_k8s: true and need_transfer: true. Set deploy_example_services: false.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with an inventory file, the installer looks for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host to the newly created Kubernetes cluster.
Resolving the KUMA Core migration error
Migration of the KUMA Core from a host to a new Kubernetes cluster may be aborted due to a timeout at the Deploy Core transfer job step. In this case, the following error message is recorded in the log of core-transfer migration tasks:
cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory
To prevent this error, before you start migrating the KUMA Core:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migrating the KUMA Core from a host to a new Kubernetes cluster will succeed.
If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed with an error, follow the steps below to fix the error.
To fix the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:
sudo k0s kubectl delete daemonset/ingress -n ingress
sudo k0s kubectl get jobs -n kuma
sudo k0s kubectl delete job core-transfer -n kuma
sudo systemctl start kuma-mongodb
sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000
sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000
Other hosts do not need to be running.
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. The migration of the KUMA Core from a host to a new Kubernetes cluster will succeed.
If the KUMA Core is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually recreated with the new Core in the KUMA web interface.
For collectors, correlators and storages from the inventory file, certificates for communication with the Core inside the cluster will be reissued. This does not change the URL of the Core for components.
On the Core host, the installer renames the KUMA data directories in /opt/kaspersky/kuma/ by adding the .moved suffix; the original data therefore remains on the host.
After you have verified that the Core was correctly migrated to the cluster, you can delete these directories.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must restore the original names of the /opt/kaspersky/kuma/*.moved directories.
If the /etc/hosts file on the Core host contained lines that were not related to addresses in the 127.X.X.X range, the contents of the /etc/hosts file from the Core host are entered into the coredns ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
The final stage of preparing KUMA for work
KUMA update completed successfully.
Known limitations
For existing users, after upgrading from 2.1.3 to 3.0.3, the universal dashboard layout is not refreshed.
Possible solution: restart the Core service (kuma-core.service); the data will then be refreshed at the interval configured for the layout.
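A minimal sketch of that restart, assuming the Core runs as a systemd service on the host rather than in a Kubernetes cluster:
sudo systemctl restart kuma-core.service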
Upgrading from version 3.0.x to 3.0.3
To install KUMA version 3.0.3 over version 3.0.x, complete the preliminary steps and then perform the upgrade.
Preliminary steps
KUMA backups created in versions earlier than 3.0.3 cannot be restored in version 3.0.3. This means that you cannot install KUMA 3.0.3 from scratch and restore a KUMA 3.0.x backup in it.
Create a backup copy immediately after upgrading KUMA to version 3.0.3.
Upgrading KUMA
Depending on the KUMA deployment scheme that you are using, do one of the following:
If an inventory file is not available for the current version, use the provided inventory file template and edit it as necessary. To view the list of hosts and host roles in your current KUMA system, go to the Resources → Active services section of the KUMA web interface.
The upgrade process mirrors the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, you must first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster.
To migrate KUMA Core to a new Kubernetes cluster:
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set deploy_to_k8s: true and need_transfer: true. Set deploy_example_services: false.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with an inventory file, the installer looks for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host to the newly created Kubernetes cluster.
Resolving the KUMA Core migration error
Migration of the KUMA Core from a host to a new Kubernetes cluster may be aborted due to a timeout at the Deploy Core transfer job step. In this case, the following error message is recorded in the log of core-transfer migration tasks:
cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory
To prevent this error, before you start migrating the KUMA Core:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migrating the KUMA Core from a host to a new Kubernetes cluster will succeed.
If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed with an error, follow the steps below to fix the error.
To fix the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:
sudo k0s kubectl delete daemonset/ingress -n ingress
sudo k0s kubectl get jobs -n kuma
sudo k0s kubectl delete job core-transfer -n kuma
sudo systemctl start kuma-mongodb
sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000
sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000
Other hosts do not need to be running.
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. The migration of the KUMA Core from a host to a new Kubernetes cluster will succeed.
If the KUMA Core is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually recreated with the new Core in the KUMA web interface.
For collectors, correlators and storages from the inventory file, certificates for communication with the Core inside the cluster will be reissued. This does not change the URL of the Core for components.
On the Core host, the installer renames the KUMA data directories in /opt/kaspersky/kuma/ by adding the .moved suffix; the original data therefore remains on the host.
After you have verified that the Core was correctly migrated to the cluster, you can delete these directories.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must restore the original names of the /opt/kaspersky/kuma/*.moved directories.
If the /etc/hosts file on the Core host contained lines that were not related to addresses in the 127.X.X.X range, the contents of the /etc/hosts file from the Core host are entered into the coredns ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
The final stage of preparing KUMA for work
KUMA update completed successfully.
Known limitations
For existing users, after upgrading from 3.0.x to 3.0.3, the universal dashboard layout is not refreshed.
Possible solution: restart the Core service (kuma-core.service); the data will then be refreshed at the interval configured for the layout.
Upgrading from version 3.0.3 to 3.2.x
To install KUMA version 3.2.x over version 3.0.3, complete the preliminary steps and then perform the upgrade.
Preliminary steps
KUMA backups created in versions 3.0.3 and earlier cannot be restored in version 3.2.x. This means that you cannot install KUMA 3.2.x from scratch and restore a KUMA 3.0.3 backup in it.
Create a backup copy immediately after upgrading KUMA to version 3.2.x.
Upgrading KUMA
Depending on the KUMA deployment scheme that you are using, do one of the following:
If an inventory file is not available for the current version, use the provided inventory file template and edit it as necessary. To view the list of hosts and host roles in your current KUMA system, go to the Resources → Active services section of the KUMA web interface.
The upgrade process mirrors the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster. For subsequent upgrades, use the k0s.inventory.yml inventory file with the need_transfer parameter set to false, because the KUMA Core has already been migrated to the Kubernetes cluster and you do not need to repeat this. Illustrative inventory values for those subsequent upgrades are shown below.
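As a sketch, the relevant variables in that inventory file would then look as follows (only the parameters mentioned above are shown):
all:
  vars:
    deploy_to_k8s: true
    need_transfer: false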
To migrate KUMA Core to a new Kubernetes cluster:
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set deploy_to_k8s: true and need_transfer: true. Set deploy_example_services: false.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with an inventory file, the installer looks for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host to the newly created Kubernetes cluster.
Resolving the KUMA Core migration error
Migration of the KUMA Core from a host to a new Kubernetes cluster may be aborted due to a timeout at the Deploy Core transfer job step. In this case, the following error message is recorded in the log of core-transfer migration tasks:
cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory
To prevent this error, before you start migrating the KUMA Core:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migrating the KUMA Core from a host to a new Kubernetes cluster will succeed.
If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed with an error, follow the steps below to fix the error.
To fix the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:
sudo k0s kubectl delete daemonset/ingress -n ingress
sudo k0s kubectl get jobs -n kuma
sudo k0s kubectl delete job core-transfer -n kuma
sudo systemctl start kuma-mongodb
sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000
sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000
Other hosts do not need to be running.
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. The migration of the KUMA Core from a host to a new Kubernetes cluster will succeed.
If the KUMA Core is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually recreated with the new Core in the KUMA web interface.
For collectors, correlators and storages from the inventory file, certificates for communication with the Core inside the cluster will be reissued. This does not change the URL of the Core for components.
On the Core host, the installer renames the KUMA data directories in /opt/kaspersky/kuma/ by adding the .moved suffix; the original data therefore remains on the host.
After you have verified that the Core was correctly migrated to the cluster, you can delete these directories.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must restore the original names of the /opt/kaspersky/kuma/*.moved directories.
If the /etc/hosts file on the Core host contained lines that were not related to addresses in the 127.X.X.X range, the contents of the /etc/hosts file from the Core host are entered into the coredns ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
The final stage of preparing KUMA for work
KUMA update completed successfully.
Known limitations
For existing users, after upgrading from 3.0.3 to 3.2.x, the universal dashboard layout is not refreshed.
Possible solution: restart the Core service (kuma-core.service); the data will then be refreshed at the interval configured for the layout.
sudo systemctl reset-failed
After running the command, the old service is no longer displayed, and the new service starts successfully.
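As an optional check that is not part of the documented procedure, you can list failed units before and after running the command to confirm that the stale entry is gone:
sudo systemctl --failed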
If you want to upgrade a distributed installation of KUMA to the latest version of KUMA in a fault tolerant configuration, first upgrade your distributed installation to the latest version and then migrate KUMA Core to a Kubernetes cluster. For further upgrades, use the k0s.inventory.yml inventory file with the need_transfer parameter set to false, because the KUMA Core has already been migrated to the Kubernetes cluster and you do not need to repeat this.
To migrate KUMA Core to a new Kubernetes cluster:
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set deploy_to_k8s: true and need_transfer: true. Set deploy_example_services: false.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with an inventory file, the installer looks for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host to the newly created Kubernetes cluster.
Resolving the KUMA Core migration error
Migration of the KUMA Core from a host to a new Kubernetes cluster may be aborted due to a timeout at the Deploy Core transfer job step. In this case, the following error message is recorded in the log of core-transfer migration tasks:
cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory
To prevent this error, before you start migrating the KUMA Core:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migrating the KUMA Core from a host to a new Kubernetes cluster will succeed.
If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed with an error, follow the steps below to fix the error.
To fix the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:
sudo k0s kubectl delete daemonset/ingress -n ingress
sudo k0s kubectl get jobs -n kuma
sudo k0s kubectl delete job core-transfer -n kuma
sudo systemctl start kuma-mongodb
sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000
sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000
Other hosts do not need to be running.
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. The migration of the KUMA Core from a host to a new Kubernetes cluster will succeed.
If the KUMA Core is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually recreated with the new Core in the KUMA web interface.
For collectors, correlators and storages from the inventory file, certificates for communication with the Core inside the cluster will be reissued. This does not change the URL of the Core for components.
On the Core host, the installer renames the KUMA data directories in /opt/kaspersky/kuma/ by adding the .moved suffix; the original data therefore remains on the host.
After you have verified that the Core was correctly migrated to the cluster, you can delete these directories.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must restore the original names of the /opt/kaspersky/kuma/*.moved directories.
If the /etc/hosts file on the Core host contained lines that were not related to addresses in the 127.X.X.X range, the contents of the /etc/hosts file from the Core host are entered into the coredns ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.