The update is performed the same way on all hosts using the installer and inventory file.
Version upgrade scheme:
2.0.x → 2.1.3 → 3.0.3
2.1.x → 2.1.3 → 3.0.3
2.1.3 → 3.0.3
3.0.x → 3.0.3
Upgrading from version 2.0.x to 2.1.3
To install KUMA version 2.1.3 over version 2.0.x, complete the preliminary steps and then update.
Preliminary steps
Make sure that the MongoDB component uses feature compatibility version 4.4. To check it, run the following commands on the KUMA Core host:
cd /opt/kaspersky/kuma/mongodb/bin/
./mongo
use kuma
db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1})
If the returned value is different from 4.4, set it to 4.4 using the following command:
db.adminCommand({ setFeatureCompatibilityVersion: "4.4" })
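If you prefer to run the check non-interactively, the same query can be passed to the mongo shell in one line. This is a sketch only; it assumes the bundled mongo shell supports the standard --eval flag:
cd /opt/kaspersky/kuma/mongodb/bin/
./mongo kuma --eval 'printjson(db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 }))'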
In the inventory file that you use for the upgrade, set the following parameters:
deploy_to_k8s: false
need_transfer: false
deploy_example_services: false
When the installer uses this inventory file, all KUMA components are upgraded to version 2.1.3, and the available services and storage resources are reconfigured on hosts from the kuma_storage group.
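For reference, the relevant part of such an inventory file might look as follows. This is a minimal sketch: the host names are placeholders, and the full group layout must follow the inventory file of your existing installation or the template shipped with the installer.
all:
  vars:
    deploy_to_k8s: false
    need_transfer: false
    deploy_example_services: false
  children:
    kuma:
      children:
        kuma_core:
          hosts:
            kuma-core.example.com:        # placeholder host name
        kuma_storage:
          hosts:
            kuma-storage-1.example.com:   # placeholder host name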
Updating KUMA
If the inventory file for the currently installed version is not available, use the provided inventory file template and fill in the corresponding settings. To view the list of hosts and host roles in the current KUMA system, go to the Resources → Active services section of the web interface.
The upgrade process completely reproduces the installation process.
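For example, assuming a distributed installation and the install.sh script from the distribution kit, the upgrade is started on the control machine with the same command as an installation (the inventory file name below is a placeholder; use your own file):
sudo ./install.sh distributed.inventory.yml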
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster.
To migrate KUMA Core to a new Kubernetes cluster:
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 or when a new installation was performed. In the inventory file, set the deploy_to_k8s, need_transfer, and airgap parameters to true. The deploy_example_services parameter must be set to false.
Sample inventory file with 3 dedicated controllers, 2 worker nodes, and 1 balancer.
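As a rough outline, the variables section of such a k0s.inventory.yml might look as follows. This is a sketch only: the host names are placeholders, and the exact layout of the cluster node groups (controllers, worker nodes, balancer) must be taken from the k0s inventory file template provided with the installer.
all:
  vars:
    deploy_to_k8s: true
    need_transfer: true
    airgap: true
    deploy_example_services: false
  children:
    kuma:
      children:
        kuma_core:
          hosts:
            kuma-core.example.com:   # placeholder; same host as in the existing installation
        # kuma_collector, kuma_correlator, kuma_storage: same hosts as in the existing installation
    kuma_k0s:
      # controller, worker node, and balancer groups as defined in the template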
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with the inventory file, it looks for an installed KUMA Core on all hosts where you want to deploy worker nodes of the cluster. If a Core is found, it is moved from its host into the newly created Kubernetes cluster.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually rebuilt with the new Core in the KUMA web interface.
Certificates for collectors, correlators and storages will be re-issued from the inventory file for communication with the Core within the cluster. This does not change the Core URL for components.
On the Core host, the installer does the following:
After you have verified that the Core was correctly migrated to the cluster, the /opt/kaspersky/kuma/*.moved directories that remain on the Core host can be deleted.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
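A possible way to inspect those records, assuming kubectl access through the k0s controller and that the task runs as a pod in the kuma namespace (the pod name below is a placeholder):
sudo k0s kubectl get pods -n kuma
sudo k0s kubectl logs -n kuma <core-transfer-pod-name>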
If you need to perform migration again, you must convert the names of the /opt/kaspersky/kuma/*.moved directories back to their original format.
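A minimal sketch of such a rename, assuming the installer only appended the .moved suffix to the original directory names:
for d in /opt/kaspersky/kuma/*.moved; do
  mv "$d" "${d%.moved}"
done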
If the /etc/hosts file on the Core host contained lines that do not relate to addresses in the 127.X.X.X range, the contents of that file are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
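To review what ended up in the ConfigMap, you can inspect it from a controller node. This is a sketch; it assumes the default CoreDNS ConfigMap name and namespace used by k0s:
sudo k0s kubectl get configmap coredns -n kube-system -o yaml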
The final stage of preparing KUMA for work
KUMA update completed successfully.
Upgrading from version 2.1.x to 2.1.3
To install KUMA version 2.1.3 over version 2.1.x, complete the preliminary steps and then update.
Preliminary steps
Updating KUMA
If the inventory file for the currently installed version is not available, use the provided inventory file template and fill in the corresponding settings. To view the list of hosts and host roles in the current KUMA system, go to the Resources → Active services section of the web interface.
The upgrade process completely reproduces the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster.
To migrate KUMA Core to a new Kubernetes cluster:
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 or when a new installation was performed. In the inventory file, set the deploy_to_k8s, need_transfer, and airgap parameters to true. The deploy_example_services parameter must be set to false.
Sample inventory file with 3 dedicated controllers, 2 worker nodes, and 1 balancer.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with the inventory file, it looks for an installed KUMA Core on all hosts where you want to deploy worker nodes of the cluster. If a Core is found, it is moved from its host into the newly created Kubernetes cluster.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually rebuilt with the new Core in the KUMA web interface.
Certificates for collectors, correlators and storages will be re-issued from the inventory file for communication with the Core within the cluster. This does not change the Core URL for components.
On the Core host, the installer does the following:
After you have verified that the Core was correctly migrated to the cluster, the /opt/kaspersky/kuma/*.moved directories that remain on the Core host can be deleted.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must convert the names of the /opt/kaspersky/kuma/*.moved directories back to their original format.
If the /etc/hosts file on the Core host contained lines that do not relate to addresses in the 127.X.X.X range, the contents of that file are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
The final stage of preparing KUMA for work
KUMA update completed successfully.
Upgrading from version 2.1.3 to 3.0.3
To install KUMA version 3.0.3 over version 2.1.3, complete the preliminary steps and then perform the upgrade.
Preliminary steps
Updating KUMA
Depending on the KUMA deployment scheme being used, do one of the following:
If the inventory file for the currently installed version is not available, use the provided inventory file template and fill in the corresponding settings. To view the list of hosts and host roles in the current KUMA system, go to the Resources → Active services section of the web interface.
The upgrade process completely reproduces the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster.
To migrate KUMA Core to a new Kubernetes cluster:
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 or when a new installation was performed. In the inventory file, set the deploy_to_k8s, need_transfer, and airgap parameters to true. The deploy_example_services parameter must be set to false.
Sample inventory file with 3 dedicated controllers, 2 worker nodes, and 1 balancer.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with the inventory file, it looks for an installed KUMA Core on all hosts where you want to deploy worker nodes of the cluster. If a Core is found, it is moved from its host into the newly created Kubernetes cluster.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually rebuilt with the new Core in the KUMA web interface.
Certificates for collectors, correlators and storages will be re-issued from the inventory file for communication with the Core within the cluster. This does not change the Core URL for components.
On the Core host, the installer does the following:
After you have verified that the Core was correctly migrated to the cluster, the /opt/kaspersky/kuma/*.moved directories that remain on the Core host can be deleted.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must convert the names of the /opt/kaspersky/kuma/*.moved directories back to their original format.
If the /etc/hosts file on the Core host contained lines that do not relate to addresses in the 127.X.X.X range, the contents of that file are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
The final stage of preparing KUMA for work
KUMA update completed successfully.
Known limitations
For existing users, upgrading from 2.1.3 to 3.0.3 does not update the universal dashboard layout.
Possible solution: restart the Core service (kuma-core.service); after the restart, the data refresh interval configured for the layout is used.
Upgrading from version 3.0.x to 3.0.3
To install KUMA version 3.0.3 over version 3.0.x, complete the preliminary steps and then perform the upgrade.
Preliminary steps
Updating KUMA
Depending on the KUMA deployment scheme being used, do one of the following:
If the inventory file for the currently installed version is not available, use the provided inventory file template and fill in the corresponding settings. To view the list of hosts and host roles in the current KUMA system, go to the Resources → Active services section of the web interface.
The upgrade process completely reproduces the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster.
To migrate KUMA Core to a new Kubernetes cluster:
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 or when a new installation was performed. In the inventory file, set the deploy_to_k8s, need_transfer, and airgap parameters to true. The deploy_example_services parameter must be set to false.
Sample inventory file with 3 dedicated controllers, 2 worker nodes, and 1 balancer.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with the inventory file, it looks for an installed KUMA Core on all hosts where you want to deploy worker nodes of the cluster. If a Core is found, it is moved from its host into the newly created Kubernetes cluster.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually rebuilt with the new Core in the KUMA web interface.
Certificates for collectors, correlators and storages will be re-issued from the inventory file for communication with the Core within the cluster. This does not change the Core URL for components.
On the Core host, the installer does the following:
After you have verified that the Core was correctly migrated to the cluster, the /opt/kaspersky/kuma/*.moved directories that remain on the Core host can be deleted.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must convert the names of the /opt/kaspersky/kuma/*.moved directories back to their original format.
If the /etc/hosts file on the Core host contained lines that do not relate to addresses in the 127.X.X.X range, the contents of that file are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
The final stage of preparing KUMA for work
KUMA update completed successfully.
Known limitations
For existing users, upgrading from 3.0.x to 3.0.3 does not update the universal dashboard layout.
Possible solution: restart the Core service (kuma-core.service); after the restart, the data refresh interval configured for the layout is used.
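For example, on the Core host this is typically done with systemd:
sudo systemctl restart kuma-core.service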
If you want to upgrade a distributed installation of KUMA to the latest version in a fault-tolerant configuration, first upgrade your distributed installation to the latest version and then migrate the KUMA Core to a Kubernetes cluster. For further updates, use the k0s.inventory.yml inventory file with the need_transfer parameter set to false, because the KUMA Core has already been migrated to the Kubernetes cluster and the migration does not need to be repeated.
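For those subsequent updates, the relevant variables in k0s.inventory.yml would look roughly like this (a sketch; the rest of the file stays as it was for the migration):
all:
  vars:
    deploy_to_k8s: true
    need_transfer: false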
To migrate KUMA Core to a new Kubernetes cluster:
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 or when a new installation was performed. In the inventory file, set the deploy_to_k8s, need_transfer, and airgap parameters to true. The deploy_example_services parameter must be set to false.
Sample inventory file with 3 dedicated controllers, 2 worker nodes, and 1 balancer.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with the inventory file, it looks for an installed KUMA Core on all hosts where you want to deploy worker nodes of the cluster. If a Core is found, it is moved from its host into the newly created Kubernetes cluster.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually rebuilt with the new Core in the KUMA web interface.
Certificates for collectors, correlators and storages will be re-issued from the inventory file for communication with the Core within the cluster. This does not change the Core URL for components.
On the Core host, the installer does the following:
After you have verified that the Core was correctly migrated to the cluster, the /opt/kaspersky/kuma/*.moved directories that remain on the Core host can be deleted.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must convert the names of the /opt/kaspersky/kuma/*.moved directories back to their original format.
If the /etc/hosts file on the Core host contained lines that do not relate to addresses in the 127.X.X.X range, the contents of that file are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.