Updating previous versions of KUMA
The update is performed the same way on all hosts using the installer and inventory file.
Version upgrade scheme:
2.0.x → 2.1.3 → 3.0.3 → 3.2
2.1.x → 2.1.3 → 3.0.3 → 3.2
2.1.3 → 3.0.3 → 3.2
3.0.x → 3.0.3 → 3.2
Upgrading from version 2.0.x to 2.1.3
To install KUMA version 2.1.3 over version 2.0.x, complete the preliminary steps and then perform the update.
Preliminary steps
- Create a backup copy of the KUMA Core. If necessary, you will be able to restore the backup for version 2.0.
KUMA backups created in versions 2.0 and earlier cannot be restored in version 2.1.3. This means that you cannot install KUMA 2.1.3 from scratch and restore a KUMA 2.0 backup in it.
Create a backup copy immediately after upgrading KUMA to version 2.1.3.
- Make sure that all application installation requirements are met.
- Make sure that MongoDB versions are compatible by running the following sequence of commands on the device where KUMA Core is located:
cd /opt/kaspersky/kuma/mongodb/bin/
./mongo
use kuma
db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1})
If the returned featureCompatibilityVersion value differs from 4.4, set it to 4.4 using the following command:
db.adminCommand({ setFeatureCompatibilityVersion: "4.4" })
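If you prefer a non-interactive check, the same queries can be run as one-liners (a sketch, assuming the default paths shown above):
./mongo kuma --eval 'db.adminCommand({getParameter: 1, featureCompatibilityVersion: 1})'
# run the following command only if the reported value differs from 4.4
./mongo kuma --eval 'db.adminCommand({setFeatureCompatibilityVersion: "4.4"})'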
- During installation or update, make sure that TCP port 7220 on the KUMA Core is accessible from the KUMA storage hosts.
- If a ClickHouse keeper is deployed on a separate device in the ClickHouse cluster, install a storage service on that device before performing the update:
- In the web interface, create a storage service for the keeper based on the existing storage of the cluster.
- Install the service on the device with the dedicated ClickHouse keeper.
- In the inventory file, specify the same hosts that were used when installing KUMA version 2.0.x. Set the following settings to false: deploy_to_k8s, need_transfer, and deploy_example_services (an illustrative inventory fragment is provided at the end of this list).
When the installer uses this inventory file, all KUMA components are upgraded to version 2.1.3. The available services and storage resources are also reconfigured on hosts from the kuma_storage group:
- ClickHouse systemd services are deleted.
- Certificates are deleted from the /opt/kaspersky/kuma/clickhouse/certificates directory.
- The Shard ID, Replica ID, Keeper ID, and ClickHouse configuration override fields are filled in for each node in the storage resource based on values from the inventory and configuration files of the service on the host. Subsequently, you will manage the roles of each node in the KUMA web interface.
- All existing configuration files from the /opt/kaspersky/kuma/clickhouse/cfg directory are deleted (they will be subsequently generated by the storage service).
- The value of the LimitNOFILE parameter (Service section) is changed from 64,000 to 500,000 in the kuma-storage systemd services.
- If you use alert segmentation rules, prepare the data for migrating the existing rules and save it. You can later use this data to re-create the rules. Alert segmentation rules are not migrated automatically during the update.
- To perform the update, you need a valid password for the admin user. If you forgot the admin user password, contact Technical Support to reset the current password, and then use the new password to perform the update at the next step.
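A minimal illustrative fragment of such an inventory file is shown below. It is only a sketch: the group structure and host names are placeholders, and you should keep the hosts and grouping from your existing 2.0.x inventory file, making sure only that the three parameters are set to false.
all:
  vars:
    deploy_to_k8s: false
    need_transfer: false
    deploy_example_services: false
  children:
    kuma:
      children:
        kuma_core:
          hosts:
            kuma-core.example.com:        # the same Core host as in version 2.0.x
        kuma_storage:
          hosts:
            kuma-storage-1.example.com:   # the same storage hosts as in version 2.0.x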
Updating KUMA
- Depending on the KUMA deployment scheme being used, do one of the following:
- Use the prepared distributed.inventory.yml inventory file and follow the instructions for distributed installation of the application.
- Use the prepared k0s.inventory.yml inventory file and follow the instructions for distributed installation in a high availability configuration.
If an inventory file is not available for the current version, use the provided inventory file template and fill in the corresponding settings. To view a list of hosts and host roles in the current KUMA system, go to the Resources → Active services section in the web interface.
The upgrade process completely reproduces the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster.
To migrate KUMA Core to a new Kubernetes cluster:
- Prepare the k0s.inventory.yml inventory file.
The kuma_core, kuma_collector, kuma_correlator, kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set the deploy_to_k8s and need_transfer parameters to true. The deploy_example_services parameter must be set to false.
- Follow the steps for distributed installation using your prepared k0s.inventory.yml inventory file.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with the inventory file, it looks for an installed KUMA Core on all hosts where you want to deploy worker nodes of the cluster. If a Core is found, it is moved from the host into the newly created Kubernetes cluster.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually rebuilt with the new Core in the KUMA web interface.
Certificates for collectors, correlators and storages will be re-issued from the inventory file for communication with the Core within the cluster. This does not change the Core URL for components.
On the Core host, the installer does the following:
- Removes the following systemd services from the host: kuma-core, kuma-mongodb, kuma-victoria-metrics, kuma-vmalert, and kuma-grafana.
- Deletes the internal certificate of the Core.
- Deletes the certificate files of all other components and deletes their records from MongoDB.
- Deletes the following directories:
- /opt/kaspersky/kuma/core/bin
- /opt/kaspersky/kuma/core/certificates
- /opt/kaspersky/kuma/core/log
- /opt/kaspersky/kuma/core/logs
- /opt/kaspersky/kuma/grafana/bin
- /opt/kaspersky/kuma/mongodb/bin
- /opt/kaspersky/kuma/mongodb/log
- /opt/kaspersky/kuma/victoria-metrics/bin
- Migrates data from the Core and its dependencies to a network drive within the Kubernetes cluster.
- On the Core host, it migrates the following directories:
- /opt/kaspersky/kuma/core → /opt/kaspersky/kuma/core.moved
- /opt/kaspersky/kuma/grafana → /opt/kaspersky/kuma/grafana.moved
- /opt/kaspersky/kuma/mongodb → /opt/kaspersky/kuma/mongodb.moved
- /opt/kaspersky/kuma/victoria-metrics → /opt/kaspersky/kuma/victoria-metrics.moved
After you have verified that the Core was correctly migrated to the cluster, these directories can be deleted.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
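For example, the task and its logs can be inspected with kubectl (a sketch; it assumes the migration task is exposed as a Kubernetes job named core-transfer):
kubectl --namespace kuma get jobs
kubectl --namespace kuma logs job/core-transfer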
If you need to perform migration again, you must convert the names of the /opt/kaspersky/kuma/*.moved directories back to their original format.
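For example, the directories can be renamed back with the following commands, run as root on the original Core host:
mv /opt/kaspersky/kuma/core.moved /opt/kaspersky/kuma/core
mv /opt/kaspersky/kuma/grafana.moved /opt/kaspersky/kuma/grafana
mv /opt/kaspersky/kuma/mongodb.moved /opt/kaspersky/kuma/mongodb
mv /opt/kaspersky/kuma/victoria-metrics.moved /opt/kaspersky/kuma/victoria-metrics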
If an /etc/hosts file with lines not related to addresses in the range 127.X.X.X was used on the Core host, the contents of the /etc/hosts file from the Core host are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
- When you upgrade a system that contains a large amount of data and is operating with limited resources, KUMA may return the 'Wrong admin password' error message after you enter the administrator password. Even if the password is correct, the error may be returned because KUMA could not start the Core service in time due to the resource limits. If you enter the administrator password three times without waiting for the installation to complete, the update may end with a fatal error. Resolve the timeout issue to proceed with the update.
The final stage of preparing KUMA for work
- After updating KUMA, you must clear your browser cache.
- Re-create the alert segmentation rules.
- Manually update the KUMA agents.
KUMA update completed successfully.
Upgrading from version 2.1.x to 2.1.3
To install KUMA version 2.1.3 over version 2.1.x, complete the preliminary steps and then perform the update.
Preliminary steps
- Create a backup copy of the KUMA Core. If necessary, you will be able to restore the backup for version 2.1.x.
KUMA backups created in versions earlier than 2.1.3 cannot be restored in version 2.1.3. This means that you cannot install KUMA 2.1.3 from scratch and restore a KUMA 2.1.x backup in it.
Create a backup copy immediately after upgrading KUMA to version 2.1.3.
- Make sure that all application installation requirements are met.
- During installation or update, make sure that TCP port 7220 on the KUMA Core is accessible from the KUMA storage hosts.
- To perform the update, you need a valid password for the admin user. If you forgot the admin user password, contact Technical Support to reset the current password, and then use the new password to perform the update at the next step.
Updating KUMA
- Depending on the KUMA deployment scheme being used, do one of the following:
- Use the prepared distributed.inventory.yml inventory file and follow the instructions for distributed installation of the application.
- Use the prepared k0s.inventory.yml inventory file and follow the instructions for distributed installation in a high availability configuration.
If an inventory file is not available for the current version, use the provided inventory file template and fill in the corresponding settings. To view a list of hosts and host roles in the current KUMA system, go to the Resources → Active services section in the web interface.
The upgrade process completely reproduces the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster.
To migrate KUMA Core to a new Kubernetes cluster:
- Prepare the k0s.inventory.yml inventory file.
The kuma_core, kuma_collector, kuma_correlator, kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set the deploy_to_k8s and need_transfer parameters to true. The deploy_example_services parameter must be set to false.
- Follow the steps for distributed installation using your prepared k0s.inventory.yml inventory file.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with the inventory file, it looks for an installed KUMA Core on all hosts where you want to deploy worker nodes of the cluster. If a Core is found, it is moved from the host into the newly created Kubernetes cluster.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually rebuilt with the new Core in the KUMA web interface.
Certificates for collectors, correlators and storages will be re-issued from the inventory file for communication with the Core within the cluster. This does not change the Core URL for components.
On the Core host, the installer does the following:
- Removes the following systemd services from the host: kuma-core, kuma-mongodb, kuma-victoria-metrics, kuma-vmalert, and kuma-grafana.
- Deletes the internal certificate of the Core.
- Deletes the certificate files of all other components and deletes their records from MongoDB.
- Deletes the following directories:
- /opt/kaspersky/kuma/core/bin
- /opt/kaspersky/kuma/core/certificates
- /opt/kaspersky/kuma/core/log
- /opt/kaspersky/kuma/core/logs
- /opt/kaspersky/kuma/grafana/bin
- /opt/kaspersky/kuma/mongodb/bin
- /opt/kaspersky/kuma/mongodb/log
- /opt/kaspersky/kuma/victoria-metrics/bin
- Migrates data from the Core and its dependencies to a network drive within the Kubernetes cluster.
- On the Core host, it migrates the following directories:
- /opt/kaspersky/kuma/core → /opt/kaspersky/kuma/core.moved
- /opt/kaspersky/kuma/grafana → /opt/kaspersky/kuma/grafana.moved
- /opt/kaspersky/kuma/mongodb → /opt/kaspersky/kuma/mongodb.moved
- /opt/kaspersky/kuma/victoria-metrics → /opt/kaspersky/kuma/victoria-metrics.moved
After you have verified that the Core was correctly migrated to the cluster, these directories can be deleted.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must convert the names of the /opt/kaspersky/kuma/*.moved directories back to their original format.
If an /etc/hosts file with lines not related to addresses in the range 127.X.X.X was used on the Core host, the contents of the /etc/hosts file from the Core host are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
- When you upgrade a system that contains a large amount of data and is operating with limited resources, KUMA may return the 'Wrong admin password' error message after you enter the administrator password. Even if the password is correct, the error may be returned because KUMA could not start the Core service in time due to the resource limits. If you enter the administrator password three times without waiting for the installation to complete, the update may end with a fatal error. Resolve the timeout issue to proceed with the update.
The final stage of preparing KUMA for work
- After updating KUMA, you must clear your browser cache.
- Manually update the KUMA agents.
KUMA update completed successfully.
Upgrading from version 2.1.3 to 3.0.3
To install KUMA version 3.0.3 over version 2.1.3, complete the preliminary steps and then perform the upgrade.
Preliminary steps
- Create a backup copy of the KUMA Core. If necessary, you can restore data from backup for version 2.1.3.
KUMA backups created in versions 2.1.3 and earlier cannot be restored in version 3.0.3. This means that you cannot install KUMA 3.0.3 from scratch and restore a KUMA 2.1.3 backup in it.
Create a backup copy immediately after upgrading KUMA to version 3.0.3.
- Make sure that all application installation requirements are met.
- During installation or update, make sure that TCP port 7220 on the KUMA Core is accessible from the KUMA storage hosts.
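For example, reachability of the port can be checked from each storage host before starting the installer (a sketch; the Core host name is a placeholder):
nc -zv kuma-core.example.com 7220
If firewalld is used on the Core host, the port can be opened as follows:
sudo firewall-cmd --permanent --add-port=7220/tcp
sudo firewall-cmd --reload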
Updating KUMA
Depending on the KUMA deployment scheme being used, do one of the following:
- Use the prepared distributed.inventory.yml inventory file and follow the instructions for distributed installation of the application.
- Use the prepared k0s.inventory.yml inventory file and follow the instructions for distributed installation in a high availability configuration.
If an inventory file is not available for the current version, use the provided inventory file template and fill in the corresponding settings. To view a list of hosts and host roles in the current KUMA system, go to the Resources → Active services section in the web interface.
The upgrade process completely reproduces the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster.
To migrate KUMA Core to a new Kubernetes cluster:
- Prepare the k0s.inventory.yml inventory file.
The kuma_core, kuma_collector, kuma_correlator, kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set the deploy_to_k8s and need_transfer parameters to true. The deploy_example_services parameter must be set to false.
- Follow the steps for distributed installation using your prepared k0s.inventory.yml inventory file.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with the inventory file, it looks for an installed KUMA Core on all hosts where you want to deploy worker nodes of the cluster. If a Core is found, it is moved from the host into the newly created Kubernetes cluster.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually rebuilt with the new Core in the KUMA web interface.
Certificates for collectors, correlators and storages will be re-issued from the inventory file for communication with the Core within the cluster. This does not change the Core URL for components.
On the Core host, the installer does the following:
- Removes the following systemd services from the host: kuma-core, kuma-mongodb, kuma-victoria-metrics, kuma-vmalert, and kuma-grafana.
- Deletes the internal certificate of the Core.
- Deletes the certificate files of all other components and deletes their records from MongoDB.
- Deletes the following directories:
- /opt/kaspersky/kuma/core/bin
- /opt/kaspersky/kuma/core/certificates
- /opt/kaspersky/kuma/core/log
- /opt/kaspersky/kuma/core/logs
- /opt/kaspersky/kuma/grafana/bin
- /opt/kaspersky/kuma/mongodb/bin
- /opt/kaspersky/kuma/mongodb/log
- /opt/kaspersky/kuma/victoria-metrics/bin
- Migrates data from the Core and its dependencies to a network drive within the Kubernetes cluster.
- On the Core host, it migrates the following directories:
- /opt/kaspersky/kuma/core → /opt/kaspersky/kuma/core.moved
- /opt/kaspersky/kuma/grafana → /opt/kaspersky/kuma/grafana.moved
- /opt/kaspersky/kuma/mongodb → /opt/kaspersky/kuma/mongodb.moved
- /opt/kaspersky/kuma/victoria-metrics → /opt/kaspersky/kuma/victoria-metrics.moved
After you have verified that the Core was correctly migrated to the cluster, these directories can be deleted.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must convert the names of the /opt/kaspersky/kuma/*.moved directories back to their original format.
If an /etc/hosts file with lines not related to addresses in the range 127.X.X.X was used on the Core host, the contents of the /etc/hosts file from the Core host are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
The final stage of preparing KUMA for work
- After updating KUMA, you must clear your browser cache.
- Manually update the KUMA agents.
KUMA update completed successfully.
Known limitations
- The hierarchical structure is not supported in version 3.0.2; therefore, all KUMA hosts become standalone hosts when upgrading from version 2.1.3 to 3.0.2.
- For existing users, upgrading from 2.1.3 to 3.0.2 does not update the universal dashboard layout.
Possible solution: restart the Core service (kuma-core.service), and the data refresh interval configured for the layout will be used.
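For example, on the Core host:
sudo systemctl restart kuma-core.service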
Upgrading from version 3.0.x to 3.0.3
To install KUMA version 3.0.3 over version 3.0.x, complete the preliminary steps and then perform the upgrade.
Preliminary steps
- Create a backup copy of the KUMA Core. If necessary, you can restore data from backup for version 3.0.x.
KUMA backups created in versions earlier than 3.0.3 cannot be restored in version 3.0.3. This means that you cannot install KUMA 3.0.3 from scratch and restore a KUMA 3.0.x backup in it.
Create a backup copy immediately after upgrading KUMA to version 3.0.3.
- Make sure that all application installation requirements are met.
- During installation or update, make sure that TCP port 7220 on the KUMA Core is accessible from the KUMA storage hosts.
Updating KUMA
Depending on the KUMA deployment scheme being used, do one of the following:
- Use the prepared distributed.inventory.yml inventory file and follow the instructions for distributed installation of the application.
- Use the prepared k0s.inventory.yml inventory file and follow the instructions for distributed installation in a high availability configuration.
If an inventory file is not available for the current version, use the provided inventory file template and fill in the corresponding settings. To view a list of hosts and host roles in the current KUMA system, go to the Resources → Active services section in the web interface.
The upgrade process completely reproduces the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster.
To migrate KUMA Core to a new Kubernetes cluster:
- Prepare the k0s.inventory.yml inventory file.
The kuma_core, kuma_collector, kuma_correlator, kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set the deploy_to_k8s and need_transfer parameters to true. The deploy_example_services parameter must be set to false.
- Follow the steps for distributed installation using your prepared k0s.inventory.yml inventory file.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with the inventory file, it looks for an installed KUMA Core on all hosts where you want to deploy worker nodes of the cluster. If a Core is found, it is moved from the host into the newly created Kubernetes cluster.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually rebuilt with the new Core in the KUMA web interface.
Certificates for collectors, correlators and storages will be re-issued from the inventory file for communication with the Core within the cluster. This does not change the Core URL for components.
On the Core host, the installer does the following:
- Removes the following systemd services from the host: kuma-core, kuma-mongodb, kuma-victoria-metrics, kuma-vmalert, and kuma-grafana.
- Deletes the internal certificate of the Core.
- Deletes the certificate files of all other components and deletes their records from MongoDB.
- Deletes the following directories:
- /opt/kaspersky/kuma/core/bin
- /opt/kaspersky/kuma/core/certificates
- /opt/kaspersky/kuma/core/log
- /opt/kaspersky/kuma/core/logs
- /opt/kaspersky/kuma/grafana/bin
- /opt/kaspersky/kuma/mongodb/bin
- /opt/kaspersky/kuma/mongodb/log
- /opt/kaspersky/kuma/victoria-metrics/bin
- Migrates data from the Core and its dependencies to a network drive within the Kubernetes cluster.
- On the Core host, it migrates the following directories:
- /opt/kaspersky/kuma/core → /opt/kaspersky/kuma/core.moved
- /opt/kaspersky/kuma/grafana → /opt/kaspersky/kuma/grafana.moved
- /opt/kaspersky/kuma/mongodb → /opt/kaspersky/kuma/mongodb.moved
- /opt/kaspersky/kuma/victoria-metrics → /opt/kaspersky/kuma/victoria-metrics.moved
After you have verified that the Core was correctly migrated to the cluster, these directories can be deleted.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must convert the names of the /opt/kaspersky/kuma/*.moved directories back to their original format.
If an /etc/hosts file with lines not related to addresses in the range 127.X.X.X was used on the Core host, the contents of the /etc/hosts file from the Core host are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
The final stage of preparing KUMA for work
- After updating KUMA, you must clear your browser cache.
- Manually update the KUMA agents.
KUMA update completed successfully.
Known limitations
For existing users, upgrading from 3.0.x to 3.0.3 does not update the universal dashboard layout.
Possible solution: restart the Core service (kuma-core.service), and the data refresh interval configured for the layout will be used.
Upgrading from version 3.0.3 to 3.2
To install KUMA version 3.2 over version 3.0.3, complete the preliminary steps and then perform the upgrade.
Preliminary steps
- Create a backup copy of the KUMA Core. If necessary, you can restore data from backup for version 3.0.3.
KUMA backups created in versions 3.0.3 and earlier cannot be restored in version 3.2. This means that you cannot install KUMA 3.2 from scratch and restore a KUMA 3.0.3 backup in it.
Create a backup copy immediately after upgrading KUMA to version 3.2.
- Make sure that all application installation requirements are met.
- During installation or update, make sure that TCP port 7220 on the KUMA Core is accessible from the KUMA storage hosts.
Updating KUMA
Depending on the KUMA deployment scheme being used, do one of the following:
- Use the prepared distributed.inventory.yml inventory file and follow the instructions for distributed installation of the application.
- Use the prepared k0s.inventory.yml inventory file and follow the instructions for distributed installation in a high availability configuration.
If an inventory file is not available for the current version, use the provided inventory file template and fill in the corresponding settings. To view a list of hosts and host roles in the current KUMA system, go to the Resources → Active services section in the web interface.
The upgrade process completely reproduces the installation process.
If you want to upgrade from a distributed installation to a distributed installation in a high availability configuration, first upgrade the distributed installation and then migrate the Core to a Kubernetes cluster. For further updates, use the k0s.inventory.yml inventory file with the need_transfer parameter set to false, because the KUMA Core has already been migrated to the Kubernetes cluster and does not need to be migrated again.
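An illustrative fragment of the relevant parameters for such further updates is shown below; this is only a sketch, and the host groups and the rest of the file stay as in your existing k0s.inventory.yml:
all:
  vars:
    deploy_to_k8s: true
    need_transfer: false          # the KUMA Core is already running in the Kubernetes cluster
    deploy_example_services: false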
To migrate KUMA Core to a new Kubernetes cluster:
- Prepare the k0s.inventory.yml inventory file.
The kuma_core, kuma_collector, kuma_correlator, kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set the deploy_to_k8s and need_transfer parameters to true. The deploy_example_services parameter must be set to false.
- Follow the steps for distributed installation using your prepared k0s.inventory.yml inventory file.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with the inventory file, it looks for an installed KUMA Core on all hosts where you want to deploy worker nodes of the cluster. If a Core is found, it is moved from the host into the newly created Kubernetes cluster.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually rebuilt with the new Core in the KUMA web interface.
Certificates for collectors, correlators and storages will be re-issued from the inventory file for communication with the Core within the cluster. This does not change the Core URL for components.
On the Core host, the installer does the following:
- Removes the following systemd services from the host: kuma-core, kuma-mongodb, kuma-victoria-metrics, kuma-vmalert, and kuma-grafana.
- Deletes the internal certificate of the Core.
- Deletes the certificate files of all other components and deletes their records from MongoDB.
- Deletes the following directories:
- /opt/kaspersky/kuma/core/bin
- /opt/kaspersky/kuma/core/certificates
- /opt/kaspersky/kuma/core/log
- /opt/kaspersky/kuma/core/logs
- /opt/kaspersky/kuma/grafana/bin
- /opt/kaspersky/kuma/mongodb/bin
- /opt/kaspersky/kuma/mongodb/log
- /opt/kaspersky/kuma/victoria-metrics/bin
- Migrates data from the Core and its dependencies to a network drive within the Kubernetes cluster.
- On the Core host, it migrates the following directories:
- /opt/kaspersky/kuma/core → /opt/kaspersky/kuma/core.moved
- /opt/kaspersky/kuma/grafana → /opt/kaspersky/kuma/grafana.moved
- /opt/kaspersky/kuma/mongodb → /opt/kaspersky/kuma/mongodb.moved
- /opt/kaspersky/kuma/victoria-metrics → /opt/kaspersky/kuma/victoria-metrics.moved
After you have verified that the Core was correctly migrated to the cluster, these directories can be deleted.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must convert the names of the /opt/kaspersky/kuma/*.moved directories back to their original format.
If an /etc/hosts file with lines not related to addresses in the range 127.X.X.X was used on the Core host, the contents of the /etc/hosts file from the Core host are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
The final stage of preparing KUMA for work
- After updating KUMA, you must clear your browser cache.
- Manually update the KUMA agents.
KUMA update completed successfully.
Known limitations
- For existing users, upgrading from 3.0.3 to 3.2 does not update the universal dashboard layout.
Possible solution: restart the Core service (kuma-core.service), and the data refresh interval configured for the layout will be used.
- If the old Core service ("kuma-core.service") is still displayed after the upgrade, run the following command after the installation is complete:
sudo systemctl reset-failed
After running the command, the old service is no longer displayed, and the new service starts successfully.
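To confirm that no stale units remain, you can then list failed units:
sudo systemctl --failed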
If you want to upgrade a distributed installation of KUMA to the latest version of KUMA in a fault tolerant configuration, first upgrade your distributed installation to the latest version and then migrate the KUMA Core to a Kubernetes cluster. For further updates, use the k0s.inventory.yml inventory file with the need_transfer parameter set to false, because the KUMA Core has already been migrated to the Kubernetes cluster and does not need to be migrated again.
To migrate KUMA Core to a new Kubernetes cluster:
- Prepare the k0s.inventory.yml inventory file.
The kuma_core, kuma_collector, kuma_correlator, kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set the deploy_to_k8s and need_transfer parameters to true. The deploy_example_services parameter must be set to false.
- Follow the steps for distributed installation using your prepared k0s.inventory.yml inventory file.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with the inventory file, it looks for an installed KUMA Core on all hosts where you want to deploy worker nodes of the cluster. If a Core is found, it is moved from the host into the newly created Kubernetes cluster.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually rebuilt with the new Core in the KUMA web interface.
Certificates for collectors, correlators and storages will be re-issued from the inventory file for communication with the Core within the cluster. This does not change the Core URL for components.
On the Core host, the installer does the following:
- Removes the following systemd services from the host: kuma-core, kuma-mongodb, kuma-victoria-metrics, kuma-vmalert, and kuma-grafana.
- Deletes the internal certificate of the Core.
- Deletes the certificate files of all other components and deletes their records from MongoDB.
- Deletes the following directories:
- /opt/kaspersky/kuma/core/bin
- /opt/kaspersky/kuma/core/certificates
- /opt/kaspersky/kuma/core/log
- /opt/kaspersky/kuma/core/logs
- /opt/kaspersky/kuma/grafana/bin
- /opt/kaspersky/kuma/mongodb/bin
- /opt/kaspersky/kuma/mongodb/log
- /opt/kaspersky/kuma/victoria-metrics/bin
- Migrates data from the Core and its dependencies to a network drive within the Kubernetes cluster.
- On the Core host, it migrates the following directories:
- /opt/kaspersky/kuma/core → /opt/kaspersky/kuma/core.moved
- /opt/kaspersky/kuma/grafana → /opt/kaspersky/kuma/grafana.moved
- /opt/kaspersky/kuma/mongodb → /opt/kaspersky/kuma/mongodb.moved
- /opt/kaspersky/kuma/victoria-metrics → /opt/kaspersky/kuma/victoria-metrics.moved
After you have verified that the Core was correctly migrated to the cluster, these directories can be deleted.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must convert the names of the /opt/kaspersky/kuma/*.moved directories back to their original format.
If an /etc/hosts file with lines not related to addresses in the range 127.X.X.X was used on the Core host, the contents of the /etc/hosts file from the Core host are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.