Migrating alert events to ClickHouse after upgrading KUMA from version 3.4.x

After upgrading KUMA from version 3.4.x to version 4.0, you need to migrate alert events from MongoDB to ClickHouse and specify a storage in the alert filling settings.

After upgrading KUMA, the installer stops the kuma-mongodb service, which you must start manually before migrating alert events. If the KUMA Core is deployed in a Kubernetes cluster, the MongoDB container is also not started after the upgrade, so you need to start it before the migration.

If you need to abort the alert event migration command, press Ctrl+C in the terminal. A notification appears stating that the command will be stopped. The current batch of events is safely transferred to the storage, the migration progress is retained, and the process can be continued the next time you run the command.

Migration is available for the following installation types:

Migrating alert events when upgrading KUMA in a non-high-availability configuration

To migrate alert events in a non-high-availability configuration:

  1. In the KUMA web interface, in the Resources → Active services section, select the storage to which you want to move the alert events and click Copy cluster ID. You will need the ID of the storage cluster when running the alert event migration command.
  2. On the KUMA Core server, start kuma-mongodb.service using the following command:

    sudo systemctl start kuma-mongodb.service
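    Optionally, before starting the migration, you can make sure that the service is running, for example, by checking its status with the standard systemd command:

    sudo systemctl status kuma-mongodb.service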

  3. Go to the /opt/kaspersky/kuma directory and run the following command, substituting your values for the options and specifying the previously obtained storage cluster ID in the --cluster-id option:

    ./kuma tools migrate --mongo="mongodb://<name of the MongoDB host>:<port>" --core-url="https://<FQDN of the KUMA Core server>:<port>" --core-dir="<KUMA Core data storage directory: depending on the installation type, /opt/kaspersky/kuma/core/00000000-0000-0000-0000-000000000000 or /opt/kaspersky/kuma/core>" --cluster-id="<ID of the storage cluster where you want to move KUMA 3.4 alerts>" --batch=<number of alerts per batch> 2>&1 | tee /tmp/migrate.log | grep -v "debug"

    Example:

    ./kuma tools migrate --mongo="mongodb://localhost:27017" --core-url="https://kuma.example.com:7210" --core-dir="/opt/kaspersky/kuma/core/00000000-0000-0000-0000-000000000000" --cluster-id="0a123456-789a-0123-ab12-a3fab45ab6a7" --batch=10000 2>&1 | tee /tmp/migrate.log | grep -v "debug"

    Possible settings for alert event migration from MongoDB to ClickHouse

    The complete output of the command is saved to the /tmp/migrate.log file.
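    Because debug messages are filtered out of the on-screen output, you can follow the complete log in a separate terminal session while the migration is running, for example:

    tail -f /tmp/migrate.log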

    Migration can take a long time if there are many alerts. To speed up the migration process, you can use the --batch option and pre-allocate additional disk space and RAM. For example, you can migrate 150,000 alerts in one iteration lasting approximately three minutes with 12 GB of RAM for the migrator and MongoDB and 30 GB of disk space with the --batch option set to 150000. In this example, the values and migration duration are given as a rough estimate. The duration may differ depending on the number of alerts and the number of events in alerts, the available disk space and RAM, and the number of alert events per batch specified in the --batch option. You can adapt these values to your needs.
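    For reference, the example command above with a larger batch size under such a resource allocation might look as follows (the other option values are unchanged):

    ./kuma tools migrate --mongo="mongodb://localhost:27017" --core-url="https://kuma.example.com:7210" --core-dir="/opt/kaspersky/kuma/core/00000000-0000-0000-0000-000000000000" --cluster-id="0a123456-789a-0123-ab12-a3fab45ab6a7" --batch=150000 2>&1 | tee /tmp/migrate.log | grep -v "debug"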

    You can also speed up the migration if you want to migrate only some of the alert events, for example, only alert events for the last year. In this case, you can specify the --time-limit option, which corresponds to the Alert retention period, days setting.
    For example, you have alert events for the past year and a half, and you want to migrate alert events for the last year. If you initiate the migration on August 18, 2025, and the Alert retention period, days is set to 365, then set the --time-limit option to August 19, 2024 as follows: --time-limit="Mon, 19 Aug 2024 00:01:00 UTC".
    Only alert events for the last 365 days are migrated. This speeds up migration and avoids moving unwanted events.
    If you do not specify the --time-limit option, all alert events are migrated to ClickHouse, but this may take more time, and alert events older than the specified alert retention period will subsequently be deleted from ClickHouse.
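    For reference, the earlier example command with such a --time-limit value added might look as follows:

    ./kuma tools migrate --mongo="mongodb://localhost:27017" --core-url="https://kuma.example.com:7210" --core-dir="/opt/kaspersky/kuma/core/00000000-0000-0000-0000-000000000000" --cluster-id="0a123456-789a-0123-ab12-a3fab45ab6a7" --batch=10000 --time-limit="Mon, 19 Aug 2024 00:01:00 UTC" 2>&1 | tee /tmp/migrate.log | grep -v "debug"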

  4. After the migration command completes, stop and disable the kuma-mongodb service with the following commands:

    sudo systemctl stop kuma-mongodb.service

    sudo systemctl disable kuma-mongodb.service
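    Optionally, you can verify that the service is stopped and will not start automatically, for example:

    systemctl is-active kuma-mongodb.service

    systemctl is-enabled kuma-mongodb.service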

Alert events are migrated.

After completing the KUMA upgrade and migrating alert events, the storage for new alert events is not automatically specified. The storage cluster you have specified is only for the migration of existing alert events. For new alerts to be filled with events, you must manually specify the Storage in the alert filling settings. Until you specify this setting, new alerts will be generated without events.

Migrating alert events when upgrading an installation with KUMA Core in a Kubernetes cluster

To migrate alert events in an installation with KUMA Core in a Kubernetes cluster:

  1. In the KUMA web interface, in the Resources → Active services section, select the storage to which you want to move the alert events and click Copy ID. You will need the ID of the storage cluster when running the alert event migration command.
  2. Start the MongoDB container in the KUMA Core pod by running the following command on the master controller:

    sudo k0s kubectl apply -f /root/k0s/core-manifest.yaml

    The KUMA Core is restarted.
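    Before starting the migration, you can check that the KUMA Core pod has restarted and its containers are running, for example (the kuma namespace shown here is the one used in the migration command in the next step):

    sudo k0s kubectl get pods -n kuma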

  3. Connect to the master controller of the k0s cluster and start the migration of alert events with the following command, substituting the previously obtained storage cluster ID into the --cluster-id option:

    k0s kubectl exec -it deployment/core-deployment -c core -n kuma -- kuma tools migrate --mongo="mongodb://127.0.0.1:27017" --core-url="https://core:7210" --core-dir="/opt/kaspersky/kuma/core" --cluster-id="ec9e34bc-8c05-4f95-bb37-cfd5fe4b354a" --wd="/opt/kaspersky/kuma/core" --batch=10000 2>&1 | tee /tmp/migrate.log | grep -v "debug"

    Possible settings for alert event migration from MongoDB to ClickHouse

    The complete output of the command is saved to the /tmp/migrate.log file on the host where you run k0s kubectl, not on the volume of the pod.
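    As in the non-high-availability scenario, you can follow the complete log in a separate terminal session on that host while the migration is running, for example:

    tail -f /tmp/migrate.log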

    Migration can take a long time if there are many alerts. If the SSH session is interrupted, the command continues to run on the pod. In this case, you do not need to run the command again. You can watch the execution of the command in the processes on the worker node where the KUMA Core is running:

    sudo watch 'ps -faux | grep "kuma tools migrate"'

    To speed up the migration process, you can use the --batch option and pre-allocate additional disk space and RAM. For example, you can migrate 150,000 alerts in one iteration lasting approximately three minutes with 12 GB of RAM for the migrator and MongoDB and 30 GB of disk space with the --batch option set to 150000. In this example, the values and migration duration are given as a rough estimate. The duration may differ depending on the number of alerts and the number of events in alerts, the available disk space and RAM, and the number of alert events per batch specified in the --batch option. You can adapt these values to your needs.

    You can also speed up the migration if you want to migrate only some of the alert events, for example, only alert events for the last year. In this case, you can specify the --time-limit option, which corresponds to the Alert retention period, days setting.
    For example, you have alert events for the past year and a half, and you want to migrate alert events only for the last year. If you initiate the migration on August 18, 2025, and the Alert retention period, days is set to 365, then set the --time-limit option to August 19, 2024 as follows: --time-limit="Mon, 19 Aug 2024 00:01:00 UTC".
    As a result, only alert events for the last 365 days are migrated. This speeds up migration and avoids moving unwanted events.
    If you do not specify the --time-limit option, all alert events are migrated to ClickHouse, but this may take more time, and alert events older than the specified alert retention period will subsequently be deleted from ClickHouse.
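    For reference, the migration command from the previous step with such a --time-limit value added might look as follows:

    k0s kubectl exec -it deployment/core-deployment -c core -n kuma -- kuma tools migrate --mongo="mongodb://127.0.0.1:27017" --core-url="https://core:7210" --core-dir="/opt/kaspersky/kuma/core" --cluster-id="ec9e34bc-8c05-4f95-bb37-cfd5fe4b354a" --wd="/opt/kaspersky/kuma/core" --batch=10000 --time-limit="Mon, 19 Aug 2024 00:01:00 UTC" 2>&1 | tee /tmp/migrate.log | grep -v "debug"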

  4. After successfully migrating alert events, we recommend running the KUMA Core without the MongoDB container. To do this, run the following command on the master controller:

    sudo k0s kubectl apply -f /root/k0s/core-manifest-no-mongodb.yaml

    The KUMA Core is restarted.
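    You can make sure that the updated deployment has rolled out and the new KUMA Core pod is running, for example (the deployment name and namespace are the ones used in the migration command above):

    sudo k0s kubectl rollout status deployment/core-deployment -n kuma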

Alert events are migrated.

After completing the KUMA upgrade and migrating alert events, the storage for new alert events is not automatically specified. The storage cluster you have specified is only for the migration of existing alert events. For new alerts to be filled with events, you must manually specify the Storage in the alert filling settings. Until you specify this setting, new alerts will be generated without events.
