The KUMA configuration can be modified in the following ways.
To expand an all-in-one installation to a distributed installation:
sudo /opt/kaspersky/kuma/kuma <collector/correlator/storage> --id <service ID copied from the KUMA web interface> --uninstall
Repeat the removal command for each service.
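For example, a sketch of clearing an all-in-one installation down to the Core (the service IDs are placeholders that you copy from the Resources → Active services section):
sudo /opt/kaspersky/kuma/kuma collector --id <collector service ID> --uninstall
sudo /opt/kaspersky/kuma/kuma correlator --id <correlator service ID> --uninstall
sudo /opt/kaspersky/kuma/kuma storage --id <storage service ID> --uninstall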
As a result, only the KUMA Core remains on the initial installation server.
In the distributed.inventory.yml inventory file, specify the initial installation server in the kuma_core group. In this way, the KUMA Core remains on the original server, and you can deploy the other components on other servers. Also specify the servers on which you want to install the other KUMA components.
Example inventory file for expanding an all-in-one installation to a distributed installation
all:
  vars:
    deploy_to_k8s: false
    need_transfer: false
    generate_etc_hosts: false
    deploy_example_services: false
    no_firewall_actions: false
kuma:
  vars:
    ansible_connection: ssh
    ansible_user: root
  children:
    kuma_core:
      hosts:
        kuma-core-1.example.com:
          ip: 0.0.0.0
          mongo_log_archives_number: 14
          mongo_log_frequency_rotation: daily
          mongo_log_file_size: 1G
    kuma_collector:
      hosts:
        kuma-collector-1.example.com:
          ip: 0.0.0.0
    kuma_correlator:
      hosts:
        kuma-correlator-1.example.com:
          ip: 0.0.0.0
    kuma_storage:
      hosts:
        kuma-storage-cluster1-server1.example.com:
          ip: 0.0.0.0
          shard: 1
          replica: 1
          keeper: 0
        kuma-storage-cluster1-server2.example.com:
          ip: 0.0.0.0
          shard: 1
          replica: 2
          keeper: 0
        kuma-storage-cluster1-server3.example.com:
          ip: 0.0.0.0
          shard: 2
          replica: 1
          keeper: 0
        kuma-storage-cluster1-server4.example.com:
          ip: 0.0.0.0
          shard: 2
          replica: 2
          keeper: 0
        kuma-storage-cluster1-server5.example.com:
          ip: 0.0.0.0
          shard: 0
          replica: 0
          keeper: 1
        kuma-storage-cluster1-server6.example.com:
          ip: 0.0.0.0
          shard: 0
          replica: 0
          keeper: 2
        kuma-storage-cluster1-server7.example.com:
          ip: 0.0.0.0
          shard: 0
          replica: 0
          keeper: 3
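Before running the installer, copy the inventory template and fill it in as shown above (a sketch; the distributed.inventory.yml.template file name is assumed to match the template shipped in the installer directory):
cd kuma-ansible-installer
cp distributed.inventory.yml.template distributed.inventory.yml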
sudo ./install.sh distributed.inventory.yml
This command creates files necessary to install the KUMA components (storage, collectors, correlators) on each target machine specified in distributed.inventory.yml.
The expansion of the installation is completed.
The following instructions describe adding one or more servers to an existing infrastructure to then install collectors on these servers to balance the load. You can use these instructions as an example and adapt them according to your needs.
To add servers to a distributed installation:
cd kuma-ansible-installer
cp expand.inventory.yml.template expand.inventory.yml
Example expand.inventory.yml inventory file for adding collector servers
kuma:
  vars:
    ansible_connection: ssh
    ansible_user: root
  children:
    kuma_collector:
      hosts:
        kuma-additional-collector1.example.com:
        kuma-additional-collector2.example.com:
    kuma_correlator:
    kuma_storage:
      hosts:
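The installer connects to the new hosts over SSH as root (per the ansible_connection and ansible_user settings above), so you may want to confirm access before running it, for example:
ssh root@kuma-additional-collector1.example.com hostname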
./expand.sh expand.inventory.yml
This command creates the files needed to create and install the collector on each target machine specified in the expand.inventory.yml inventory file.
To create a resource set for a collector, in the KUMA web interface, under Resources → Collectors, click Add collector and edit the settings. For more details, see Creating a collector.
At the last step of the configuration wizard, after you click Create and save, a resource set for the collector is created and the collector service is automatically created. The command for installing the service on the server is also automatically generated and displayed on the screen. Copy the installation command and proceed to the next step.
sudo /opt/kaspersky/kuma/kuma <collector> --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install
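For example, for the collector on kuma-additional-collector1.example.com the command might look like this (the Core FQDN is taken from the earlier example and 7210 is the default port; the service ID is the one generated for this collector):
sudo /opt/kaspersky/kuma/kuma collector --core https://kuma-core-1.example.com:7210 --id <service ID copied from the KUMA web interface> --install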
The collector service is installed on the target machine. You can check the status of the service in the web interface under Resources → Active services.
Servers are successfully added.
The following instructions describe adding one or more servers to an existing infrastructure to then install correlators on these servers to balance the load. You can use these instructions as an example and adapt them to your requirements.
To add servers to a distributed installation:
cd kuma-ansible-installer
cp expand.inventory.yml.template expand.inventory.yml
Example expand.inventory.yml inventory file for adding correlator servers
kuma:
  vars:
    ansible_connection: ssh
    ansible_user: root
  children:
    kuma_collector:
    kuma_correlator:
      hosts:
        kuma-additional-correlator1.example.com:
        kuma-additional-correlator2.example.com:
    kuma_storage:
      hosts:
./expand.sh expand.inventory.yml
This command creates the files needed to create and install the correlator on each target machine specified in the expand.inventory.yml inventory file.
To create a resource set for a correlator, in the KUMA web interface, under Resources → Correlators, click Add correlator and edit the settings. For more details, see Creating a correlator.
At the last step of the configuration wizard, after you click Create and save, a resource set for the correlator is created and the correlator service is automatically created. The command for installing the service on the server is also automatically generated and displayed on the screen. Copy the installation command and proceed to the next step.
sudo /opt/kaspersky/kuma/kuma <correlator> --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install
The correlator service is installed on the target machine. You can check the status of the service in the web interface under Resources → Active services.
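On the target machine you can also inspect the corresponding systemd unit; a sketch, assuming the installed services use kuma-<component>-<service ID> unit names like the ones shown later in this section:
sudo systemctl list-units "kuma-*"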
Servers are successfully added.
The following instructions describe adding multiple servers to an existing storage cluster. You can use these instructions as an example and adapt them to your requirements.
To add servers to an existing storage cluster:
cd kuma-ansible-installer
cp expand.inventory.yml.template expand.inventory.yml
Example expand.inventory.yml inventory file for adding servers to an existing storage cluster
kuma:
  vars:
    ansible_connection: ssh
    ansible_user: root
  children:
    kuma_collector:
    kuma_correlator:
    kuma_storage:
      hosts:
        kuma-storage-cluster1-server8.example.com:
        kuma-storage-cluster1-server9.example.com:
        kuma-storage-cluster1-server10.example.com:
        kuma-storage-cluster1-server11.example.com:
./expand.sh expand.inventory.yml
This command creates the files needed to create and install the storage on each target machine specified in the expand.inventory.yml inventory file.
Example:
ClickHouse cluster nodes
<existing nodes>
FQDN: kuma-storage-cluster1-server8.example.com
Shard ID: 1
Replica ID: 1
Keeper ID: 0
FQDN: kuma-storage-cluster1-server9.example.com
Shard ID: 1
Replica ID: 2
Keeper ID: 0
FQDN: kuma-storage-cluster1-server10.example.com
Shard ID: 2
Replica ID: 1
Keeper ID: 0
FQDN: kuma-storage-cluster1-server11.example.com
Shard ID: 2
Replica ID: 2
Keeper ID: 0
Now you can create storage services for each ClickHouse cluster node.
To create a service, in the KUMA web interface, under Resources → Active services, click Add service. This opens the Choose a service window; in that window, select the storage you edited at the previous step and click Create service. Do the same for each ClickHouse storage node you are adding.
As a result, the number of created services must be the same as the number of nodes being added to the ClickHouse cluster, for example, four services for four nodes. The created storage services are displayed in the KUMA web interface in the Resources → Active services section.
The service ID is copied to the clipboard; you need it for running the service installation command.
sudo /opt/kaspersky/kuma/kuma <storage> --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install
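For example, on kuma-storage-cluster1-server8.example.com the command might look like this (the Core FQDN and the default port 7210 are taken from the earlier examples; the service ID is the one copied for this particular node):
sudo /opt/kaspersky/kuma/kuma storage --core https://kuma-core-1.example.com:7210 --id <service ID copied from the KUMA web interface> --install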
The storage service is installed on the target machine. You can check the status of the service in the web interface under Resources → Active services.
Servers are successfully added to a storage cluster.
The following instructions describe adding an extra storage cluster to an existing infrastructure. You can use these instructions as an example and adapt them to suit your needs.
To add a storage cluster:
cd kuma-ansible-installer
cp expand.inventory.yml.template expand.inventory.yml
Example expand.inventory.yml inventory file for adding a storage cluster
kuma:
  vars:
    ansible_connection: ssh
    ansible_user: root
  children:
    kuma_collector:
    kuma_correlator:
    kuma_storage:
      hosts:
        kuma-storage-cluster2-server1.example.com:
        kuma-storage-cluster2-server2.example.com:
        kuma-storage-cluster2-server3.example.com:
        kuma-storage-cluster2-server4.example.com:
        kuma-storage-cluster2-server5.example.com:
        kuma-storage-cluster2-server6.example.com:
        kuma-storage-cluster2-server7.example.com:
./expand.sh expand.inventory.yml
This command creates the files needed to create and install the storage on each target machine specified in the expand.inventory.yml inventory file.
The created resource set for the storage is displayed in the Resources → Storages section. Now you can create storage services for each ClickHouse cluster node.
To create a service, in the KUMA web interface, under Resources → Active services, click Add service. This opens the Choose a service window; in that window, select the resource set that you created for the storage at the previous step and click Create service. Do the same for each node of the new ClickHouse cluster.
As a result, the number of created services must be the same as the number of nodes in the ClickHouse cluster, for example, seven services for seven nodes. The created storage services are displayed in the KUMA web interface in the Resources → Active services section. Now you need to install a storage service on each node of the ClickHouse cluster by using the service ID.
The service ID is copied to the clipboard; you will need it for the service installation command.
sudo /opt/kaspersky/kuma/kuma <storage> --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install
The storage service is installed on the target machine. You can check the status of the service in the web interface under Resources → Active services.
The extra storage cluster is successfully added.
To remove a server from a distributed installation:
sudo /opt/kaspersky/kuma/kuma <collector/correlator/storage> --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --uninstall
The service is removed.
Servers are removed from the distributed installation.
To remove one or more storage clusters from a distributed installation:
sudo /opt/kaspersky/kuma/kuma <storage> --id <service ID> --uninstall
Repeat for each server.
The service is removed.
The cluster is removed from the distributed installation.
To migrate the KUMA Core to a new Kubernetes cluster:
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your k0s.inventory.yml inventory file must contain the same hosts that were used when KUMA was upgraded from version 2.1.3 to version 3.0.3 and then to version 3.2, or when a new installation was performed. In the inventory file, set deploy_to_k8s: true, need_transfer: true, and deploy_example_services: false.
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with an inventory file, the installer looks for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host to the newly created Kubernetes cluster.
Resolving the KUMA Core migration error
Migration of the KUMA Core from a host to a new Kubernetes cluster may be aborted due to a timeout at the Deploy Core transfer job step. In this case, the following error message is recorded in the log of core-transfer migration tasks:
cp: can't stat '/mnt/kuma-source/core/.lic': No such file or directory
To prevent this error, before you start migrating the KUMA Core, find the following lines in the core-transfer job template in the installer files:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
and replace them with the following lines:
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. Migrating the KUMA Core from a host to a new Kubernetes cluster will succeed.
If you started migrating the KUMA Core from a host to a new Kubernetes cluster and the migration failed with an error, follow the steps below to fix the error.
To fix the error after attempting to migrate the KUMA Core from a host to a new Kubernetes cluster:
sudo k0s kubectl delete daemonset/ingress -n ingress
sudo k0s kubectl get jobs -n kuma
sudo k0s kubectl delete job core-transfer -n kuma
sudo systemctl start kuma-mongodb
sudo systemctl start kuma-core-00000000-0000-0000-0000-000000000000
sudo systemctl status kuma-core-00000000-0000-0000-0000-000000000000
Other hosts do not need to be running.
Then, in the core-transfer job template in the installer files, find the following lines:
cp /mnt/kuma-source/core/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/.tenantsEPS {{ core_k0s_home }}/ &&
and replace them with the following lines:
cp /mnt/kuma-source/core/{{ core_uid }}/.lic {{ core_k0s_home }}/ &&
cp /mnt/kuma-source/core/{{ core_uid }}/.tenantsEPS {{ core_k0s_home }}/ &&
You can then restart the distributed installation using the prepared k0s.inventory.yml inventory file. The migration of the KUMA Core from a host to a new Kubernetes cluster will succeed.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually recreated with the new Core in the KUMA web interface.
For the collectors, correlators, and storages listed in the inventory file, certificates for communication with the Core inside the cluster are reissued. The Core URL for these components does not change.
On the Core host, the installer keeps the original Core data in directories renamed to /opt/kaspersky/kuma/*.moved. After you have verified that the Core was correctly migrated to the cluster, you can delete these directories.
If you encounter problems with the migration, check the logs for records of the 'core-transfer' migration task in the 'kuma' namespace in the cluster (this task is available for 1 hour after the migration).
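For example, a sketch of viewing those logs (the core-transfer job name and kuma namespace are the ones referenced above):
sudo k0s kubectl logs job/core-transfer -n kuma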
If you need to perform migration again, you must restore the original names of the /opt/kaspersky/kuma/*.moved directories.
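A minimal sketch of restoring the original directory names, assuming the only change was the appended .moved suffix:
for d in /opt/kaspersky/kuma/*.moved; do sudo mv "$d" "${d%.moved}"; done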
If the /etc/hosts file on the Core host contained lines that were not related to addresses in the 127.X.X.X range, the contents of the /etc/hosts file from the Core host are entered into the coredns ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.