The following KUMA configuration changes can be performed.
To expand an all-in-one installation to a distributed installation:
Remove the collector, correlator, and storage services from the initial installation server by using the following command:
sudo /opt/kaspersky/kuma/kuma <collector/correlator/storage> --id <service ID copied from the KUMA web interface> --uninstall
Repeat the removal command for each service.
As a result, only the KUMA Core remains on the initial installation server.
In the inventory file, specify the initial installation server only in the kuma_core group. In this way, the KUMA Core remains on the original server, and you can deploy the other components on other servers. Specify the servers on which you want to install the KUMA components in the inventory file.
Sample inventory file for expanding an all-in-one installation to a distributed installation
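The following sketch only illustrates the general structure of such a file. The server names (kuma-core.example.com, kuma-collector-1.example.com, and so on) are hypothetical, and the authoritative set of variables and groups is the distributed.inventory.yml.template file shipped with the installer; adapt the sketch to your own servers.
all:
  vars:
    ansible_connection: ssh
    ansible_user: root
    deploy_example_services: False
  children:
    kuma:
      children:
        kuma_core:
          hosts:
            # the original all-in-one server; the KUMA Core stays here
            kuma-core.example.com:
        kuma_collector:
          hosts:
            kuma-collector-1.example.com:
        kuma_correlator:
          hosts:
            kuma-correlator-1.example.com:
        kuma_storage:
          hosts:
            kuma-storage-1.example.com:
              shard: 1
              replica: 1
              keeper: 1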
Run the installer with the prepared distributed.inventory.yml inventory file:
sudo ./install.sh distributed.inventory.yml
Running the command causes the files necessary to install the KUMA components (storages, collectors, correlators) to appear on each target machine specified in the distributed.inventory.yml inventory file.
The expansion of the installation is completed.
The following instructions show how to add one or more servers to an existing infrastructure and then install collectors on these servers to balance the load. You can use these instructions as an example and adapt them to your requirements.
To add servers to a distributed installation:
Go to the installer directory and create an inventory file named expand.inventory.yml from the supplied template:
cd kuma-ansible-installer
cp expand.inventory.yml.template expand.inventory.yml
In the expand.inventory.yml inventory file, specify the servers that you want to add.
Sample expand.inventory.yml inventory file for adding collector servers
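As an illustration only, the file might look similar to the following sketch, assuming two hypothetical new servers, kuma-collector-2.example.com and kuma-collector-3.example.com; check the structure against the expand.inventory.yml.template supplied with your installer version.
all:
  vars:
    ansible_connection: ssh
    ansible_user: root
  children:
    kuma:
      children:
        kuma_collector:
          hosts:
            # list only the servers you are adding
            kuma-collector-2.example.com:
            kuma-collector-3.example.com:
        kuma_correlator:
        kuma_storage: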
From the installer directory, run the following command with the prepared expand.inventory.yml inventory file:
PYTHONPATH="$(pwd)/ansible/site-packages:${PYTHONPATH}" python3 ./ansible/bin/ansible-playbook -i expand.inventory.yml expand.inventory.playbook.yml
Running this command creates the files necessary for creating and installing the collector on each target machine specified in the expand.inventory.yml inventory file.
To create a set of resources for a collector, in the KUMA web interface, under Resources → Collectors, click Add collector and edit the settings. For more details, see Creating a collector.
At the last step of the configuration wizard, after you click Create and save, a resource set for the collector is created and the collector service is automatically created. The command for installing the service on the server is also automatically generated and displayed on the screen. Copy the installation command and proceed to the next step.
On the server where you want to install the collector, run the command that you copied at the previous step. The command has the following format:
sudo /opt/kaspersky/kuma/kuma <collector> --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install
The collector service is installed on the target machine. You can check the status of the service in the web interface under Resources → Active services.
Servers are successfully added.
The following instructions show how to add one or more servers to an existing infrastructure and then install correlators on these servers to balance the load. You can use these instructions as an example and adapt them to your requirements.
To add servers to a distributed installation:
Go to the installer directory and create an inventory file named expand.inventory.yml from the supplied template:
cd kuma-ansible-installer
cp expand.inventory.yml.template expand.inventory.yml
In the expand.inventory.yml inventory file, specify the servers that you want to add.
Sample expand.inventory.yml inventory file for adding correlator servers
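As an illustration only, the file might look similar to the following sketch, assuming two hypothetical new servers, kuma-correlator-2.example.com and kuma-correlator-3.example.com; check the structure against the expand.inventory.yml.template supplied with your installer version.
all:
  vars:
    ansible_connection: ssh
    ansible_user: root
  children:
    kuma:
      children:
        kuma_collector:
        kuma_correlator:
          hosts:
            # list only the servers you are adding
            kuma-correlator-2.example.com:
            kuma-correlator-3.example.com:
        kuma_storage: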
From the installer directory, run the following command with the prepared expand.inventory.yml inventory file:
PYTHONPATH="$(pwd)/ansible/site-packages:${PYTHONPATH}" python3 ./ansible/bin/ansible-playbook -i expand.inventory.yml expand.inventory.playbook.yml
Running this command creates the files necessary for creating and installing the correlator on each target machine specified in the expand.inventory.yml inventory file.
To create a resource set for a correlator, in the KUMA web interface, under Resources → Correlators, click Add correlator and edit the settings. For more details, see Creating a correlator.
At the last step of the configuration wizard, after you click Create and save, a resource set for the correlator is created and the correlator service is automatically created. The command for installing the service on the server is also automatically generated and displayed on the screen. Copy the installation command and proceed to the next step.
On the server where you want to install the correlator, run the command that you copied at the previous step. The command has the following format:
sudo /opt/kaspersky/kuma/kuma <correlator> --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install
The correlator service is installed on the target machine. You can check the status of the service in the web interface under Resources → Active services.
Servers are successfully added.
The following instructions show how to add multiple servers to an existing storage cluster. You can use these instructions as an example and adapt them to your requirements.
To add servers to an existing storage cluster:
Go to the installer directory and create an inventory file named expand.inventory.yml from the supplied template:
cd kuma-ansible-installer
cp expand.inventory.yml.template expand.inventory.yml
In the expand.inventory.yml inventory file, specify the servers that you want to add to the existing storage cluster.
Sample expand.inventory.yml inventory file for adding servers to an existing storage cluster
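For illustration, an inventory listing the four new cluster nodes used in the example further below might look like the following sketch; verify the group layout against the expand.inventory.yml.template supplied with your installer version.
all:
  vars:
    ansible_connection: ssh
    ansible_user: root
  children:
    kuma:
      children:
        kuma_collector:
        kuma_correlator:
        kuma_storage:
          hosts:
            # shard, replica, and keeper roles for these nodes are assigned
            # later, in the storage resource settings (see the example below)
            kuma-storage-cluster1server8.example.com:
            kuma-storage-cluster1server9.example.com:
            kuma-storage-cluster1server10.example.com:
            kuma-storage-cluster1server11.example.com: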
From the installer directory, run the following command with the prepared expand.inventory.yml inventory file:
PYTHONPATH="$(pwd)/ansible/site-packages:${PYTHONPATH}" python3 ./ansible/bin/ansible-playbook -i expand.inventory.yml expand.inventory.playbook.yml
Running this command creates the files necessary for creating and installing the storage on each target machine specified in the expand.inventory.yml inventory file.
In the KUMA web interface, under Resources → Storages, open the existing storage for editing and, in the ClickHouse cluster nodes section, add the new nodes, specifying the shard, replica, and keeper roles for each of them.
Example:
ClickHouse cluster nodes
<existing nodes>
FQDN: kuma-storage-cluster1server8.example.com
Shard ID: 1
Replica ID: 1
Keeper ID: 0
FQDN: kuma-storage-cluster1server9.example.com
Shard ID: 1
Replica ID: 2
Keeper ID: 0
FQDN: kuma-storage-cluster1server10.example.com
Shard ID: 2
Replica ID: 1
Keeper ID: 0
FQDN: kuma-storage-cluster1server11.example.com
Shard ID: 2
Replica ID: 2
Keeper ID: 0
Now you can create storage services for each ClickHouse cluster node.
In the KUMA web interface, under Resources → Active services, click Add service. This opens the Choose a service window; in that window, select the storage you edited at the previous step and click Create service. Do the same for each ClickHouse storage node you are adding.
As a result, the number of created services must be the same as the number of nodes added to the ClickHouse cluster, that is, four services for four nodes. The created storage services are displayed in the KUMA web interface in the Resources → Active services section. Now storage services must be installed on each server by using the service ID.
In the KUMA web interface, under Resources → Active services, select the new storage service and click Copy ID. The service ID is copied to the clipboard; you need it for running the service installation command.
sudo /opt/kaspersky/kuma/kuma <storage> --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install
The storage service is installed on the target machine. You can check the status of the service in the web interface under Resources → Active services.
Servers are successfully added to a storage cluster.
The following instructions show how to add an additional storage cluster to existing infrastructure. You can use these instructions as an example and adapt them to your requirements.
To add an additional storage cluster:
Go to the installer directory and create an inventory file named expand.inventory.yml from the supplied template:
cd kuma-ansible-installer
cp expand.inventory.yml.template expand.inventory.yml
In the expand.inventory.yml inventory file, specify the servers of the new storage cluster.
Sample expand.inventory.yml inventory file for adding an additional storage cluster
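As an illustration only, the file might look similar to the following sketch, assuming three hypothetical servers of the new cluster (kuma-storage-cluster2server1.example.com and so on); check the structure against the expand.inventory.yml.template supplied with your installer version.
all:
  vars:
    ansible_connection: ssh
    ansible_user: root
  children:
    kuma:
      children:
        kuma_collector:
        kuma_correlator:
        kuma_storage:
          hosts:
            # list the nodes of the additional storage cluster
            kuma-storage-cluster2server1.example.com:
            kuma-storage-cluster2server2.example.com:
            kuma-storage-cluster2server3.example.com: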
From the installer directory, run the following command with the prepared expand.inventory.yml inventory file:
PYTHONPATH="$(pwd)/ansible/site-packages:${PYTHONPATH}" python3 ./ansible/bin/ansible-playbook -i expand.inventory.yml expand.inventory.playbook.yml
Running this command creates the files necessary for creating and installing the storage on each target machine specified in the expand.inventory.yml inventory file.
In the KUMA web interface, under Resources → Storages, click Add storage and edit the settings, specifying the shard, replica, and keeper roles for each node of the new cluster in the ClickHouse cluster nodes section. For more details, see Creating a storage. The created set of resources for the storage is displayed in the Resources → Storages section. Now you can create storage services for each ClickHouse cluster node.
In the KUMA web interface, under Resources → Active services, click Add service. This opens the Choose a service window; in that window, select the set of resources that you created for the storage at the previous step and click Create service. Do the same for each node of the ClickHouse cluster.
As a result, the number of created services must be the same as the number of nodes in the new ClickHouse cluster, that is, fifty services for fifty nodes. The created storage services are displayed in the KUMA web interface in the Resources → Active services section. Now storage services must be installed on each node of the ClickHouse cluster by using the service ID.
In the KUMA web interface, under Resources → Active services, select the storage service and click Copy ID. The service ID is copied to the clipboard; you need it for running the service installation command.
sudo /opt/kaspersky/kuma/kuma <storage> --core https://<KUMA Core server FQDN>:<port used by KUMA Core for internal communication (port 7210 by default)> --id <service ID copied from the KUMA web interface> --install
The storage service is installed on the target machine. You can check the status of the service in the web interface under Resources → Active services.
An additional storage cluster is successfully added.
To remove a server from a distributed installation:
On the server that you want to remove from the distributed installation, delete the installed service by running the following command:
sudo /opt/kaspersky/kuma/kuma <collector/correlator/storage> --id <service ID copied from the KUMA web interface> --uninstall
The service is removed.
The servers are removed from the distributed installation.
To remove one or more storage clusters from a distributed installation:
sudo /opt/kaspersky/kuma/kuma <storage> --id <service ID> --uninstall
Repeat for each server.
The service is removed.
The cluster is removed from the distributed installation.
Preparing the inventory file
When migrating the KUMA Core to a Kubernetes cluster, it is recommended to use the template file named k0s.inventory.yml.template when creating the inventory file.
The kuma_core, kuma_collector, kuma_correlator, and kuma_storage sections of your inventory file must contain the same hosts that were used when upgrading KUMA from version 2.0.x to version 2.1 or when performing a new installation of the application. In the inventory file, set the deploy_to_k8s, need_transfer, and airgap parameters to true. The deploy_example_services parameter must be set to false.
Example inventory file with 1 dedicated controller and 2 worker nodes
all:
  vars:
    ansible_connection: ssh
    ansible_user: root
    deploy_to_k8s: True
    need_transfer: True
    airgap: True
    deploy_example_services: False
  children:
    kuma:
      children:
        kuma_core:
          hosts:
            kuma.example.com:
              mongo_log_archives_number: 14
              mongo_log_frequency_rotation: daily
              mongo_log_file_size: 1G
        kuma_collector:
          hosts:
            kuma.example.com:
        kuma_correlator:
          hosts:
            kuma.example.com:
        kuma_storage:
          hosts:
            kuma.example.com:
              shard: 1
              replica: 1
              keeper: 1
    kuma_k0s:
      children:
        kuma_control_plane_master:
          hosts:
            kuma2.example.com:
              ansible_host: 10.0.1.10
        kuma_control_plane_master_worker:
        kuma_control_plane:
        kuma_control_plane_worker:
        kuma_worker:
          hosts:
            kuma.example.com:
              ansible_host: 10.0.1.11
              extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true"
            kuma3.example.com:
              ansible_host: 10.0.1.12
              extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true"
Migrating the KUMA Core to a new Kubernetes cluster
When the installer is started with this inventory file, it searches for an installed KUMA Core on all hosts where you plan to deploy worker nodes of the cluster. If a Core is found, it is moved from its host into the newly created Kubernetes cluster.
If the component is not detected on the worker nodes, a clean installation of the KUMA Core is performed in the cluster without migrating resources to it. Existing components must be manually rebuilt with the new Core in the KUMA web interface.
Certificates for collectors, correlators and storages will be re-issued from the inventory file for communication with the Core within the cluster. This does not change the Core URL for components.
On the Core host, the installer renames the existing KUMA Core directories in /opt/kaspersky/kuma by adding the .moved suffix to their names. After you have verified that the Core was correctly migrated to the cluster, these directories can be deleted.
If you encounter problems with the migration, analyze the logs of the core-transfer migration task in the kuma namespace in the cluster (this task is available for 1 hour after the migration).
If you need to perform migration again, you must convert the names of the /opt/kaspersky/kuma/*.moved directories back to their original format.
If an /etc/hosts file with lines not related to addresses in the range 127.X.X.X was used on the Core host, the contents of that /etc/hosts file are entered into the CoreDNS ConfigMap when the Core is migrated to the Kubernetes cluster. If the Core is not migrated, the contents of the /etc/hosts file from the host where the primary controller is deployed are entered into the ConfigMap.
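Purely as an illustration of where these entries end up, a CoreDNS ConfigMap with host entries carried over from /etc/hosts might look similar to the following sketch. The ConfigMap name, namespace, and Corefile layout shown here are assumptions based on a typical CoreDNS deployment, and the entry siem-proxy.example.com is hypothetical; the actual contents are generated by the installer.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        # entries carried over from /etc/hosts on the KUMA Core host
        hosts {
            10.0.1.50 siem-proxy.example.com
            fallthrough
        }
        forward . /etc/resolv.conf
        cache 30
    }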