How to deploy a distributed version of Kaspersky Unified Monitoring and Analysis Platform
Applications and versions that this article concerns:
- Kaspersky Unified Monitoring and Analysis Platform 3.0.3
- Kaspersky Unified Monitoring and Analysis Platform 3.0.2
To deploy a distributed version of Kaspersky Unified Monitoring and Analysis Platform (KUMA) 3.0.2 or later, this article uses, as an example, virtual machines with the minimum required resources and the following characteristics:
Title | IP | Role | Server characteristics
---|---|---|---
kuma.some.local | 10.68.76.13 | Core | 
storage01.some.local | 10.68.76.14 | Storage | 
storage02.some.local | 10.68.76.15 | Storage | 
collector01.some.local | 10.68.76.16 | Collector | 
correlator01.some.local | 10.68.76.17 | Correlator | 
Step 1. Install the OS
In this article, we used Astra Linux 1.7.4 with the Voronezh advanced security level for KUMA deployment.
- Perform the recommended disk partitioning:
- Core:
- Storage:
- Collector and Correlator:
Where:
- / is the operating system
- /home is intended for the KUMA distribution package, configuration files and user data
- /opt is the directory where KUMA and its components will be installed
- Select the check boxes next to the required software:
- Base packages
- Ufw firewall
- SSH server
- Select the advanced security level: Voronezh.
Step 2. Set up the network configuration
- Edit the /etc/network/interfaces file using the vi editor and the command:
# vi /etc/network/interfaces
- Add the lines below and specify the server address, network mask and gateway for your organization. Example:
iface eth0 inet static
address 10.68.76.12
netmask 255.255.255.0
gateway 10.68.76.1
- Connect to the server via the SSH protocol after configuring the network.
Step 3. Perform preliminary work
For correct installation, the hostname -f command must return each server's fully qualified name (FQDN) rather than the short default name (in this case, astra).
- Run the following command on each server with one of the roles (Storage, Collector, or Correlator), substituting the server name that matches its role:
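As a sketch, the FQDN for the first Storage server from the example table could be set with hostnamectl (the name must be adjusted per server and role):

```shell
# Set the FQDN on the server; storage01.some.local is this article's
# example name for the first Storage server.
hostnamectl set-hostname storage01.some.local

# Verify that the full name is now returned:
hostname -f
```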
- Use the following command to make sure that the DNS server is specified in the resolv.conf file if the KUMA servers are registered on that DNS server:
# vi /etc/resolv.conf
Otherwise, change the file configuration to the required one. In this example, the server 10.68.138.2 is used:
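As an illustration of the required configuration, the file could be rewritten so it contains the example DNS server:

```shell
# Replace the contents of /etc/resolv.conf so the example DNS server
# 10.68.138.2 is used for name resolution (adjust to your environment).
cat > /etc/resolv.conf <<'EOF'
nameserver 10.68.138.2
EOF
```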
- Add mapping of the server IP address with its FQDN to /etc/hosts on all the KUMA servers if they are not defined on DNS.
Example of the command:
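A possible form of the command, using the example addresses and names from the table above (adjust them to your environment):

```shell
# Append IP-to-FQDN mappings for all KUMA servers to /etc/hosts;
# run on every server that is not covered by DNS.
cat >> /etc/hosts <<'EOF'
10.68.76.13 kuma.some.local
10.68.76.14 storage01.some.local
10.68.76.15 storage02.some.local
10.68.76.16 collector01.some.local
10.68.76.17 correlator01.some.local
EOF
```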
- Add the necessary packages required for KUMA installation using the command:
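The exact package set is release-specific; as an assumption based on typical KUMA prerequisites, it could look like this (check the installation guide for your release):

```shell
# Assumed prerequisite packages; the exact list depends on the KUMA release.
apt install -y curl python3 python3-netaddr
```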
- Install the chrony tool to synchronize the time with an external or internal NTP server using the command:
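For example:

```shell
# Install the chrony time-synchronization service
apt install -y chrony
```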
- Configure chrony using the /etc/chrony.conf file according to the guide from Astra Linux Help Center.
- Wait for the process to complete and check results using the command:
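A common way to check synchronization status (chronyc sources also works):

```shell
# Show the current synchronization state and the selected NTP source
chronyc tracking
```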
If you need to install KUMA under another user account, proceed to item 10 of this step. Then according to step 5 item 4, enter the required account in the distributed.inventory.yml.template configuration file.
- Allow access via the SSH protocol on all KUMA servers using the command:
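A sketch of what such a command could do, based on the effects described below (the article's actual one-liner may differ; PermitRootLogin and the service name are assumptions):

```shell
# Sketch only: permit root login over SSH, restart the service,
# and set a new root password. Verify against your distribution.
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
systemctl restart ssh
passwd root
```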
Once the command is entered, the changes will be implemented to the /etc/ssh/sshd_config configuration file, the SSH service will be restarted, and the password to the root account will be reset.
- Run the following command on the server from which the KUMA installation package is to be run:
Step 4. Generate and distribute an SSH key
To correctly deploy KUMA from the target server where the installation package will be run (in this case, the installation is carried out from the server with the Core role), you need access to the other servers via the SSH protocol. Therefore, you must generate an SSH key and distribute it to all servers, including the Core server.
- Generate a private key using the command:
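For example, a key pair without a passphrase, suitable for non-interactive deployment (the path assumes installation under the root account):

```shell
# Generate an SSH key pair with an empty passphrase
ssh-keygen -f /root/.ssh/id_rsa -N ""
```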
- Distribute the key to all servers, including the one where it has been generated (in this case, it is the target server) by using the commands:
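For example, with ssh-copy-id, repeated once per server:

```shell
# Copy the public key to each server, including the one where it
# was generated; repeat for every address.
ssh-copy-id root@12.34.56.XX
```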
Where 12.34.56.XX is the IP address of your servers with the Storage, Correlator, and Collector roles.
Step 5. Install the distributed KUMA version
- Copy the installation package to the user's home directory (in this case, /home/ka) on the server with the Core role and extract the files from it using the command:
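For example (the archive name depends on your KUMA release; 3.0.3 is used here as an assumed example):

```shell
# Run in the user's home directory where the package was copied
cd /home/ka
tar -xvf kuma-ansible-installer-3.0.3.tar.gz
```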
- Open the kuma-ansible-installer folder that was generated in the home directory.
- In the kuma-ansible-installer folder, create a copy of the distributed.inventory.yml.template configuration file using the command:
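For example:

```shell
# Create a working copy of the inventory template
cp distributed.inventory.yml.template distributed.inventory.yml
```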
- Edit the file.
The final distributed.inventory.yml file will be displayed as follows:
- For the root account:
- For another account:
The settings will be different and they will match those you configured.
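As an illustration only, a root-account inventory for this article's host names might look like the following; the group names and variables are assumptions based on the template shipped with the installer and must be checked against your release:

```shell
# Sketch of distributed.inventory.yml for the root account, using this
# article's example hosts; verify the schema against your template.
cat > distributed.inventory.yml <<'EOF'
all:
  vars:
    ansible_connection: ssh
    ansible_user: root
kuma:
  children:
    kuma_core:
      hosts:
        kuma.some.local:
    kuma_collector:
      hosts:
        collector01.some.local:
    kuma_correlator:
      hosts:
        correlator01.some.local:
    kuma_storage:
      hosts:
        storage01.some.local:
        storage02.some.local:
EOF
```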
- Copy your license to the /home/ka/kuma-ansible-installer/roles/kuma/files directory and make sure to rename it to license.key using the command:
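For example (replace the source path with the actual location and name of your license file):

```shell
# example.key is a placeholder for your actual license file name
cp /home/ka/example.key /home/ka/kuma-ansible-installer/roles/kuma/files/license.key
```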
- Navigate to the previously extracted kuma-ansible-installer directory and run the KUMA installation using the command:
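For example:

```shell
# Start the distributed installation using the prepared inventory file
cd /home/ka/kuma-ansible-installer
sudo ./install.sh distributed.inventory.yml
```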
The average installation time is 5 minutes.
- Open the interface: https://kuma.some.local:7220/
If the KUMA server name is not listed in the DNS of the organization, add the address and name mapping to the hosts file or log in via the IP address: https://10.68.76.13:7220.
The following credentials are used by default:
- Login: admin
- Password: mustB3Ch@ng3d!
The distributed KUMA version will be installed.
Step 6. Create a Storage
- Open KUMA, go to the Resources section and select Storages.
- Click Add.
- Fill in the mandatory fields on the Basic settings tab.
- Fill in the mandatory fields for the node of the ClickHouse cluster.
If you need to create more than one node, click Add node and fill in the mandatory fields.
- Check if the fields are filled in correctly and click Create new.
- Go to the Resources section and select Active services.
- Click Add service.
- Select the created storage and click Create service.
- Open the context menu of the created storage and select Copy ID.
- Go to the server with the Storage role via the SSH protocol and run the command for the first storage node, replacing the Core server address with your own.
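The command follows the usual form of a KUMA service installation command; the Core address, port, and ID below are this article's example values and must be replaced with yours:

```shell
# Install the storage service for the first node; substitute your
# Core server FQDN and the ID copied from the KUMA interface.
sudo /opt/kaspersky/kuma/kuma storage \
  --core https://kuma.some.local:7210 \
  --id 123b4ed5-6e78-9101-a234-ff56789a0a12 \
  --install
```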
Where 123b4ed5-6e78-9101-a234-ff56789a0a12 is the ID of your first storage.
- Repeat items 9 and 10 to configure the second storage node:
Where 123b4ed5-6e78-9101-a234-ff56789a0a12 is the ID of your second storage.
If the services are successfully created, they will have green statuses.
Step 7. Create a Correlator
- Go to the Resources section and select Correlators.
- Click Add.
- In the General section, type the correlator name in the Name field.
- In the Tenant drop-down list, select Main.
- Go to the Routing section and click Add.
- In the Kind drop-down list, select storage.
- In the URL drop-down list, select the first created storage.
- Click Add and select the second created storage in the URL drop-down list.
- Click Save.
- Go to the Setup validation section and click Create and save service.
- Copy the command for correlator installation that was generated in the Setup validation section.
- Add sudo at the beginning of the command and run it on the server with the Correlator role.
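The generated command typically has the following shape; the Core address, ID, and API port here are example or assumed values, so always use the command copied from your own Setup validation section:

```shell
# Shape of the generated correlator installation command (example values)
sudo /opt/kaspersky/kuma/kuma correlator \
  --core https://kuma.some.local:7210 \
  --id f123ee45-6ff7-8ace-91d2-3b4d56d7f8f9 \
  --api.port 7221 \
  --install
```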
Where f123ee45-6ff7-8ace-91d2-3b4d56d7f8f9 is the ID of your correlator.
If the command successfully completes, the correlator service will have a green status.
Step 8. Create a Collector
In this example, we deploy a collector based on one of the existing services, [OOTB] Syslog.
- Go to Resources, select Active services and click Add service.
- Select the check box next to the existing [OOTB] Syslog collector.
A new [OOTB] Syslog service will appear with a red status in the services list.
- Open the collector, go to the Setup validation section and copy the generated command for collector installation.
- Add sudo at the beginning of the copied command and run it on the server with the Collector role.
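As with the correlator, the generated command typically looks like this; the values shown are examples or assumptions, so run the command copied from your own interface:

```shell
# Shape of the generated collector installation command (example values)
sudo /opt/kaspersky/kuma/kuma collector \
  --core https://kuma.some.local:7210 \
  --id 12c3e4f5-6b78-90ca-a1e2-3456f7890d1d \
  --api.port 7223 \
  --install
```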
Where 12c3e4f5-6b78-90ca-a1e2-3456f7890d1d is the ID of your collector.
If the command successfully completes, the [OOTB] Syslog collector service will have a green status.