The inventory file may include the following blocks:

- all
- kuma
- kuma_k0s
For each host, you must specify the FQDN in the <host name>.<domain> format or an IPv4 or IPv6 address. The KUMA Core domain name and its subdomains must not start with a numeral.

Example:

```yaml
hosts:
  hostname.example.com:
    ip: 0.0.0.0
```

or, for an IPv6 address, ip: ::%eth0
all block
This block specifies the variables that are applied to all hosts indicated in the inventory, including the implicit localhost where the installation is started. Variables can be redefined at the level of host groups or even for individual hosts.
Example of redefining variables in the inventory file
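A minimal sketch of such redefinition, using placeholder host names and user accounts: ansible_user is set in the all block, overridden for the kuma group, and overridden again for an individual host.

```yaml
all:
  vars:
    ansible_connection: ssh
    ansible_user: root
  children:
    kuma:
      vars:
        ansible_user: kuma-installer        # group-level override of the value from all
      children:
        kuma_collector:
          hosts:
            collector1.example.com:
              ansible_user: collector-admin # host-level override of the group value
```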
The following table lists the possible variables in the vars section and their descriptions.

List of possible variables in the vars section

| Variable | Description | Possible values |
|---|---|---|
| ansible_connection | Method used to connect to target machines. | ssh to connect to the hosts over SSH; local to establish no connection to remote machines (the installation is performed locally). |
| ansible_user | User name used to connect to target machines and install components. If the root user is blocked on the target machines, use a user name that has the right to establish SSH connections and elevate privileges using su or sudo. | root, or the name of a user account with the required privileges. |
| ansible_become | Indicates the need to elevate the privileges of the user account that is used to install KUMA components. | true, false |
| ansible_become_method | Method for elevating the privileges of the user account that is used to install KUMA components. | su, sudo |
| ansible_ssh_private_key_file | Path to the private key in the /<path>/.ssh/id_rsa format. This variable must be defined if you need to specify a key file that is different from the default key file: ~/.ssh/id_rsa. | |
| deploy_to_k8s | Indicates that KUMA components are deployed in a Kubernetes cluster. | false, true |
| need_transfer | Indicates that KUMA components are moved to a Kubernetes cluster. | false, true |
| no_firewall_actions | Indicates that the installer has completed the firewall configuration steps on the hosts. If this setting is not specified in the template, the installer performs the firewall configuration steps on the hosts. | false, true |
| generate_etc_hosts | Indicates that the machines are registered in the DNS zone of your organization. In this case, the installer automatically adds the IP addresses of the machines from the inventory file to the /etc/hosts files on the machines where KUMA components are installed. The specified IP addresses must be unique. | false, true |
| deploy_example_services | Indicates that predefined services are created during installation. | false, true |
| low_resources | Indicates that KUMA is installed in environments with limited computing resources. By default, this variable is not present in the inventory file template. If necessary, you can add the low_resources variable to the vars section and set it to true. | true |
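For illustration, a filled-in vars section of the all block might look like the following sketch. The user name is a placeholder, and the values shown are examples rather than defaults from the template:

```yaml
all:
  vars:
    ansible_connection: ssh
    ansible_user: admin          # placeholder; a non-root user needs su/sudo rights
    ansible_become: true
    ansible_become_method: sudo
    generate_etc_hosts: false
    deploy_example_services: false
```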
kuma block
This block lists the settings of KUMA components deployed outside of the Kubernetes cluster.
The following sections are available in the block:
- In the vars section, you can specify the variables that are applied to all hosts indicated in the kuma block.
- In the children section, you can list the following groups of component settings (a sketch of the resulting block follows this list):
  - kuma_core — KUMA Core settings. This may contain only one host. In this section, you can specify the following MongoDB database log rotation settings:
    - mongo_log_archives_number is the number of previous logs that you want to keep when rotating the MongoDB database log.
    - mongo_log_file_size is the size of the MongoDB database log, in gigabytes, at which rotation begins. If the MongoDB database log never exceeds the specified size, no rotation occurs.
    - mongo_log_frequency_rotation is the interval for checking the size of the MongoDB database log for rotation purposes. Possible values:
      - hourly means the size of the MongoDB database log is checked every hour.
      - daily means the size of the MongoDB database log is checked every day.
      - weekly means the size of the MongoDB database log is checked every week.
    - The MongoDB database log is stored in the /opt/kaspersky/kuma/mongodb/log directory.
  - kuma_collector — settings of KUMA collectors. Can contain multiple hosts.
  - kuma_correlator — settings of KUMA correlators. Can contain multiple hosts.
  - kuma_storage — settings of KUMA storage nodes. Can contain multiple hosts. In this section, you can specify shard, replica, and keeper IDs using the following settings:
    - shard is the shard ID.
    - replica is the replica ID.
    - keeper is the keeper ID.
    - The specified shard, replica, and keeper IDs are used only if you are deploying demo services during a fresh KUMA installation. In other cases, the shard, replica, and keeper IDs that you specified in the KUMA web interface when creating a resource set for the storage are used.
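The sketch below shows how these groups fit together in the kuma block. Host names are placeholders, and the MongoDB rotation values and shard, replica, and keeper IDs are illustrative only:

```yaml
kuma:
  children:
    kuma_core:
      hosts:
        kuma-core.example.com:
          mongo_log_archives_number: 14       # keep 14 rotated logs
          mongo_log_file_size: 1              # in gigabytes, per the description above
          mongo_log_frequency_rotation: daily # check the log size every day
    kuma_collector:
      hosts:
        kuma-collector1.example.com:
        kuma-collector2.example.com:
    kuma_correlator:
      hosts:
        kuma-correlator.example.com:
    kuma_storage:
      hosts:
        kuma-storage1.example.com:
          shard: 1
          replica: 1
          keeper: 1
        kuma-storage2.example.com:
          shard: 1
          replica: 2
          keeper: 2
```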
kuma_k0s block
This block defines the settings of the Kubernetes cluster that ensures high availability of KUMA. This block is only available in an inventory file that is based on k0s.inventory.yml.template.
For test and demo installations in environments with limited computational resources, you must also set low_resources: true at the all level. In this case, the core volume is reduced to 4 GB and no limits are set for other computing resources.
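In the inventory file, this is a one-line addition at the all level, for example:

```yaml
all:
  vars:
    low_resources: true   # test/demo only: core volume reduced to 4 GB, no limits on other resources
```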
Each host in this block must have its unique FQDN or IP address indicated in the ansible_host parameter, except for the host in the kuma_lb section, which must have its FQDN indicated. Hosts must not be duplicated in groups.
For a demo installation, you may combine a controller with a worker node. Such a configuration does not provide high availability for the Core and is intended only for demonstrating functionality or for testing the software environment.
The minimal configuration to ensure high availability includes 3 dedicated controllers, 2 worker nodes, and 1 load balancer. For industrial operation, it is recommended to use dedicated worker nodes and controllers. If a cluster controller is under workload and the pod with the KUMA Core is hosted on the controller, disabling the controller will result in a complete loss of access to the Core.
The following sections are available in the block:

- In the vars section, you can specify the variables that are applied to all hosts indicated in the kuma_k0s block.
- The children section defines the settings of the Kubernetes cluster that ensures high availability of KUMA.

The table below lists the possible variables in the vars section and their descriptions. A sketch of the kuma_k0s block follows the table.

List of possible variables in the vars section
| Variable group | Description |
|---|---|
| kuma_lb | FQDN of the load balancer. The user must install the balancer on their own. |
| kuma_control_plane_master | A host that acts as a dedicated primary controller for the cluster. kuma_control_plane_master and kuma_control_plane_master_worker are the groups for specifying the primary controller; a host must be assigned to only one of them. |
| kuma_control_plane_master_worker | A host that combines the role of the primary controller and a worker node of the cluster. For each cluster controller that is combined with a worker node, in the inventory file, you must specify extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true". |
| kuma_control_plane | Hosts that act as a dedicated cluster controller. kuma_control_plane and kuma_control_plane_worker are the groups for specifying secondary controllers. |
| kuma_control_plane_worker | Hosts that combine the role of controller and worker node of the cluster. For each cluster controller that is combined with a worker node, in the inventory file, you must specify extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true". |
| kuma_worker | Worker nodes of the cluster. For each cluster controller that is combined with a worker node, in the inventory file, you must specify extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true". |
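The following is a minimal sketch of a kuma_k0s block for the high-availability layout described above (3 dedicated controllers, 2 worker nodes, 1 load balancer). Host names and addresses are placeholders; the kuma_lb host carries only an FQDN, per the note above, and the extra_args value repeats the label string from the table:

```yaml
kuma_k0s:
  children:
    kuma_lb:
      hosts:
        kuma-lb.example.com:        # FQDN only, no ansible_host
    kuma_control_plane_master:
      hosts:
        kuma-cp0.example.com:
          ansible_host: 10.0.0.10
    kuma_control_plane:
      hosts:
        kuma-cp1.example.com:
          ansible_host: 10.0.0.11
        kuma-cp2.example.com:
          ansible_host: 10.0.0.12
    kuma_worker:
      hosts:
        kuma-worker1.example.com:
          ansible_host: 10.0.0.13
          extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true"
        kuma-worker2.example.com:
          ansible_host: 10.0.0.14
          extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true"
```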