KUMA settings in the inventory file

The inventory file may include the following blocks: all, kuma, and kuma_k0s.

For each host, you must specify the FQDN in the <host name>.<domain> format or an IPv4 or IPv6 address.

Example:

hosts:
  hostname.example.com:
    ip: 0.0.0.0

or

    ip: ::%eth0

all block

This block specifies the variables that are applied to all hosts listed in the inventory, including the implicit localhost on which the installation is started. Variables can be redefined at the level of host groups or even for individual hosts.

Example of redefining variables in the inventory file
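A minimal sketch of such a redefinition, assuming purely illustrative host names and account names: the ansible_user variable set in the all block is overridden for one host. The same override can be placed at the level of a host group.

all:
  vars:
    ansible_connection: ssh
    ansible_user: root
  hosts:
    kuma-1.example.com:
    kuma-2.example.com:
      ansible_user: admin    # redefined for this host only; "admin" is an illustrative account name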

The following table lists possible variables in the 'vars' section and their descriptions.

List of possible variables in the vars section

Variable

Description

Possible values

ansible_connection

Method used to connect to target machines.

  • ssh—connection to remote hosts via SSH.
  • local—no connection to remote hosts is established.

ansible_user

User name used to connect to target machines and install components.

If the root user is blocked on the target machines, use a user name that has the right to establish SSH connections and elevate privileges using su or sudo.

ansible_become

Indicates the need to increase the privileges of the user account that is used to install KUMA components.

true if the ansible_user value is not root.

ansible_become_method

A method for increasing the privileges of the user account that is used to install KUMA components.

su or sudo if the ansible_user value is not root.

ansible_ssh_private_key_file

Path to the private key in the format /<path>/.ssh/id_rsa. This variable must be defined if you need to specify a key file that is different from the default key file: ~/.ssh/id_rsa.

 

deploy_to_k8s

Indicates that KUMA components are deployed in a Kubernetes cluster.

  • false: the default value for the single.inventory.yml and distributed.inventory.yml templates.
  • true: the default value for the k0s.inventory.yml template.

need_transfer

Indicates that KUMA components are moved to a Kubernetes cluster.

  • false: the default value for the single.inventory.yml and distributed.inventory.yml templates.
  • true: the default value for the k0s.inventory.yml template.

airgap

Indicates that there is no internet connection.

true: the default value for the k0s.inventory.yml template.

no_firewall_actions

Indicates that the installer must skip the firewall configuration steps on the hosts.

  • true: when the installer is started, the firewall configuration steps on the hosts are not performed.
  • false: the default value in all templates. The installer performs the firewall configuration steps on the hosts.

If this setting is not specified in the template, the installer performs the firewall configuration steps on the hosts.

generate_etc_hosts

Indicates that the machines are registered in the DNS zone of your organization.

In this case, the installer will automatically add the IP addresses of the machines from the inventory file to the /etc/hosts files on the machines where KUMA components are installed. The specified IP addresses must be unique.

  • false.
  • true.

deploy_example_services

Indicates the creation of predefined services during installation.

  • false: no services are needed. The default value for the distributed.inventory.yml and k0s.inventory.yml templates.
  • true: services must be created. The default value for the single.inventory.yml template.

low_resources

Indicates that KUMA is installed in environments with limited computing resources. In this case, the Core can be installed on a host that has 4 GB of free disk space. By default, this variable is not defined.
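To illustrate how the variables above fit together, a vars section for an installation outside a Kubernetes cluster might look roughly like this. The account name is a placeholder; adjust the values to your environment.

all:
  vars:
    ansible_connection: ssh
    ansible_user: kuma             # placeholder non-root account
    ansible_become: true           # required because ansible_user is not root
    ansible_become_method: sudo
    deploy_to_k8s: false
    need_transfer: false
    generate_etc_hosts: false
    deploy_example_services: false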

 

kuma block

This block lists the settings of KUMA components deployed outside of the Kubernetes cluster.

The following sections are available in the block:
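Purely as an illustration of how this block is structured (the group names kuma_core and kuma_storage and all host names below are hypothetical examples, not the list of sections referred to above; check your distributed.inventory.yml template for the actual sections), a fragment could look like this:

kuma:
  vars:
    ansible_connection: ssh
    ansible_user: root
  children:
    kuma_core:                       # illustrative group name
      hosts:
        kuma-core.example.com:
    kuma_storage:                    # illustrative group name
      hosts:
        kuma-storage-1.example.com:
        kuma-storage-2.example.com: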

kuma_k0s block

This block defines the settings of the Kubernetes cluster that ensures high availability of KUMA. This block is only available in an inventory file that is based on k0s.inventory.yml.template.

For each host in this block, you must specify a unique FQDN or IP address in the ansible_host parameter, except for the host in the kuma_lb section, which must be specified by its FQDN. Hosts must not be duplicated across groups.

For a demo installation, you may combine a controller with a worker node. Such a configuration does not provide high availability for the Core and is intended only for demonstrating the functionality or for testing the software environment.

The minimum configuration for high availability must include 3 dedicated controllers, 2 worker nodes, and 1 load balancer. For production operation, it is recommended to use dedicated worker nodes and controllers. If a cluster controller also carries a workload and the pod with the KUMA Core is hosted on that controller, shutting down the controller results in a complete loss of access to the Core.

The following sections are available in the block:

The table below shows a list of possible variables in the vars section and their descriptions.

List of possible variables in the vars section

Variable group

Description

kuma_lb

FQDN of the load balancer.

The user installs the balancer on their own.

If the kuma_managed_lb = true parameter is indicated within the group, the load balancer will be automatically configured during KUMA installation, the necessary network TCP ports will be opened on its host (6443, 8132, 9443, 7209, 7210, 7220, 7222, 7223), and a restart will be performed to apply the changes.

kuma_control_plane_master

A host that acts as a dedicated primary controller for the cluster.

The kuma_control_plane_master and kuma_control_plane_master_worker groups are used to specify the primary controller. A host must be assigned to only one of them.

kuma_control_plane_master_worker

A host that combines the role of the primary controller and a worker node of the cluster. For each cluster controller that is combined with a worker node, in the inventory file, you must specify extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true".

kuma_control_plane

Hosts that act as a dedicated cluster controller.

The kuma_control_plane and kuma_control_plane_worker groups are used to specify secondary controllers.

kuma_control_plane_worker 

Hosts that combine the role of controller and worker node of the cluster. For each cluster controller that is combined with a worker node, in the inventory file, you must specify extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true".

kuma_worker 

Worker nodes of the cluster. For each worker node, in the inventory file, you must specify extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true".
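For illustration, a kuma_k0s block for the minimum high availability configuration described above (3 dedicated controllers, 2 worker nodes, and 1 load balancer) might be sketched as follows. The host names and IP addresses are placeholders, and the exact nesting must be taken from the k0s.inventory.yml template.

kuma_k0s:
  children:
    kuma_lb:
      hosts:
        kuma-lb.example.com:                 # the balancer host is specified by its FQDN
          kuma_managed_lb: true              # let the installer configure the balancer
    kuma_control_plane_master:
      hosts:
        kuma-cp-1.example.com:
          ansible_host: 192.168.0.10
    kuma_control_plane:
      hosts:
        kuma-cp-2.example.com:
          ansible_host: 192.168.0.11
        kuma-cp-3.example.com:
          ansible_host: 192.168.0.12
    kuma_worker:
      hosts:
        kuma-worker-1.example.com:
          ansible_host: 192.168.0.20
          extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true"
        kuma-worker-2.example.com:
          ansible_host: 192.168.0.21
          extra_args: "--labels=kaspersky.com/kuma-core=true,kaspersky.com/kuma-ingress=true,node.longhorn.io/create-default-disk=true"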
