You can make KUMA highly available by deploying the KUMA Core on a Kubernetes cluster and using an external TCP traffic balancer.
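As a rough illustration of the external TCP balancer, the fragment below sketches an nginx stream configuration that forwards one TCP port to the cluster's worker nodes. The host names and the port number are placeholders, and nginx itself is only one possible balancer; take the actual list of ports to balance from the documentation for your KUMA version.

```nginx
# Hypothetical fragment of /etc/nginx/nginx.conf on the balancer machine.
# Host names and port 7220 are placeholders, not values from this document.
stream {
    upstream kuma_core {
        server kuma-worker-1.example.com:7220;
        server kuma-worker-2.example.com:7220;
    }
    server {
        listen 7220;          # accept external TCP traffic
        proxy_pass kuma_core; # balance it across the worker nodes
    }
}
```

A similar upstream/server pair would be repeated for each KUMA Core port that must be reachable through the balancer.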
To create a high availability KUMA installation, use the kuma-ansible-installer-ha-<build number>.tar.gz installer and prepare the k0s.inventory.yml inventory file, specifying the configuration of the cluster in it. For a new installation in a high availability configuration, OOTB resources are always imported. You can also perform an installation with demo services deployed. To do this, specify the deploy_example_services: true setting in the inventory file.
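A minimal sketch of the relevant variable block in k0s.inventory.yml is shown below. Only deploy_example_services comes from the text above; the surrounding structure is modeled on the inventory template shipped in the installer archive and should be checked against your version of the template.

```yaml
# Sketch of the variables section of k0s.inventory.yml (structure assumed
# from the installer template; verify against the file in the archive).
all:
  vars:
    deploy_example_services: true  # deploy demo services during installation
```

The rest of the inventory file (host groups, connection settings) is taken from the template provided with the installer.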
The KUMA Core can be deployed in a Kubernetes cluster in the following ways:
Minimum configuration
There are two possible roles for nodes in Kubernetes: controller and worker.
To perform a high availability installation of KUMA, you will need:
The balancer must not be used as a test machine for running the KUMA installer.
To ensure adequate performance of the KUMA Core in Kubernetes, you must allocate 3 dedicated nodes that have the controller role only. This provides high availability for the Kubernetes cluster and ensures that the workload (KUMA processes and other processes) cannot affect the tasks associated with managing the Kubernetes cluster. If you are using virtualization tools, make sure that the nodes are located on different physical servers and that these physical servers do not act as worker nodes.
For a demo installation of KUMA, you may combine the controller and worker node roles. However, if you later expand the installation to a distributed installation, you must reinstall the entire Kubernetes cluster and allocate 3 dedicated nodes with the controller role and at least 2 nodes with the worker node role. KUMA cannot be upgraded to later versions if any of the nodes combine the controller and worker node roles.
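The distributed layout described above (3 dedicated controller nodes plus at least 2 worker nodes) could be expressed in the inventory roughly as follows. The group names and host names here are assumptions modeled on the installer template, not values from this document; use the group names from the k0s.inventory.yml template in the installer archive.

```yaml
# Hypothetical host-group layout for a distributed installation
# (3 dedicated controllers, 2 workers). Names are placeholders.
kuma_k0s:
  children:
    kuma_control_plane_master:   # first controller; initializes the cluster
      hosts:
        kuma-ctrl-1.example.com:
    kuma_control_plane:          # two more dedicated controller nodes
      hosts:
        kuma-ctrl-2.example.com:
        kuma-ctrl-3.example.com:
    kuma_worker:                 # at least 2 worker nodes carry the workload
      hosts:
        kuma-worker-1.example.com:
        kuma-worker-2.example.com:
```

Note that no host appears in both a controller group and the worker group, which matches the upgrade restriction stated above.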