KUMA Core availability under various scenarios:
If the worker node on which the KUMA Core is deployed fails or becomes unavailable, access to the KUMA web interface is lost. After 6 minutes, Kubernetes initiates migration of the Core pod to an operational node of the cluster. After the deployment is complete, which takes less than one minute, the KUMA web interface becomes available again at URLs that use the FQDN of the load balancer. To determine which host the Core is now running on, run the following command in the terminal of one of the controllers:
k0s kubectl get pod -n kuma -o wide
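As a hedged sketch, the same information can be narrowed down to just the pod names and the nodes they are scheduled on; the exact name of the Core pod depends on the installation, so it has to be identified in the output manually:

```bash
# Sketch: list every pod in the kuma namespace together with the node it runs on.
# The Core pod's exact name is installation-specific; look for it in the first column.
k0s kubectl get pod -n kuma \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.nodeName}{"\n"}{end}'
```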
When the malfunctioning worker node or access to it is restored, the Core pod is not migrated back from its current worker node. A restored node can participate in the replication of the disk volume of the Core service.
If a worker node that hosts a replica of the Core disk volume fails or becomes unavailable, access to the KUMA web interface via URLs that use the FQDN of the load balancer is not lost. The network storage creates a replica of the running Core disk volume on other operational nodes. When KUMA is accessed via a URL with the FQDN of an operational node, there is no disruption.
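To confirm that the Core's storage remains healthy after such a failure, the persistent volumes can be inspected with generic Kubernetes commands; this is only a sketch, since the volume names and the storage provisioner depend on the installation:

```bash
# Sketch: verify that the persistent volumes backing the Core remain Bound
# after a node holding a volume replica has failed. Volume names and the
# storage class are installation-specific.
k0s kubectl get pvc -n kuma
k0s kubectl get pv
```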
If one or more cluster controllers fail but quorum is maintained, worker nodes continue to operate in normal mode and access to KUMA is not disrupted. A failure of cluster controllers extensive enough to break quorum leads to the loss of control over the cluster.
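To see which controllers currently form the quorum, the k0s CLI can be queried on any controller; this is a sketch that assumes a standard k0s-based installation of the KUMA Core cluster:

```bash
# Sketch: check the state of the control plane from a controller node.
# Assumes a standard k0s-based KUMA Core installation.
k0s status              # overall state of this controller
k0s etcd member-list    # etcd members that make up the controller quorum
```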
Correspondence between the number of machines in use and high availability:

| Number of controllers when installing the cluster | Minimum number of controllers required for cluster operation (quorum) | Admissible number of failed controllers |
|---|---|---|
| 1 | 1 | 0 |
| 2 | 2 | 0 |
| 3 | 2 | 1 |
| 4 | 3 | 1 |
| 5 | 3 | 2 |
| 6 | 4 | 2 |
| 7 | 4 | 3 |
| 8 | 5 | 3 |
| 9 | 5 | 4 |
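The values in the table follow the usual majority rule for etcd: with N controllers, at least floor(N/2) + 1 of them must be available, so N minus that quorum may fail. A small sketch that reproduces the table:

```bash
# Sketch: reproduce the table above for 1..9 controllers using the majority (quorum) rule.
for N in $(seq 1 9); do
  QUORUM=$(( N / 2 + 1 ))    # minimum controllers required (quorum)
  FAILED=$(( N - QUORUM ))   # admissible number of failed controllers
  echo "$N controllers: quorum=$QUORUM, tolerated failures=$FAILED"
done
```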
If all cluster controllers fail, the cluster cannot be managed, and its performance is therefore impaired.
If all worker nodes fail, access to the KUMA web interface is lost. If all replicas of the Core disk volume are also lost, the stored data is lost as well.