Configuring the cluster using Pacemaker

You can deploy and configure a two-node high-availability cluster for the user identity service using the Pacemaker utility.

To deploy and configure a high-availability cluster using Pacemaker:

  1. Connect to the primary node.
  2. Install the Pacemaker utility and applications for managing the cluster by running the following command:

    sudo apt install pacemaker pcs astra-resource-agents docker.io -y

  3. Set the password for the hacluster user:
    1. Start changing the user password by running the following command:

      sudo passwd hacluster

      You will be prompted to enter the user password.

    2. Enter a password for the hacluster user and confirm it.
  4. Delete the cluster configuration files by running the following command:

    sudo pcs cluster destroy

  5. Switch to the backup node and complete all the steps described above.
  6. Switch to the primary node to deploy the cluster.
  7. Create a two-node cluster by running the following commands:

    sudo pcs host auth ha-astra-1 addr=<IP address of the primary node> ha-astra-2 addr=<IP address of the backup node> -u hacluster

    sudo pcs cluster setup uawscluster ha-astra-1 ha-astra-2 --force

  8. Start the created cluster on both nodes and disable fencing (STONITH) by running the following commands:

    sudo pcs cluster start --all

    sudo pcs property set stonith-enabled=false

  9. Create the ClusterIP cluster resource with the IP address of the high-availability cluster by running the following command:

    sudo pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=<IP address of the user identity service> cidr_netmask=23 op monitor interval=5s

  10. Create the HA-PGSQL cluster resource that manages the PostgreSQL databases in the cluster and make it promotable by running the following commands:

    sudo pcs resource create HA-PGSQL ocf:heartbeat:pgsql \
    pgctl="/usr/lib/postgresql/15/bin/pg_ctl" \
    psql="/usr/lib/postgresql/15/bin/psql" \
    pgdata="/var/lib/postgresql/15/main" \
    config="/etc/postgresql/15/main/postgresql.conf" \
    rep_mode="sync" node_list="ha-astra-1 ha-astra-2" \
    master_ip="<IP address of the user identity service>" check_wal_receiver="true"

    sudo pcs resource promotable HA-PGSQL promoted-max=1 promoted-node-max=1 clone-max=2 clone-node-max=1 notify=true

  11. Create the main-group group containing the ClusterIP resource, and add colocation and ordering constraints so that the group always runs on the node where the HA-PGSQL resource is promoted, by running the following commands:

    sudo pcs resource group add main-group ClusterIP

    sudo pcs constraint colocation add main-group with Promoted HA-PGSQL-clone

    sudo pcs constraint order promote HA-PGSQL-clone then start main-group symmetrical=false kind=Mandatory

    sudo pcs constraint order demote HA-PGSQL-clone then stop main-group symmetrical=false kind=Optional

  12. Restart the cluster resources:

    sudo pcs resource cleanup

  13. Create resources for the user identity service components and colocate them with the main-group group (you can verify the resulting configuration after completing this procedure, as shown below):
    • For the Collector component, create the uaws-collector resource by running the following commands:

      sudo pcs resource create uaws-collector \
      ocf:heartbeat:docker \
      image="uaws-collector" \
      name="uaws-collector" \
      run_opts="--hostname collector --sysctl net.ipv6.conf.all.disable_ipv6=1 --dns <DNS server> -v /var/lib/uaws/collector:/uaws/ -e AGENT_CONFIG=/uaws/collector_config.yml -e AGENT_KEY_STORE=/uaws/ssl/uaws.p12 -e AGENT_KEY_STORE_PASS=123456 -e AGENT_TRUST_STORE=/uaws/ssl/ca.p12 -e AGENT_TRUST_STORE_PASS=<password>"

      sudo pcs constraint colocation add main-group with Promoted uaws-collector

    • For the Map component, create the uaws-mapapp resource by running the following commands:

      sudo pcs resource create uaws-mapapp \
      ocf:heartbeat:docker \
      image="uaws-mapapp" \
      name="uaws-mapapp" \
      run_opts="--hostname mapapp --sysctl net.ipv6.conf.all.disable_ipv6=1 -v /var/lib/uaws/mapapp:/uaws/ -p 8443:8443 -e USERMAP_CONFIG=/uaws/mapapp_config.yml -e USERMAP_KEY_STORE=/uaws/ssl/uaws.p12 -e USERMAP_KEY_STORE_PASS=123456 -e USERMAP_TRUST_STORE=/uaws/ssl/ca.p12 -e USERMAP_TRUST_STORE_PASS=<password>"

      sudo pcs constraint colocation add main-group with Promoted uaws-mapapp

    • For the GroupApp component, create the uaws-groupapp resource by running the following commands:

      sudo pcs resource create uaws-groupapp ocf:heartbeat:docker \
      image="uaws-groupapp" \
      name="uaws-groupapp" \
      run_opts="--hostname groupapp --sysctl net.ipv6.conf.all.disable_ipv6=1 -v /var/lib/uaws/groupapp:/uaws/ -p 8444:8443 -e GROUPAPP_CONFIG=/uaws/groupapp_config.yml -e GROUPAPP_KEY_STORE=/uaws/ssl/uaws.p12 -e GROUPAPP_KEY_STORE_PASS=123456 -e GROUPAPP_TRUST_STORE=/uaws/ssl/ca.p12 -e GROUPAPP_TRUST_STORE_PASS=<password>"

      sudo pcs constraint colocation add main-group with Promoted uaws-groupapp

  14. Restart the cluster resources:

    sudo pcs resource cleanup
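
After completing these steps, you can check the overall state of the cluster and the configured constraints. A minimal verification sketch using standard pcs commands (the resource and node names match the ones created in this procedure):

    sudo pcs status

    sudo pcs constraint

In the pcs status output, both nodes should be online, the HA-PGSQL clone should have one Promoted instance, and the ClusterIP resource in main-group should be started on that node.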

The two-node cluster is deployed, and the user identity service components are configured for it.

To test switching to the backup node:

  1. Power off the primary node.
  2. Switch to the backup node.
  3. On the command line, check that the PostgreSQL cluster on the backup node has the online status by running the following command:

    pg_lsclusters

In the output, the Status column shows online:

    Ver Cluster Port Status Owner    Data directory              Log file
    15  main    5432 online postgres /var/lib/postgresql/15/main pg_log/postgresql-%a.log

The user identity service component containers are also started on the backup node.
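
To confirm this, you can use standard Docker and iproute2 commands on the backup node. A minimal check (the uaws name filter assumes the container names used in this procedure):

    sudo docker ps --filter "name=uaws"

    ip addr show

The docker ps output should list the uaws-collector, uaws-mapapp, and uaws-groupapp containers, and the ip addr show output should include the IP address of the user identity service.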

The deployment of the user identity service in a high-availability cluster is complete. You can use the user identity functionality in Kaspersky NGFW.
