Kaspersky Anti Targeted Attack (KATA) Platform

Purging hard drives on storage servers

June 27, 2024

ID 275821

If you have a cluster deployed on servers and want to add more hard drives to these servers or replace some of the existing drives and then reinstall the cluster, you must purge the drives previously allocated for the OSD (Object Storage Daemon) on the storage servers before installing components. Otherwise, the application is not guaranteed to work correctly.

To purge the disks allocated for OSD on a live storage server:

  1. Sign in to the management console of the server whose disks you want to purge, over SSH or through the terminal.
  2. Stop the OSD starter service by running sudo systemctl stop kata-osd-starter.service.
  3. Stop the OSD containers by running sudo docker ps --filter name=osd -q | xargs sudo docker stop.
  4. Get a list of OSD disks by running sudo ceph-volume --cluster ceph lvm list | grep devices.
  5. Purge these disks by running sudo ceph-volume lvm zap --destroy /dev/<disk name>.

    You must run this command for each drive obtained at step 4. For example: sudo ceph-volume lvm zap --destroy /dev/sda.

The OSD daemon is removed from the disks.
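
For reference, steps 2-5 can be combined into a single script. The following is a minimal sketch rather than part of the official procedure: it assumes a bash shell and that each devices line in the ceph-volume lvm list output carries the device path in its second column, so awk '{print $2}' extracts it.

    #!/bin/bash
    # Stop the OSD starter service and any running OSD containers.
    sudo systemctl stop kata-osd-starter.service
    sudo docker ps --filter name=osd -q | xargs -r sudo docker stop
    # Purge every disk that ceph-volume reports as an OSD device.
    for dev in $(sudo ceph-volume --cluster ceph lvm list | grep devices | awk '{print $2}'); do
        sudo ceph-volume lvm zap --destroy "$dev"
    done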

If the server is not live, you must delete the information about volume groups from each disk allocated for the OSD.

To delete the information about volume groups from each disk allocated for the OSD on a non-live server:

  1. Start the server with the alternative operating system.
  2. Get the volume group IDs for each disk allocated for the OSD by running the sudo pvs command.

    This command outputs a table where PV is the physical volume (the disk), VG is the ID of the volume group that the volume belongs to, Fmt is the volume metadata format, and PSize is the physical volume size.

  3. Remove the relevant volume groups by running sudo vgremove <volume group ID>.

Information about volume groups on disks allocated for OSD is deleted.
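
As an illustration, the pvs output and the subsequent cleanup might look like the following. The device names and volume group IDs here are hypothetical; the ceph- prefix reflects the naming that ceph-volume typically uses for the volume groups it creates.

    $ sudo pvs
      # Hypothetical output, for illustration only.
      PV         VG                                        Fmt  Attr PSize   PFree
      /dev/sda2  system                                    lvm2 a--  931.00g     0
      /dev/sdb   ceph-0f8811f2-66b4-4b39-a3ab-35e04b19b72f lvm2 a--   <1.82t     0
    $ sudo vgremove ceph-0f8811f2-66b4-4b39-a3ab-35e04b19b72f

If every OSD volume group carries the ceph- prefix, they can also be removed in one pass with sudo pvs --noheadings -o vg_name | grep ceph- | xargs -r -n1 sudo vgremove -y (this assumes GNU xargs and that no other volume groups use that prefix).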
