Check mon status from socket
sudo ceph --admin-daemon /var/run/ceph/ceph-mon.qn-cnfslhc.asok config help
sudo ceph --admin-daemon /var/run/ceph/ceph-mon.qn-cnfslhc.asok config show
sudo ceph --admin-daemon /var/run/ceph/ceph-mon.qn-cnfslhc.asok mon_status
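The same information is available through the shorter daemon syntax (the mon name qn-cnfslhc matches the socket path above; adjust it for your host):
sudo ceph daemon mon.qn-cnfslhc mon_status
sudo ceph daemon mon.qn-cnfslhc quorum_status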
Enable msgr2
Enable the msgr2 protocol (available starting with Nautilus):
ceph mon enable-msgr2
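To verify that msgr2 is active, check that every monitor now advertises both a v2 (port 3300) and a v1 (port 6789) address:
ceph mon dump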
OSD remove
$ ceph osd out osd.<ID>
$ systemctl stop ceph-osd@<ID>
$ ceph osd down osd.<ID>
$ ceph osd purge osd.<ID> --yes-i-really-mean-it
$ ceph auth rm osd.<ID>
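As a worked example, removing a hypothetical osd.12 and verifying that it is gone (osd.12 is only a placeholder):
$ ceph osd out osd.12
$ systemctl stop ceph-osd@12
$ ceph osd purge osd.12 --yes-i-really-mean-it
$ ceph osd tree | grep osd.12   # should print nothing once the OSD is removed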
Disk clean up
Check for leftover ceph device-mapper entries with lsblk
and then remove them with
dmsetup remove <dm device name>
for i in $(cat ~/disks.txt); do id=$(lsblk $i | grep ceph | awk '{print $1}') && dmsetup remove ${id:2}; done
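Here ~/disks.txt is assumed to contain one block device path per line, for example:
/dev/sdbe
/dev/sdbf
The ${id:2} expansion strips the two tree-drawing characters (└─) that lsblk prepends to child device names, leaving the bare device-mapper name that dmsetup expects.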
Zap the disk with gdisk
gdisk /dev/sdbe < input
where input is a file with this content:
x
z
y
y
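The answers drive gdisk's expert menu: x enters expert mode, z zaps the GPT and exits, and the two y lines confirm wiping the GPT and blanking the MBR. An equivalent non-interactive alternative, assuming sgdisk from the same gdisk package is available:
sgdisk --zap-all /dev/sdbe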
Pools
ceph osd pool ls
ceph osd pool delete <pool> <pool> --yes-i-really-really-mean-it
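Pool deletion is refused unless the monitors allow it; a minimal sketch of enabling the flag just for the operation and disabling it again afterwards:
ceph config set mon mon_allow_pool_delete true
ceph osd pool delete <pool> <pool> --yes-i-really-really-mean-it
ceph config set mon mon_allow_pool_delete false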
OSD PG recreate
This is a destructive operation, used to recover a severely degraded cluster. Retrieve the list of inactive PGs in a given <state>:
ceph pg dump_stuck inactive | grep <state>
for i in `ceph pg dump_stuck inactive | grep <state> | awk '{print $1}'`;do ceph osd force-create-pg $i --yes-i-really-mean-it;done
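force-create-pg recreates the PGs as empty, so any data they held is lost. Progress can be checked afterwards with:
ceph -s
ceph pg dump_stuck inactive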
BLUEFS_SPILLOVER BlueFS spillover detected
Sometimes the metadata can spill over to the spinning disks. The way to solve this is to scrub the OSDs:
ceph osd scrub <id>
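The affected OSDs are listed in the cluster health output, and the BlueFS usage counters of a single OSD can be inspected over its admin socket (the spilled amount is typically reported as slow_used_bytes):
ceph health detail | grep -i spillover
sudo ceph daemon osd.<id> perf dump bluefs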
For reference, from the Ceph documentation:
Data Scrubbing: As part of maintaining data consistency and cleanliness, Ceph OSD Daemons can scrub objects within placement groups. That is, Ceph OSD Daemons can compare object metadata in one placement group with its replicas in placement groups stored on other OSDs. Scrubbing (usually performed daily) catches bugs or filesystem errors. Ceph OSD Daemons also perform deeper scrubbing by comparing data in objects bit-for-bit. Deep scrubbing (usually performed weekly) finds bad sectors on a drive that weren’t apparent in a light scrub. See Data Scrubbing for details on configuring scrubbing.