Recovering Ceph quorum

Ceph relies on Paxos to maintain a quorum among monitor services so that they agree on cluster state. In some cases Ceph can lose quorum, such as when hosts are added to and removed from the cluster in quick succession without removing the old hosts from Ceph (see Adding/Removing Hosts).

A telltale sign of quorum loss is that querying cluster health with ceph -s times out and reports monitor faults on every host in the cluster.
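
For example, one quick way to check is from inside the deis-store-admin container on any host. This is a minimal sketch: the container name and the use of docker exec here are assumptions based on a typical Deis install, and the admin container must already be installed and started.

    $ docker exec -it deis-store-admin /bin/bash   # enter the admin container (name assumed)
    $ ceph -s                                      # a healthy cluster reports HEALTH_OK;
                                                   # quorum loss shows timeouts and monitor faults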

Important

Ceph's refusal to do anything once it has lost quorum is a safety precaution to prevent data loss. Recovering from this situation requires knowledge of the state of your cluster, and should only be attempted if data loss would not be catastrophic (for example, when a recent backup is available). When in doubt, consult the Ceph and Deis communities for assistance. Deis recommends regular backups to minimize impact should an issue like this occur. For more information, see Backing Up and Restoring Data.

The instructions below are intentionally vague, as each recovery scenario will be unique. They are intended only to point users in the right direction for recovery.

To recover from Ceph quorum loss:

  1. Suspect quorum loss when ceph -s shows nothing but timeouts and/or monitor faults
  2. From deis-store-admin, query the monitor status through the Ceph admin socket and confirm that enough stale monitor entries exist to prevent Ceph from regaining quorum (see the admin-socket sketch after this list)
  3. Stop the platform with deisctl stop platform so that components stop trying to write data to store (alternatively, manually stopping every component except the router allows application containers to remain up and unaffected)
  4. Clean up stale entries in /deis/store/hosts so that dead monitors are not written out to clients (see the etcdctl sketch after this list)
  5. Update /deis/store/monSetupLock to point to the healthy monitor. This is not strictly necessary, since the value is only used when wiping clean and bootstrapping a fresh cluster with no data, but it is good cleanup
  6. Start the healthy monitor and use the admin socket to get the current state of the cluster.
  7. Given the cluster state as the monitor sees it, use monmaptool to manually remove stale monitor entries from the monmap (e.g. monmaptool --rm mon.<hostname> --clobber /etc/ceph/monmap; see the sketch after this list)
  8. Stop the healthy monitor and use deis-store-admin to inject the prepared monmap into the monitor with ceph-mon -i <hostname> --inject-monmap /etc/ceph/monmap
  9. Start the monitor and ensure it achieves quorum by itself (use ceph -s and/or query mon_status on the admin socket)
  10. Start the other monitors and ensure they connect
  11. Start the OSDs with deisctl start store-daemon
  12. Observe the OSD map with ceph osd dump. For each OSD that no longer exists, follow Removing an OSD, taking care to ensure that data is relocated (watch cluster health with ceph -w) before marking another OSD as out
  13. Once the OSD map reflects the now-healthy OSDs, start the remaining store services in order: deisctl start store-metadata and deisctl start store-gateway
  14. Confirm that the cluster is healthy with the metadata servers added, and then start store-volume with deisctl start store-volume.
  15. Start the remaining services with deisctl start platform
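
For reference, the sequence below sketches steps 2 and 6-8 from inside the deis-store-admin container. The admin socket path, the use of ceph-mon --extract-monmap to obtain the current map, and the exact monitor names are assumptions; substitute your own hostname for <hostname> and verify the paths inside your containers before running anything.

    # Step 2/6: query a monitor's status over its admin socket
    # (the socket path is assumed to be the Ceph default)
    $ ceph --admin-daemon /var/run/ceph/ceph-mon.<hostname>.asok mon_status

    # Step 7: if /etc/ceph/monmap is not already present, one way to obtain the
    # current map is to extract it while the monitor is stopped (assumption):
    $ ceph-mon -i <hostname> --extract-monmap /etc/ceph/monmap
    # Remove each stale monitor entry and verify the result:
    $ monmaptool --rm mon.<hostname> --clobber /etc/ceph/monmap
    $ monmaptool --print /etc/ceph/monmap

    # Step 8: with the monitor stopped, inject the edited map, then start it again
    $ ceph-mon -i <hostname> --inject-monmap /etc/ceph/monmap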
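
For steps 4 and 5, the stale entries live in etcd. A minimal etcdctl sketch from a CoreOS host follows; the per-host key layout under /deis/store/hosts and the value format of monSetupLock are assumptions, so inspect the existing keys before removing or overwriting anything.

    # Inspect the store keys first (key layout is an assumption)
    $ etcdctl ls --recursive /deis/store

    # Step 4: remove entries for hosts that are no longer part of the cluster
    $ etcdctl rm /deis/store/hosts/<dead-host-key>

    # Step 5: point the setup lock at the healthy monitor, mirroring the format
    # of the existing value (optional cleanup)
    $ etcdctl get /deis/store/monSetupLock
    $ etcdctl set /deis/store/monSetupLock <healthy-monitor-value>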