
Currently I am using replication for data placement with three copies, which I believe is the default. How do I change the Ceph config to store 4 copies on different nodes in different chassis? Also, would this change impact anything already stored on Ceph?

Thanks, Kampton

1 Answer


To increase the number of replicas you can set the pool size according to your requirements:

ceph osd pool set <pool-name> size 4
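For example, against the iscsi-pool used later in this answer (a sketch; adjust the pool name and min_size to your setup):

```shell
# check the current replica count
ceph osd pool get iscsi-pool size

# raise the replica count to 4
ceph osd pool set iscsi-pool size 4

# optionally adjust min_size (minimum copies required to serve I/O)
ceph osd pool set iscsi-pool min_size 2
```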

The placement of the copies (chassis) is called the failure domain. It is configured in the CRUSH rule the pool uses. You can check the rule assigned to a given pool:

# get current ruleset for given pool
ceph osd pool get iscsi-pool crush_rule 
crush_rule: replicated_rule

# dump ruleset
ceph osd crush rule dump replicated_rule
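The default replicated_rule usually has host as its failure domain. A sketch of creating a new rule with chassis as the failure domain and assigning it to the pool (the rule name replicated_chassis is an arbitrary choice, and this assumes your CRUSH map already contains chassis buckets):

```shell
# create a replicated rule rooted at 'default' with failure domain 'chassis'
ceph osd crush rule create-replicated replicated_chassis default chassis

# point the pool at the new rule; this triggers data movement
ceph osd pool set iscsi-pool crush_rule replicated_chassis
```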

The docs also describe how to change a CRUSH rule and modify the crushmap. Changing the data placement will cause a remapping of the PGs; depending on your Ceph version, it will remap at most 5% of the misplaced PGs at a time. The remapping process can be controlled with these OSD config settings:

osd_recovery_max_active
osd_max_backfills

Set them to higher values to increase the recovery speed, but set them back to the defaults after you're finished.
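A sketch of doing this at runtime via the config database (the example values are assumptions; tune them to your hardware and watch client latency while recovery runs):

```shell
# note the current values so you can restore them later
ceph config get osd osd_max_backfills
ceph config get osd osd_recovery_max_active

# temporarily speed up backfill/recovery (example values)
ceph config set osd osd_max_backfills 4
ceph config set osd osd_recovery_max_active 8

# when the remapping has finished, drop the overrides
# so the options fall back to their defaults
ceph config rm osd osd_max_backfills
ceph config rm osd osd_recovery_max_active
```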

eblock