I am new to Ceph and have only used it in my homelab, which has 3 nodes with 2 OSDs each. After reading about Nautilus and pg_num autoscaling I enabled it, but that was probably a mistake. Now my cluster has the status below. Does anyone have a tip on how to get past this?
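For context, enabling it was roughly the following (I'm going from memory, and <pool-name> is just a placeholder for each of my pools):

    # enable the autoscaler module in the mgr
    ceph mgr module enable pg_autoscaler
    # turn autoscale mode on per pool
    ceph osd pool set <pool-name> pg_autoscale_mode on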
Ceph status
  cluster:
    id:     b512a8d7-1956-4ef3-aa3e-6f24d08878cf
    health: HEALTH_WARN
            Reduced data availability: 256 pgs inactive

  services:
    mon: 3 daemons, quorum ce01,ce03,ce02 (age 17m)
    mgr: ce02(active, since 48m), standbys: ce03, ce01
    mds: cephfs:1 {0=ce03=up:active} 2 up:standby
    osd: 6 osds: 6 up (since 17m), 6 in (since 5d)

  data:
    pools:   3 pools, 288 pgs
    objects: 24 objects, 4.8 MiB
    usage:   683 GiB used, 16 TiB / 16 TiB avail
    pgs:     88.889% pgs unknown
             256 unknown
             32  active+clean