I have a server with 8 disk bays filled with 3 TB disks. Using 4 mirrored vdevs of 2 disks each, this gives me 12 TB of redundant storage.
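For reference, the pool was created roughly like this (pool name and device paths below are just placeholders, not my actual ones):

    zpool create tank \
        mirror /dev/sda /dev/sdb \
        mirror /dev/sdc /dev/sdd \
        mirror /dev/sde /dev/sdf \
        mirror /dev/sdg /dev/sdh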
Here's the issue: I read somewhere that I needed "x GB of RAM for each TB of deduped data" (paraphrasing). I stupidly took this to mean that if my pool contained mostly data that couldn't be deduplicated, it wouldn't use much RAM. To my dismay, it turns out that by "deduped data" the author meant all of the data stored with dedup enabled, not just the blocks that actually end up deduplicated.
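In hindsight, I should have been watching the actual size of the dedup table (DDT). In case it helps with diagnosis, this is roughly how I've been checking it since (pool name is a placeholder):

    # Print DDT statistics: number of entries plus the
    # per-entry size on disk and in core (RAM).
    zdb -DD tank

Each DDT entry needs on the order of a few hundred bytes in core, so tens of millions of entries can easily consume many GB of RAM.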
The result is that my system recently started locking up, presumably from running out of RAM, and needed to be reset. When I realized my mistake, I figured I could fix it by creating a new dataset with dedup disabled, copying all my data over to it, and then destroying the old dataset. Luckily, I've only filled about 35% of my pool. Before attempting this, I disabled dedup on all my datasets.
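The plan was essentially this (dataset names are placeholders, and the copy step could just as well be a zfs send/receive of a snapshot):

    # make sure nothing new gets deduplicated
    zfs set dedup=off tank/old

    # create the replacement dataset with dedup explicitly off
    zfs create -o dedup=off tank/new

    # copy everything over, preserving permissions and attributes
    rsync -aHAX /tank/old/ /tank/new/

    # once the copy is verified, drop the old dataset
    zfs destroy -r tank/old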
Unfortunately, any time I attempt to delete something from the old dataset, all 16 threads on my system go to 100%, all 24 GB of RAM fills up almost instantly (I can watch this happen in htop), and then the system locks up.
Is there any way I can dig myself out of this hole without destroying my entire pool and starting over?