The same question has been discussed here. Posting the relevant answers:
I think we’ve been meaning to change the 4GB to 8GB, as that’s the bar
most knobs have been configured to meet, and how most folks run it in
production. That’s also what this guidance is meant for. The default
settings of FDB are such that, with 8GB of memory per fdbserver
process, it won’t run out of memory and die in most reasonable uses,
even if you subject it to a high degree of load.
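For reference, 8GiB is also the default per-process memory limit that fdbserver enforces on itself via the --memory option. A minimal sketch of where that lives, assuming the standard packaged foundationdb.conf layout on Linux:

```
# /etc/foundationdb/foundationdb.conf (excerpt)
[fdbserver]
command = /usr/sbin/fdbserver
datadir = /var/lib/foundationdb/data/$ID
logdir = /var/log/foundationdb
# Hard per-process limit; the process exits and is restarted if it exceeds this.
# memory = 8GiB
```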
FDB isn’t meant to be a low-memory-usage database. It’s generally hard
to acquire CPU and disk without also acquiring memory, and if the
memory is available, it’s better to use it than leave it idle.
8GB per core/disk is a reasonable guess at the average ratio in a
server. There are various components that are (by default) happy to
hold 1-2GB of data in memory: notably, transaction logs holding the
most recent N seconds of mutations, and storage servers keeping a ~2GB
page cache.
If you only have 1GB with the ssd storage engine, then because the
page cache defaults to 2GB, you’ll OOM as soon as your total data
volume exceeds 1GB and you’ve read all of it once. The memory storage
engine has its own limit, which you can adjust with --storage_memory.
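To make those two limits concrete, here is a sketch of the corresponding lines in the same conf file, assuming the option names from the packaged foundationdb.conf (cache-memory for the page cache is my recollection of the knob; storage-memory is the conf-file form of --storage_memory), with the default values mentioned above:

```
[fdbserver]
# Disk page cache used by the ssd storage engine (the ~2GB figure above).
# cache-memory = 2GiB
# Data limit for the memory storage engine (conf-file form of --storage_memory).
# storage-memory = 1GiB
```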
Another answer:
I don’t remember all the history here, but it’s possible the 4GB
requirement was meant to represent a rough idea of what you might need
to install the packages on a machine and run some simple workloads
against the resulting cluster. I don’t think the full 8GB is required
for simple cases like that. For production, though, I agree
that the listed requirement should probably be higher.