The downside of caches is that they don't survive crashes and power outages, so file systems try very hard to keep track of what has actually been written to disk, and to schedule writes so that they can get back to a consistent state quickly after a crash.
What this means in practice is that the file system submits a journal entry first, waits for confirmation that it has been written, then submits the data and waits again, and finally submits a second journal entry marking the data as valid. A crash at any point before that last step leaves the file system in its old state: the new data simply doesn't show up.
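A minimal sketch of that ordering, using fsync() to stand in for the "wait for the report" step; real file systems enforce this inside the kernel with block-layer barriers rather than fsync(), and the file names and record formats here are made up:

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void write_and_wait(int fd, const void *buf, size_t len)
{
    /* Submit, then block until the disk reports the write as durable. */
    if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
        perror("write/fsync");
        exit(EXIT_FAILURE);
    }
}

int main(void)
{
    int journal = open("journal.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
    int data    = open("data.bin",    O_WRONLY | O_CREAT, 0644);
    if (journal < 0 || data < 0) { perror("open"); return EXIT_FAILURE; }

    const char payload[] = "new file contents";

    /* 1. Describe the intended update in the journal and wait. */
    write_and_wait(journal, "BEGIN update data.bin\n", 22);

    /* 2. Write the data itself and wait. */
    write_and_wait(data, payload, sizeof payload - 1);

    /* 3. Mark the update valid. A crash before this point means the
     *    update is simply discarded during recovery. */
    write_and_wait(journal, "COMMIT\n", 7);

    return 0;
}
```

The three round trips are the point: each fsync() stalls until the hardware confirms durability, so even tiny updates pay several full disk latencies.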
In most setups, that round-trip latency, not raw throughput, is the bottleneck for small accesses, and the only way out is to make a write into the cache count as a persistent write, i.e. the cache acknowledges the write immediately and then takes over responsibility for delivering the data to the disks.
For that to work, the cache itself needs to be crash proof. Usually this is handled by implementing it in a separate controller, often with a battery backup, so its state survives if the main computer crashes or loses power.
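To illustrate the idea (this is a toy software model, not any particular controller's firmware), here is a sketch where writes are acknowledged as soon as they land in the cache's queue, and a background flusher takes over responsibility for pushing them to disk. In real hardware the queue lives in battery-backed memory on the controller, which is what makes the early acknowledgement safe; all names below are illustrative:

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define QUEUE_SIZE 16
#define ENTRY_SIZE 64

static char queue[QUEUE_SIZE][ENTRY_SIZE];
static int head, tail;                 /* toy queue: no overflow handling */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;

/* Returns as soon as the entry is queued: the write is acknowledged
 * before it reaches stable storage. */
static void cached_write(const char *data)
{
    pthread_mutex_lock(&lock);
    snprintf(queue[tail % QUEUE_SIZE], ENTRY_SIZE, "%s", data);
    tail++;
    pthread_cond_signal(&nonempty);
    pthread_mutex_unlock(&lock);
}

/* Background flusher: once an entry is queued, delivering it to the
 * disks is this thread's responsibility, not the caller's. */
static void *flusher(void *arg)
{
    (void)arg;
    for (;;) {
        char entry[ENTRY_SIZE];
        pthread_mutex_lock(&lock);
        while (head == tail)
            pthread_cond_wait(&nonempty, &lock);
        snprintf(entry, ENTRY_SIZE, "%s", queue[head % QUEUE_SIZE]);
        head++;
        pthread_mutex_unlock(&lock);
        printf("flushed to disk: %s\n", entry); /* stand-in for real I/O */
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, flusher, NULL);
    cached_write("block 1");   /* returns immediately */
    cached_write("block 2");
    sleep(1);                  /* let the flusher drain the queue */
    return 0;
}
```

The caller's latency drops to the cost of a memory copy; the slow disk round trips happen asynchronously, which is exactly the trade a battery-backed write cache makes.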
For reads, caching is safe and already performed by your OS, so there isn't much to be gained here.