The main problem with a large cache is the full GC time. To give you an idea, it might be around one second per GB (this varies from application to application). If you have a 20 GB cache and your application pauses for 20 seconds every so often, is that acceptable?
As a fan of direct and memory-mapped files, I tend to think in terms of when I would *not* bother to put the data off heap and would just use the heap for simplicity. ;) Memory-mapped files have next to no impact on full GC time, regardless of size.
One of the advantages of using a memory-mapped file is that it can be much larger than your physical memory and still perform reasonably well. This leaves the OS to determine which portions should be in memory and which need to be flushed to disk.
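A minimal sketch of what this looks like in Java (the file name and sizes are illustrative; a real cache would layer its own data structure on top of the buffer):

```java
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MappedCacheDemo {
    // Maps `size` bytes of `file`, writes a long at each end, and reads them back.
    static long[] demo(File file, long size) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(file, "rw");
             FileChannel ch = raf.getChannel()) {
            // The mapping lives outside the Java heap, so a full GC never scans it.
            MappedByteBuffer map = ch.map(FileChannel.MapMode.READ_WRITE, 0, size);
            map.putLong(0, 42L);                  // pages are allocated lazily by the OS
            map.putLong((int) (size - 8), 123L);  // touching the end doesn't load the middle
            return new long[]{map.getLong(0), map.getLong((int) (size - 8))};
        }
    }

    public static void main(String[] args) throws Exception {
        File file = File.createTempFile("cache", ".dat"); // illustrative temp file
        file.deleteOnExit();
        long[] r = demo(file, 256L << 20); // 256 MB; the file can exceed the heap size
        System.out.println(r[0] + " " + r[1]);
    }
}
```

On most file systems the file is created sparse, so untouched regions cost no disk space until they are written.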
BTW: Having a faster SSD also helps. ;) The larger drives tend to be faster as well; check the IOPS they can perform.
In this example, I memory-map an 8 TB file on a machine with 16 GB of memory: http://vanillajava.blogspot.com/2011/12/using-memory-mapped-file-for-huge.html
Note: it performs better in the 80 GB file example; 8 TB is likely to be overkill. ;)
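Since a single `FileChannel.map()` call is limited to about 2 GB (`Integer.MAX_VALUE` bytes), a file that large has to be mapped as a series of chunks. A hedged sketch of the idea (the class name and 1 GB chunk size are my own choices, and values straddling a chunk boundary are not handled here):

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.util.ArrayList;
import java.util.List;

public class HugeMappedFile implements AutoCloseable {
    private static final long CHUNK = 1L << 30; // 1 GB per mapping, under the ~2 GB limit
    private final RandomAccessFile raf;
    private final List<MappedByteBuffer> chunks = new ArrayList<>();

    public HugeMappedFile(String path, long size) throws Exception {
        raf = new RandomAccessFile(path, "rw");
        FileChannel ch = raf.getChannel();
        // One MappedByteBuffer per chunk; only virtual address space is reserved up front.
        for (long offset = 0; offset < size; offset += CHUNK) {
            long len = Math.min(CHUNK, size - offset);
            chunks.add(ch.map(FileChannel.MapMode.READ_WRITE, offset, len));
        }
    }

    // Byte-offset addressing across chunks; offsets within 8 bytes of a
    // chunk boundary are not supported in this sketch.
    public void putLong(long offset, long value) {
        chunks.get((int) (offset / CHUNK)).putLong((int) (offset % CHUNK), value);
    }

    public long getLong(long offset) {
        return chunks.get((int) (offset / CHUNK)).getLong((int) (offset % CHUNK));
    }

    @Override
    public void close() throws Exception {
        raf.close(); // the mappings themselves are released when GC'd
    }
}
```

This only reserves virtual address space, which is why it works on a 64-bit JVM even when the file dwarfs physical memory.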