I work for a large internet site where people can share large pieces of text with each other. We store most information in InnoDB databases, but the actual pieces of text are stored as plain text files on disk. These files range from a few KB up to 10 MB each. There are millions of them, and we have set up a sensible folder/file structure so that no single folder ever contains too many files.
The webserver (the DB is on another machine) where these files are stored is a powerful machine with 4x 15k SAS drives in RAID 10 and 24 GB of RAM. We run Nginx as the webserver and Xcache to speed up PHP. This all works perfectly, and the load usually varies between 0.7 and 1.5.
Now, I plan on using Memcached only for storing these text files in RAM, so that they don't have to be read from disk every time someone requests a page that includes one of them. In PHP I currently use file_get_contents() to load a text file into a variable and then display it somewhere on the page.
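Roughly, what I have in mind is something like the sketch below (assuming the PHP Memcached extension and a Memcached instance on 127.0.0.1:11211; the helper name get_text_file(), the key prefix, and the one-hour TTL are just placeholders, not production code):

```php
<?php
// Minimal sketch of the intended caching layer around file_get_contents().
function get_text_file($path)
{
    static $memcached = null;
    if ($memcached === null) {
        $memcached = new Memcached();
        $memcached->addServer('127.0.0.1', 11211);
    }

    // Use a hash of the file path as the cache key to stay within key length limits.
    $key = 'textfile:' . md5($path);

    $contents = $memcached->get($key);
    if ($contents === false) {
        // Cache miss (or error): fall back to reading from disk and cache for an hour.
        // Note: Memcached's default max item size is 1 MB, so the larger files
        // (up to 10 MB) would need the server started with a bigger -I limit.
        $contents = file_get_contents($path);
        if ($contents !== false) {
            $memcached->set($key, $contents, 3600);
        }
    }

    return $contents;
}

// Usage on a page:
// echo get_text_file('/var/www/textfiles/ab/cd/123456.txt');
```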
My question is: do you think implementing Memcached for this particular feature would actually lower the load, or does Linux already have some internal file caching of its own that kicks in when I request the same file via file_get_contents() very often?