12

I want to set up Nginx as my web server, with image files cached in memory (RAM) rather than read from disk. I am serving a small page and want a few images always served from RAM. I don't want to use Varnish (or any other such tool) for this, as I believe Nginx has the capability to cache content in RAM itself. I am not sure how to configure Nginx for this; I did try a few combinations, but they didn't work, and Nginx hits the disk every time it serves the images.

For example, when I ran ApacheBench (ab) to test with the following command:

ab -c 500 -n 1000 http://localhost/banner.jpg

I get the following error:

socket: Too many open files (24)

I guess this means Nginx is trying to open too many files simultaneously from disk and the OS is not allowing the operation. Can anyone suggest a correct configuration?

kasperd
Vijayendra
  • You're almost certainly guessing wrong. Also, where are you getting the error from? – womble Jun 11 '12 at 00:28
  • @womble As I commented below, the problem could be concurrency: there are many threads trying simultaneously before the content is available in memory. Can you please explain why you think I am guessing wrong? – Vijayendra Jun 11 '12 at 18:17

5 Answers

9

If it's static content, it will be cached in memory by default (as long as there is memory left), just not by nginx but by the OS's page cache; all that will be left disk-side is a stat() call.
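A quick way to see the page cache at work (a sketch; the file path is just the one from the question, and `free` output varies by distro):

```shell
# Read the same file twice: the first read may touch the disk,
# the repeat read is served from the OS page cache.
sync
dd if=/var/www/html/banner.jpg of=/dev/null bs=1M 2>/dev/null
dd if=/var/www/html/banner.jpg of=/dev/null bs=1M 2>/dev/null
free -h   # the buff/cache column grows as files get cached
```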

If you want a 100% in-memory solution, you can configure a ramdisk and serve the data from there.
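A minimal sketch of the ramdisk approach (needs root; the mount point and size are assumptions, and only the image name comes from the question):

```shell
# Create a RAM-backed filesystem and copy the images into it.
mkdir -p /var/www/ramdisk
mount -t tmpfs -o size=64m tmpfs /var/www/ramdisk
cp /var/www/html/banner.jpg /var/www/ramdisk/

# Then point nginx at it (fragment for the server block in nginx.conf):
#   location /banner.jpg {
#       root /var/www/ramdisk;
#   }
# Remember that tmpfs is emptied on every reboot, so repopulate it at
# boot time (e.g. an fstab entry for the mount plus a copy step in an
# init script).
```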

c2h5oh
  • Yes, I understand that content is cached in memory by the OS. But that assumes the file is already available in memory. The problem could be concurrency: many threads requesting the file simultaneously before it is available from memory. What do you think? – Vijayendra Jun 11 '12 at 18:10
  • That's only a problem if the number of requests goes from 0 to 500 in an instant. Once the file is read it will stay cached until the memory is needed (the least recently used file is evicted when memory is full and a new file needs to be cached); the odds of that happening in real life are slim to none. If you are sure that's a real risk, use the ramdisk solution instead, and just make sure to restore the data on the ramdisk after each reboot, since a ramdisk is not persistent storage by definition. – c2h5oh Jun 11 '12 at 19:22
  • Ok, I will try ramdisk and see how that works. Thanks for the feedback. – Vijayendra Jun 12 '12 at 17:25
9

Once the server has read a file from disk, it will be cached in RAM (and evicted in favor of a different file if you run out of RAM). The problem is with your account's limit: you can't open that many files (run 'ulimit -a' to check).

If you want to change this limit, read about /etc/security/limits.conf.
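For example (the username and values below are only placeholders, not taken from the question):

```shell
# Show the current per-process open-file limit for this shell:
ulimit -n

# To raise it persistently, add lines like these to
# /etc/security/limits.conf (user and value are examples):
#   www-data  soft  nofile  65536
#   www-data  hard  nofile  65536
# A new login session (or service restart) is needed to pick them up.
```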

Avi Keinan
  • You are right, the open-file limit was a problem, as the limit is 256 on my system. But I still need to figure out how to configure Nginx to read files from memory rather than disk. I don't wish to change my open-file limit; I would rather achieve cache usage for subsequent requests. – Vijayendra Jun 12 '12 at 17:36
  • "I don't wish to change my open file limit" -- you're doing it wrong. Oh, so very, very wrong. – womble Jun 13 '12 at 04:31
  • @womble I understand that the easiest fix is to change the limit. The reason I don't want to change it is to see what best I can do with configuration alone. I think I can improve things by changing some other aspects of caching and configuration. And by the way, can you please give your reasons/suggestions when claiming somebody is wrong? Even in your first comment you did this without giving any reason. This is a place to give professional suggestions; please stop treating it as your Facebook wall. – Vijayendra Jun 13 '12 at 17:13
  • People constantly keep telling me this, but it is just not true. I run a Debian 11 server that has 30 GB of RAM, and it currently uses only 5 GB of that 30. I do all kinds of things to try to make Debian use more of that RAM, but it obviously is not doing so. It's definitely not caching and replacing as soon as memory is fully used, because it isn't. The only proper way to use all the RAM is if I store data directly into RAM using a ramdisk. And even there, most people then say "yeah, but Linux already offers a ramdisk by default, it's called tmpfs", but that too is not true. – Julius Jul 23 '23 at 09:25
7

I know this is really old, but here goes.

  1. Nginx doesn't do memory caching out of the box; you'll want to look at memcached for this, and I would recommend the OpenResty bundle: http://openresty.org/. What you do get out of the box (as was answered above) is the OS page cache.
  2. That error message, I'm almost certain, is from ab, not from nginx; nginx's file-limit errors look like "failed (24: Too many open files)". Remember that in Unix, sockets are files too, so the user you run ab as needs its ulimit raised for that session. Since you said your limit was 256 and you are asking ab to use 500 concurrent connections, you are maxing out that limit.
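For what it's worth, the stock memcached module from point 1 can be wired up roughly like this (a hypothetical fragment; the paths, port, and fallback location are assumptions, and note that nginx only reads from memcached, so your application has to populate the keys itself):

```nginx
location /images/ {
    set $memcached_key "$uri";      # key your app used when storing the image
    memcached_pass 127.0.0.1:11211;
    default_type   image/jpeg;
    error_page 404 = @disk;         # cache miss: fall back to disk
}

location @disk {
    root /var/www/html;
}
```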
Ben Whitaker
  • Ben Whitaker was right; the issue came from ab, which tries to create 500 sockets, which are files on Linux. – bachden Feb 08 '15 at 19:47
2

A file cached in RAM is still a file!

Try using Nginx's Memcached module instead. But still, 1000 concurrent connections is huge; do you think that is really your case?

Maxime
Thomas Decaux
0

By default nginx uses the file system to store its cache.

You can use nginx caching alongside tmpfs (i.e. create a memory-backed file system) in order to use memory as your cache storage.

Check this out for further information:

https://wtit.com/blog/2022/09/27/nginx-caching/
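Sketching that out (needs root; the sizes, paths, and backend port are all assumptions):

```shell
# Mount a tmpfs where nginx keeps its cache files.
mkdir -p /var/cache/nginx
mount -t tmpfs -o size=128m tmpfs /var/cache/nginx

# nginx.conf fragment (http context) using that directory:
#   proxy_cache_path /var/cache/nginx levels=1:2
#                    keys_zone=ramcache:10m max_size=100m;
#   server {
#       location / {
#           proxy_cache ramcache;
#           proxy_pass  http://127.0.0.1:8080;
#       }
#   }
# As with any tmpfs, the cache is lost on reboot; nginx will simply
# rebuild it.
```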