
I am running WordPress on a VPS with 2 cores and 4 GB of RAM.

On the server I installed Ubuntu 20.04 with Nginx, PHP 7.4, and MariaDB.

I recently read an article about FastCGI caching and changed the path where the cache files are saved from the SSD to memory:

fastcgi_cache_path /dev/shm/nginx/ levels=1:2 keys_zone=example.com:6m max_size=2g inactive=60m;
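For context, that directive only defines the cache zone; a server block has to reference it before anything is cached. A minimal sketch of how the zone might be used is below — the `example.com` zone name and size limits come from the question, while the PHP-FPM socket path and the `fastcgi_cache_valid` TTL are illustrative assumptions for PHP 7.4 on Ubuntu 20.04:

```nginx
# Cache files live in /dev/shm (RAM-backed tmpfs), as in the question.
fastcgi_cache_path /dev/shm/nginx/ levels=1:2 keys_zone=example.com:6m
                   max_size=2g inactive=60m;

server {
    listen 80;
    server_name example.com;

    location ~ \.php$ {
        include fastcgi_params;
        # Assumed PHP-FPM socket path for PHP 7.4 on Ubuntu 20.04.
        fastcgi_pass unix:/run/php/php7.4-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

        # Use the RAM-backed cache zone defined above.
        fastcgi_cache example.com;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";
        fastcgi_cache_valid 200 301 60m;   # illustrative TTL
        add_header X-FastCGI-Cache $upstream_cache_status;
    }
}
```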

However, when I check disk usage on the server with df -h, I see the following:

Filesystem      Size  Used Avail Use% Mounted on
udev            1.9G     0  1.9G   0% /dev
tmpfs           394M   27M  367M   7% /run
/dev/vda1        79G   57G   18G  77% /
tmpfs           2.0G  121M  1.9G   7% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/loop1       68M   68M     0 100% /snap/lxd/21835
/dev/loop2       62M   62M     0 100% /snap/core20/1270
/dev/loop4       62M   62M     0 100% /snap/core20/1328
/dev/loop5       44M   44M     0 100% /snap/snapd/14978
tmpfs           394M     0  394M   0% /run/user/0

The size of /dev/shm seems too large to me, though.

Is it normal for 2 GB out of 4 GB of RAM to be allocated to it?

I wonder whether I need to scale this down, or whether it is fine to use it as is.

cheonmu

1 Answer


This is only the maximum amount of memory that the tmpfs is allowed to use, not memory that is actually allocated: tmpfs consumes RAM only for the files it currently holds. The 2 GB ceiling defaults to half of the available RAM.
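You can observe the on-demand behavior directly: the "Used" column in df only grows when files are actually written into the tmpfs. A quick check, plus a remount showing how the ceiling could be lowered if you did want a smaller one (the 1G size is a hypothetical value, not a recommendation):

```shell
# Writing a 16 MiB file into /dev/shm consumes 16 MiB of RAM...
dd if=/dev/zero of=/dev/shm/testfile bs=1M count=16 status=none
df -h /dev/shm          # "Used" has grown by ~16M

# ...and deleting it releases that RAM immediately.
rm /dev/shm/testfile
df -h /dev/shm          # "Used" is back down

# If you still wanted a smaller ceiling (hypothetical 1G cap):
#   sudo mount -o remount,size=1G /dev/shm
# To make it permanent, an /etc/fstab entry such as:
#   tmpfs /dev/shm tmpfs defaults,size=1G 0 0
```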

Simon Richter