
On CentOS (total memory: 510876 kB), running

/etc/init.d/clamd start

fails with [FAILED], and the log in /var/log/clamav shows:

ERROR: daemonize() failed: Cannot allocate memory

Is this a problem that can be solved?

I thought clamd only needed 20-40 MB, and free memory is reported as 273844 kB.

Results of strace:

waitpid(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 1}], 0) = 1658
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
--- SIGCHLD (Child exited) @ 0 (0) ---
waitpid(-1, 0xbff84a2c, WNOHANG)        = -1 ECHILD (No child processes)
sigreturn()                             = ? (mask now [])
rt_sigaction(SIGINT, {SIG_DFL, [], 0}, {0x80810f0, [], 0}, 8) = 0
rt_sigprocmask(SIG_BLOCK, NULL, [], 8)  = 0
read(255, "", 1694)                     = 0
exit_group(1)                           = ?

Results of strace -f:

strace -f -o /tmp/clamd.txt service clamd start

The output is pretty much the same. Am I looking for some kind of error?
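
One thing I can think of is to grep the captured trace for failed allocations; a minimal sketch, assuming the /tmp/clamd.txt path from the command above:

grep -nE 'ENOMEM|Cannot allocate' /tmp/clamd.txt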

– tread
2 Answers


I also ran into the same problem.
I observed that clamd started over and over, grew in memory, and then broke down with this error:

Jun  6 08:08:32 <server> clamd[5086]: Received 0 file descriptor(s) from systemd.
Jun  6 08:08:32 <server> clamd[5086]: clamd daemon 0.99.4 (OS: linux-gnu, ARCH: x86_64, CPU: x86_64)
Jun  6 08:08:32 <server> clamd[5086]: Running as user clamupdate (UID 992, GID 990)
Jun  6 08:08:32 <server> clamd[5086]: Log file size limited to 1048576 bytes.
Jun  6 08:08:32 <server> clamd[5086]: Reading databases from /var/lib/clamav
Jun  6 08:08:32 <server> clamd[5086]: Not loading PUA signatures.
Jun  6 08:08:32 <server> clamd[5086]: Bytecode: Security mode set to "TrustSigned".
Jun  6 08:08:46 <server> clamd[5086]: Loaded 6538218 signatures.
Jun  6 08:08:48 <server> clamd[5086]: LOCAL: Unix socket file /var/run/clamd/clamd.sock
Jun  6 08:08:48 <server> clamd[5086]: LOCAL: Setting connection queue length to 4
Jun  6 08:08:48 <server> clamd[5086]: daemonize() failed: Cannot allocate memory
Jun  6 08:08:48 <server> clamd[5086]: Closing the main socket.
Jun  6 08:08:48 <server> clamd[5086]: Socket file removed.

I observed that clamd's memory usage grew up to 532 MB:

# ps -o pid,size,rss,etime,start,cmd -p 16114|more
  PID  SIZE   RSS     ELAPSED  STARTED CMD
16114 580024 545672     00:15 08:18:21 /usr/sbin/clamd -c /etc/clamd.d/clamd.conf
# echo "scale=3; 545672/1024"|bc -l
532.882

I thought 532 MB would be tight, but it should still fit on the small server:

# free -m
              total        used        free      shared  buff/cache   available
Mem:           1834         532         626          89         675        1004
Swap:             0           0           0

clamd has always been known to consume a lot of memory, and it seems to grow larger over the years.
So I wondered what consumes so much memory and analysed the process with strace.
I found that it actually reads all the database files into memory, as its log states (Reading databases from /var/lib/clamav), and builds an in-memory index with 6538218 signatures:

openat(AT_FDCWD, "/var/lib/clamav", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 5
getdents(5, /* 6 entries */, 32768)     = 176
stat("/var/lib/clamav/daily.cld", {st_mode=S_IFREG|0644, st_size=141535744, ...}) = 0
stat("/var/lib/clamav/main.cvd", {st_mode=S_IFREG|0644, st_size=117892267, ...}) = 0
stat("/var/lib/clamav/bytecode.cvd", {st_mode=S_IFREG|0644, st_size=153228, ...}) = 0
getdents(5, /* 0 entries */, 32768)     = 0
close(5)                                = 0
stat("/var/log/clamd/clamd.log", {st_mode=S_IFREG|0600, st_size=266784, ...}) = 0
write(3, "Wed Jun  6 08:08:46 2018 -> Load"..., 55) = 55
sendto(4, "<22>Jun  6 08:08:46 clamd[5086]:"..., 59, MSG_NOSIGNAL, NULL, 0) = 59

After reading all the virus definitions into memory, it finally tries to fork a child process, which requires cloning the 532 MB in-memory index:

clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x7fd70bb64b10) = -1 ENOMEM (Cannot allocate memory)
stat("/var/log/clamd/clamd.log", {st_mode=S_IFREG|0600, st_size=266989, ...}) = 0
write(3, "Wed Jun  6 08:08:48 2018 -> ERRO"..., 78) = 78
write(2, "ERROR: daemonize() failed: Canno"..., 50) = 50
sendto(4, "<19>Jun  6 08:08:48 clamd[5086]:"..., 75, MSG_NOSIGNAL, NULL, 0) = 75

So at the moment of start-up it briefly needs roughly double the amount of memory that its in-memory index occupies.
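
To double-check this on your own system, a rough sketch (assuming the default database path /var/lib/clamav and the process name clamd, both of which appear in the log above) is to compare the on-disk database size with the daemon's resident size; at fork time roughly twice the resident size must be available:

# on-disk size of the signature databases (a lower bound for the in-memory index)
du -ch /var/lib/clamav/*.cvd /var/lib/clamav/*.cld
# resident size of the running daemon in kB; about twice this is needed during the fork
ps -o rss= -C clamd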

To be able to start and run this service, I needed to create at least a swap partition to get through this start-up sequence.
And as others have also commented, increasing system memory likewise overcomes this start-up memory spike.
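
For completeness, a minimal sketch of adding a 1 GB swap file (the size is only an assumption based on the roughly 532 MB spike above; a dedicated swap partition works the same way):

# create and enable a 1 GB swap file (adjust the size to your needs)
dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile
# optional: make it persistent across reboots
echo '/swapfile swap swap defaults 0 0' >> /etc/fstab

After that, free -m shows the swap space and the clone() during daemonize() has room to succeed.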

  • really good debug, +1 – shodanshok Jun 11 '19 at 12:57
  • This is a terrific answer. I was running into this problem testing clamd on an AWS t2.micro instance. Since the fork leads to failure so quickly, I never saw the memory spike in top/vmstat. Per this answer, I added a 1G swapfile to it and clamd successfully started (it got about 500M into swap too!). – vastlysuperiorman Jun 28 '19 at 15:16
  • A little update on consumption numbers: as of 2019-07-03 `clamd` holds **6175326 virus signatures** in memory, which sum up to **828.062 MB RAM** usage. As of now there doesn't seem to be a solution for its **scalability issue**. – Bodo Hugo Barwich Jul 03 '19 at 14:34
  • https://bugzilla.clamav.net/show_bug.cgi?id=11017 The official Thread on the Issue confirms that currently there isn't any improvement in sight. – Bodo Hugo Barwich Jul 03 '19 at 14:59
  • That thread was actually closed with "RESOLVED WONTFIX". – Niki Romagnoli Apr 18 '20 at 17:44
  • @TechNyquist yes, the whole virus-definition architecture needs a complete review towards a more progressive approach, not the "_brute force_" approach it uses now. Virus definitions will grow more and more and never stop growing. See [Brute-force search](https://en.wikipedia.org/wiki/Brute-force_search). The documentation very realistically states that the _brute force_ solution is **not scalable**. – Bodo Hugo Barwich Jul 02 '20 at 14:55

I've experienced the same problem and found that saslauthd was using a lot of memory, as this guy did.

The problem might be a memory leak; a possible fix is described here: https://www.howtoforge.com/community/threads/saslauthd-memory-leak-fix.52750/

I tried the fix, but I can't confirm it yet, as the problem (if it still exists) won't reappear for a few weeks.
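
If you need to find out which process is hogging the memory in the first place, a generic sketch (nothing specific to saslauthd or this setup) is to sort the process list by resident size:

# largest memory consumers first (RSS in kB)
ps -eo pid,rss,comm --sort=-rss | head -n 10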

– hovmand