I have a Django web server running on an AWS EC2 instance with 1 GB of RAM. When a certain request is made to the web server, I need to run an executable using subprocess.call('./executable'). The executable runs a Perl script that does some file I/O and then some computations on the data parsed from the files, nothing too crazy.
I began running into memory allocation issues that crashed my web server, so I experimented with setting hard limits on the virtual memory allocated to each subprocess using ulimit -v some_value. I found that each subprocess needs around 100 MB to run without erroring out, so it's no surprise that I'm hitting memory issues with only 1 GB of RAM.
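For reference, here is how I've been applying the limit from Python rather than through the shell: resource.setrlimit in a preexec_fn caps the child's address space the same way ulimit -v does. This is a minimal sketch (echo stands in for ./executable, and the 100 MB figure is the one I measured):

```python
import resource
import subprocess

def limit_virtual_memory():
    # Runs in the child between fork() and exec(), so the cap applies
    # only to the subprocess, not to the Django process itself.
    # RLIMIT_AS takes bytes; `ulimit -v` takes KiB.
    limit = 100 * 1024 * 1024  # ~100 MB, the threshold I measured
    resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

# echo is a stand-in here for './executable'
status = subprocess.call(["echo", "hello"], preexec_fn=limit_virtual_memory)
print(status)  # 0 on success; the child is killed if it exceeds the cap
```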
I'm wondering, though, why the memory usage is so high. Is a lot of extra memory being allocated because I'm calling subprocess.call from a process that's running a memory-intensive web server? Is running an executable that in turn runs a Perl script inherently memory-intensive because the Perl interpreter carries some overhead? Would it use much less memory if the Perl script were rewritten in Python and run directly inside the Django web server?
Would greatly appreciate any and all help on this one. Thanks!