I need to run applications submitted by users. My code looks like:
import subprocess

def run_app(app_path):
    with open("app.in", "r") as inp, open("app.out", "w") as otp:
        return subprocess.call(app_path, stdin=inp, stdout=otp)
Since I have no control over what users submit, I want to restrict the size of the application's output. Other things, like attempts to access unauthorized system resources and abuse of CPU cycles, are already restricted by AppArmor rule enforcement. The maximum time allowed to run is handled by the parent process (in Python). But a rogue application can still try to flood the server by writing a lot of data to its stdout, knowing that stdout is being saved to a file.
I do not want to use AppArmor's RLIMIT or anything in kernel mode for the stdout/stderr files. It would be great to be able to do this from Python using only the standard library.
I am currently thinking about creating a subclass of file that checks, on each write, how much data has already been written to the stream. Or creating a memory-mapped file with a maximum length set.
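A minimal sketch of the file-subclass idea (the class name and limit are made up for illustration). One caveat I am aware of: subprocess passes the child the raw OS-level file descriptor, so the child's own writes would bypass any Python-level write() override; this only guards writes made from Python itself:

```python
import io

class LimitedWriter(io.FileIO):
    """A file object that raises once more than max_bytes have been written.

    Caveat: this only intercepts Python-level write() calls. A subprocess
    writes directly to the underlying file descriptor and bypasses this.
    """
    def __init__(self, path, max_bytes):
        super().__init__(path, "w")
        self.max_bytes = max_bytes
        self.written = 0

    def write(self, data):
        self.written += len(data)
        if self.written > self.max_bytes:
            raise ValueError("output size limit exceeded")
        return super().write(data)
```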
But I have a feeling there may be a simpler way to restrict the file size that I am not seeing yet.
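For completeness, one pure-stdlib approach I am considering: instead of handing the child a file directly, read its stdout through a pipe in the parent and stop copying once a cap is reached. The cap value and function name below are just placeholders:

```python
import subprocess

MAX_OUTPUT = 1024 * 1024  # hypothetical 1 MiB cap

def run_app_capped(app_path, limit=MAX_OUTPUT):
    """Run app_path, copying at most `limit` bytes of its stdout to app.out.

    Returns the child's exit code, or None if it was killed for
    exceeding the output limit.
    """
    with open("app.in", "rb") as inp, open("app.out", "wb") as otp:
        proc = subprocess.Popen(app_path, stdin=inp, stdout=subprocess.PIPE)
        written = 0
        while True:
            chunk = proc.stdout.read(64 * 1024)
            if not chunk:  # EOF: child closed its stdout
                break
            written += len(chunk)
            if written > limit:
                proc.kill()  # flood detected, stop the child
                proc.wait()
                return None
            otp.write(chunk)
        return proc.wait()
```

This keeps everything in user space and in the standard library, at the cost of the parent doing the copying itself.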