
I have a sandbox program which uses setrlimit() to limit the output file size of another program run under its control, which I run like so:

sandbox -max 2048 /usr/bin/mono --debug myprogram.exe <p1 >r1 2>r2

The "-max 2048" switch tells the sandbox to limit the output to a maximum of 2K bytes.

If an exception occurs inside "myprogram.exe", or if I deliberately write something to stderr from inside "myprogram.exe", it appears in r2 as expected. However, when the file size limit is exceeded, I get this error message:

File size limit exceeded (core dumped)

but instead of being written to the error log r2 as expected, it comes out on the console. Can anyone tell me why this is happening? Is there any way I can arrange for this message to be written to r2 along with everything else?
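For background, the kernel enforces RLIMIT_FSIZE by delivering SIGXFSZ to the writing process; the default action kills it (with a core dump). Below is a minimal sketch, not your sandbox's actual code, showing what happens if the signal is ignored instead: the overlong write() simply fails with EFBIG. The helper name `write_with_fsize_limit` is made up for illustration.

```c
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>
#include <unistd.h>

/* Write `total` bytes to fd with the RLIMIT_FSIZE soft limit set to
 * `limit`.  SIGXFSZ is ignored, so the write that would exceed the
 * limit fails with EFBIG instead of killing the process.  Returns the
 * number of bytes actually written. */
long write_with_fsize_limit(int fd, long total, rlim_t limit)
{
    struct rlimit old, lim;
    getrlimit(RLIMIT_FSIZE, &old);
    lim.rlim_cur = limit;          /* lower only the soft limit */
    lim.rlim_max = old.rlim_max;
    setrlimit(RLIMIT_FSIZE, &lim);
    signal(SIGXFSZ, SIG_IGN);      /* default action would kill us */

    char buf[4096];
    memset(buf, 'x', sizeof buf);
    long written = 0;
    while (written < total) {
        long chunk = total - written;
        if (chunk > (long)sizeof buf)
            chunk = sizeof buf;
        ssize_t n = write(fd, buf, (size_t)chunk);
        if (n <= 0)
            break;                 /* EFBIG once the limit is reached */
        written += n;
    }
    setrlimit(RLIMIT_FSIZE, &old); /* restore the old soft limit */
    return written;
}
```

Writing 4096 bytes with a 1024-byte limit yields a partial write of 1024 bytes, then EFBIG. When the signal is *not* ignored (as in the sandboxed program), the process is killed instead, and nothing about the event ever goes through its stderr.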

user1636349

2 Answers


It looks like an error that would be reported by the shell, not by the program. The shell's own stdout/stderr aren't being redirected anywhere.

hymie
  • Why the shell? The sandbox is where setrlimit is called, so why should the error be reported by a totally separate enclosing process? I have meanwhile tried using parentheses and other devices to run my sandbox in a separate process and redirecting stderr in the enclosing process; no effect. – user1636349 Aug 15 '14 at 10:44
  • I have finally succeeded in grabbing the error by putting the sandbox command in a file x and running ./x >r1 2>r2, but I'm getting confused about which process is which. Case 1: sandbox foo 2>r2: the shell starts a process to run sandbox, with stderr redirected to r2, then sandbox starts a process to run foo. Case 2: the shell starts a process to execute ./x, with stderr redirected to r2, then ./x starts a process to run the sandbox which starts a process to run foo. Using (sandbox foo) 2>r2 doesn't work, neither does bash -c "sandbox foo" 2>r2. – user1636349 Aug 15 '14 at 11:37
  • OK, sorry, the above should say "sandbox _execs_ foo". No subprocess involved. – user1636349 Aug 15 '14 at 11:50
  • When you redirect stdout/stderr, you are affecting output that the program itself creates, in its own code. The program does not itself monitor the file system for a maximum file size; it depends on the operating system to do that. So the program is going to neither notice the problem nor report it. The operating system sees the problem and kills your program with a signal; the shell then notices how your program died and explains why with an error message. – hymie Aug 15 '14 at 19:39
  • Thanks, I marked this as answering the question. I'm still slightly puzzled; if I do (sandbox foo) 2>r2, the command "sandbox foo" should be run by a subshell, and the subshell's stderr should be redirected, which should include the file size error. Instead I see exactly the same behaviour. Ditto using bash -c. Any explanation? – user1636349 Aug 17 '14 at 09:41

Most probably it opens another file descriptor (let's say 3) and duplicates stderr onto it (3>&2); then, even though you redirected 2>r2, descriptor 3 is still attached to the console, so anything printed on 3 goes to the console. You can list all open descriptors under /proc/self/fd/ to see whether this is the case.

  • Thanks, good idea. I tried running /bin/ls inside the sandbox. The contents are 0, 1 and 2 (all of which are links to /dev/pts/1) and 3, which is a link to the parent process proc/$$/fd directory. So no luck there... – user1636349 Aug 15 '14 at 11:22