
If I have a text file (textfile) with lines of text and a Perl script (perlscript) containing

#!/usr/bin/perl
use strict;
use warnings;
my $var;
$var .= $_ while <>;
print $var;

and I run this command in a terminal:

cat ./textfile | ./perlscript | ./perlscript | ./perlscript

If I run the above on a 1 KB text file, then, other than the program stack and other overhead, have I used 4 KB of memory? Or, once I have pulled everything from STDIN, is that memory freed, so that I would only use 1 KB?

To word the question another way: is copying from STDIN into a variable effectively neutral in memory usage, or does it double memory consumption?

hoffmeister
  • You could put a long sleep just before your print statement (or sleeps in the middle of your STDIN input loop) and observe memory usage with a larger file; a sketch of this follows these comments. My *guess* is that you'll see peaks of 2x the file size dropping to 1x the file size. – struthersneil Aug 02 '17 at 22:27
  • Note that Perl data structures take yet more memory. From what I recall, roughly 2-3 times the size of data (or more) and the whole program's footprint is yet greater. There is a related discussion with some more detail in [this post](https://stackoverflow.com/a/43270307/4653379). But that's a fixed factor, not affecting @hobbs's answer. – zdim Aug 02 '17 at 23:56
  • Btw: you need not invoke `cat`, but can do `script1.pl < input.txt | script.pl ...`. This way `input.txt` also goes to `script1.pl`'s `STDIN` – zdim Aug 02 '17 at 23:59
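
For reference, here is a minimal sketch of the change struthersneil's comment describes (my sketch, not his code): the slurping script from the question, with a long sleep just before the print so you can watch the process's memory with ps, top, or similar while it is still holding the data.

#!/usr/bin/perl
use strict;
use warnings;

my $var;
$var .= $_ while <>;    # slurp all of STDIN into $var, as in the question

sleep 300;              # pause so this process's memory can be inspected

print $var;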

2 Answers


More like 2 kB, but a 1 kB file isn't a very good example, as your read buffer is probably bigger than that. Let's make the file 1 GB instead. Then your peak memory usage would probably be around 2 GB plus some overhead.

cat uses negligible memory; it just shuffles its input to its output. The first perl process has to read all of that input and store it in $var, using 1 GB (plus a little bit). Then it starts writing that to the second one, which stores it into its own private $var, also using 1 GB (plus a little bit), so we're up to 2 GB.

When the first perl process finishes writing, it exits, which closes its stdout, causing the second perl process to get EOF on stdin. That is what makes the while(<>) loop terminate and the second perl process start writing. At this point the third perl process starts reading and storing into its own $var, using another 1 GB, but the first one is gone, so we're still in the neighborhood of 2 GB. Then the second perl process ends, the third starts writing to stdout, and finally it exits itself.

hobbs
  • That's great, thanks, but why doesn't `cat` take 1GB of memory when writing its output to STDOUT? – hoffmeister Aug 02 '17 at 22:32
  • @hoffmeister because it doesn't read the whole thing before it starts writing. It just reads a little chunk, writes it, and then forgets about it. – hobbs Aug 02 '17 at 22:34
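
Not part of hobbs's answer, but if you want to verify the per-process peak yourself, one rough approach on Linux is to have each script report its own peak resident set size (the VmHWM line in /proc/self/status) just before it exits. A minimal sketch, assuming a Linux system where that file is available:

#!/usr/bin/perl
use strict;
use warnings;

my $var;
$var .= $_ while <>;    # slurp all of STDIN, as in the question
print $var;

# Report this process's peak resident set size on STDERR so it doesn't
# pollute the pipeline. Linux-specific: relies on /proc/self/status.
if (open my $status, '<', '/proc/self/status') {
    while (my $line = <$status>) {
        print STDERR $line if $line =~ /^VmHWM/;
    }
}

Run through the same three-stage pipeline, each stage should report its own peak (which includes the interpreter's footprint and the Perl storage overhead zdim mentions, not just the raw data), and since at most two stages hold the data at any moment, the combined total should stay around twice that, as described above.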

You've already got a good answer, but I wasn't satisfied with my guess, so I decided to test my assumptions.

I made a simple C++ program called streamstream that just takes STDIN and writes it to STDOUT in 1024-byte chunks. It looks like this:

#include <stdio.h>

int main()
{
    const int BUF_SIZE = 1024;

    unsigned char* buf = new unsigned char[BUF_SIZE];

    // Shuffle STDIN to STDOUT in BUF_SIZE-byte chunks, never holding
    // more than one chunk at a time.
    size_t read = fread(buf, 1, BUF_SIZE, stdin);

    while(read > 0)
    {
        fwrite(buf, 1, read, stdout);
        read = fread(buf, 1, BUF_SIZE, stdin);
    }

    delete[] buf;   // array form to match new[]
}

To test how the program uses memory, I ran it with valgrind while piping the output from one to another as follows:

cat onetwoeightk | valgrind --tool=massif ./streamstream | valgrind --tool=massif ./streamstream | valgrind --tool=massif ./streamstream | hexdump

...where onetwoeightk is just a 128KB file of random bytes. Then I used the ms_print tool on the massif output to aid in interpretation. Obviously there is the overhead of the program itself and its heap, but the heap usage starts at about 80KB and never grows beyond that, because the program sips STDIN just one kilobyte at a time.
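
For reference, massif writes its measurements to a file named massif.out.<pid> by default, and that file is what you hand to ms_print; the pid below is just an example:

ms_print massif.out.12345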

The data is passed from process to process one kilobyte at a time, so the buffer memory in use peaks at about one kilobyte times the number of instances of the program handling the stream, on top of each process's fixed baseline.

Now let's do what your Perl program does: read the whole stream, growing the buffer each time, and only then write it all to STDOUT. Then I'll check the valgrind output again.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
    const int BUF_INCREMENT = 1024;

    unsigned char* inbuf = (unsigned char*)malloc(BUF_INCREMENT);
    unsigned char* buf = NULL;
    unsigned int bufsize = 0;

    size_t read = fread(inbuf, 1, BUF_INCREMENT, stdin);

    while(read > 0)
    {
        // Grow the accumulating buffer and append the chunk just read.
        bufsize += read;
        buf = (unsigned char *)realloc(buf, bufsize);
        memcpy(buf + bufsize - read, inbuf, read);

        read = fread(inbuf, 1, BUF_INCREMENT, stdin);
    }

    // Only after all of STDIN has been consumed do we write anything out.
    fwrite(buf, 1, bufsize, stdout);

    free(inbuf);
    free(buf);
}
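
Presumably this was run the same way as streamstream, with the new binary swapped into the pipeline (slurpstream below is just a placeholder name for the compiled program; the answer doesn't name it):

cat onetwoeightk | valgrind --tool=massif ./slurpstream | valgrind --tool=massif ./slurpstream | valgrind --tool=massif ./slurpstream | hexdump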

Unsurprisingly, memory usage climbs to over 128 kilobytes over the execution of the program.

KB
 137.0^                                                                      :#
      |                                                                     ::#
      |                                                                   ::::#
      |                                                                  :@:::#
      |                                                                :::@:::#
      |                                                              :::::@:::#
      |                                                            :@:::::@:::#
      |                                                          :@:@:::::@:::#
      |                                                         ::@:@:::::@:::#
      |                                                       :@::@:@:::::@:::#
      |                                                     :@:@::@:@:::::@:::#
      |                                                   @::@:@::@:@:::::@:::#
      |                                                 ::@::@:@::@:@:::::@:::#
      |                                               :@::@::@:@::@:@:::::@:::#
      |                                              @:@::@::@:@::@:@:::::@:::#
      |                                            ::@:@::@::@:@::@:@:::::@:::#
      |                                          :@::@:@::@::@:@::@:@:::::@:::#
      |                                        @::@::@:@::@::@:@::@:@:::::@:::#
      |                                      ::@::@::@:@::@::@:@::@:@:::::@:::#
      |                                    ::::@::@::@:@::@::@:@::@:@:::::@:::#
    0 +----------------------------------------------------------------------->ki
      0                                                                   210.9

But the question is: what is the total memory usage of this approach? I can't find a good tool for measuring the memory footprint over time of a set of interacting processes, and ps doesn't seem accurate enough here, even when I insert a bunch of sleeps.

We can work it out, though: the 128KB buffer is only freed at the end of program execution, after the stream has been written. While the stream is being written, the next instance of the program is building its own 128KB buffer, so memory usage climbs to 2x 128KB. It won't rise to 3x or 4x 128KB by chaining more instances of the program, because each instance frees its memory and exits as soon as it is done writing to STDOUT.

struthersneil