
I am loading a large word2vec language model in Python. Each time I run the program, I need to load the model into memory.

I'm running the same program with different command line arguments from a shell script, e.g.

#!/bin/bash
python processor.py -ad
python processor.py -td
python processor.py -ds

Is there anything I can do to keep the language model in memory after the program finishes running, or will I just need to modify the Python code itself to loop through the different arguments after the model is loaded?

Adam_G
  • Once a Python interpreter finishes executing the code, it frees up all the memory and there's no way to get it back. Unless you want to create a separate process which will load your model and then remain running, waiting for input from separate scripts (via datagram sockets, for example), you'll have to modify your `processor.py` to accept multiple arguments and iterate through them, executing each without exiting... – zwer May 26 '17 at 23:10
  • It would probably be simplest to change the python program to loop through the options, using `argparse`. Alternatively it might be possible to `pickle` the language model, i.e. serialise it into a file for later reuse, but that's not always possible or desirable. – cdarke May 27 '17 at 09:18
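Here is a minimal sketch of the looping approach suggested in the comments. `load_model()` and `process()` are placeholders for the real word2vec loading and per-mode work (the gensim call mentioned in the comment is one option), and the argument handling is an assumption for illustration:

import argparse

def load_model():
    # stand-in for the expensive word2vec load,
    # e.g. gensim's KeyedVectors.load_word2vec_format(...)
    return object()

def process(model, mode):
    # placeholder for the real per-mode processing
    print("processing with mode", mode)

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("modes", nargs="+", help="processing modes, e.g. ad td ds")
    args = parser.parse_args()

    model = load_model()        # expensive load happens only once
    for mode in args.modes:     # every mode reuses the already-loaded model
        process(model, mode)

if __name__ == "__main__":
    main()

With something like this, the shell script collapses to a single call such as `python processor.py ad td ds`, and the model is loaded only once.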

1 Answer


Make your Python program take its input from stdin, one line at a time. Then you can do things like this:

cat <<EOF | python processor.py
ad
td
ds
EOF

That's using a feature of Bash called a "here document." You could also launch the Python program from Bash and have it read from a named pipe (for example), so you could have it run in the background while the Bash script continues, and the Bash script could "submit" new requests to it as needed.
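
A minimal sketch of what the stdin-driven `processor.py` might look like; `load_model()` and `process()` are placeholders for the actual word2vec loading and per-mode processing:

import sys

def load_model():
    # stand-in for the expensive word2vec load
    return object()

def process(model, mode):
    # placeholder for the real per-mode processing
    print("processing with mode", mode, flush=True)

def main():
    model = load_model()         # loaded once, before any requests arrive
    for line in sys.stdin:       # one request per line: "ad", "td", "ds", ...
        mode = line.strip()
        if mode:
            process(model, mode)

if __name__ == "__main__":
    main()

Each line written to the program's stdin, whether from the here document above or from a named pipe, becomes one unit of work against the already-loaded model.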

John Zwinck