
I need to call one Python script from another script, and I'm trying to do it with the help of the execfile function. I need to pass a dictionary as an argument to the script being called. Is there any way to do that?

import subprocess
from subprocess import Popen

ret_lst = T_read("LDW_App05")  # read the input data from the xls sheet

lst = []  # the list must be initialised before appending to it
for each in ret_lst:
    lst.append(each.replace(' ', '-'))
    lst.append(' ')

result = Popen(['python', 'LDW_App05.py'] + lst,
               stdin=subprocess.PIPE, stdout=subprocess.PIPE).communicate()
print result

Here, in the above code, I'm reading the input data from the Excel sheet into a list, and I need to pass that list as an argument to the LDW_App05.py file.
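For reference, on the receiving side those list items arrive as positional arguments; a minimal sketch of how LDW_App05.py could pick them up (assuming it just needs the raw list):

# LDW_App05.py (sketch): the items the caller appended to lst
import sys

args = sys.argv[1:]  # everything after the script name
print(args)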

surya
    First explain why you want to do this via execfile. – Daniel Roseman Jul 19 '17 at 08:48
  • Regarding the docs: https://docs.python.org/2/library/functions.html#execfile; there are "globals"/"locals" parameters which should help you (see the sketch just after these comments). – Benjamin Jul 19 '17 at 08:53
  • I am developing a script which reads the input, does some processing, and calls another Python script with the processed result as an argument. I tried os.system, subprocess.Popen and subprocess.call, which didn't work in my case. – surya Jul 19 '17 at 08:56
  • You may want to share your subprocess.popen approach and which kind of data you want to share between the 2 scripts - that makes it easier for someone to help you. – Maurice Meyer Jul 19 '17 at 09:11
  • Provided that you control both scripts, if you want to pass complex/structured data (`dict`, `list`, etc.) why not have your second script pick it up from STDIN? Then your first script can serialize the data any way you please and pass it to the second script's STDIN without the typical command line arguments limits. – zwer Jul 19 '17 at 10:06
  • Sorry, I didn't get it. Can you please give me an example? @zwer – surya Jul 19 '17 at 11:36
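Regarding Benjamin's execfile pointer above, a minimal sketch of what using the globals parameter could look like (Python 2 only, since execfile was removed in Python 3; the `shared` name is an invention for the example):

# caller (Python 2): run LDW_App05.py in a namespace that already holds the dict
shared = {"user": {"first_name": "foo"}}  # hypothetical data to hand over
execfile("LDW_App05.py", {"shared": shared, "__name__": "__main__"})

# inside LDW_App05.py the dict is then simply available as the global `shared`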

1 Answer


Instead of passing complex data as CL arguments, I propose piping your data via STDIN/STDOUT - then you don't need to worry about escaping special, shell-significant characters or exceeding the maximum command line length.

Typically, a CL argument-based script might look something like app.py:

import sys

if __name__ == "__main__":  # ensure the script is run directly
    if len(sys.argv) > 1:  # if at least one CL argument was provided 
        print("ARG_DATA: {}".format(sys.argv[1]))  # print it out...
    else:
        print("usage: python {} ARG_DATA".format(__file__))

It clearly expects an argument and will print it out when called from another script, say caller.py:

import subprocess

out = subprocess.check_output(["python", "app.py", "foo bar"])  # pass foo bar to the app
print(out.rstrip())  # print out the response
# ARG_DATA: foo bar

But what if you want to pass something more complex, say a dict? Since a dict is a hierarchical structure, we'll need a way to present it as a single line. There are a lot of formats that would fit the bill, but let's stick to basic JSON, so your caller.py might look something like this:

import json
import subprocess

data = {  # our complex data
    "user": {
        "first_name": "foo",
        "last_name": "bar",
    }
}
serialized = json.dumps(data)  # serialize it to JSON
out = subprocess.check_output(["python", "app.py", serialized])  # pass the serialized data
print(out.rstrip())  # print out the response
# ARG_DATA: {"user": {"first_name": "foo", "last_name": "bar"}}

Now if you modify your app.py to recognize the fact that it's receiving JSON as an argument, you can deserialize it back to a Python dict to access its structure:

import json
import sys

if __name__ == "__main__":  # ensure the script is run directly
    if len(sys.argv) > 1:
        data = json.loads(sys.argv[1])  # parse the JSON from the first argument
        print("First name: {}".format(data["user"]["first_name"]))
        print("Last name: {}".format(data["user"]["last_name"]))
    else:
        print("usage: python {} JSON".format(__file__))

Then if you run your caller.py again you'll get:

First name: foo
Last name: bar

But this is very tedious, and JSON is not very friendly to the CL (behind the scenes Python does a ton of escaping to make it work), not to mention there is a limit (OS- and shell-dependent) on how big a JSON string can be passed this way. It's much better to use the STDIN/STDOUT buffer to pass complex data between processes. To do so, you'll have to modify app.py to wait for input on its STDIN, and caller.py to send serialized data to it. So app.py can be as simple as:

import json

if __name__ == "__main__":  # ensure the script is run directly
    try:
        arg = raw_input()  # get input from STDIN (Python 2.x)
    except NameError:
        arg = input()  # get input from STDIN (Python 3.x)
    data = json.loads(arg)  # parse the JSON read from STDIN
    print("First name: {}".format(data["user"]["first_name"]))  # print to STDOUT
    print("Last name: {}".format(data["user"]["last_name"]))  # print to STDOUT

and caller.py:

import json
import subprocess

data = {  # our complex data
    "user": {
        "first_name": "foo",
        "last_name": "bar",
    }
}

# start the process and pipe its STDIN and STDOUT to this process handle:
proc = subprocess.Popen(["python", "app.py"], stdin=subprocess.PIPE, stdout=subprocess.PIPE)
serialized = json.dumps(data)  # serialize data to JSON
out, err = proc.communicate(serialized)  # send the serialized data to proc's STDIN (on Python 3, pass bytes or use universal_newlines=True)
print(out.rstrip())  # print what was returned on STDOUT

and if you invoke caller.py you again get:

First name: foo
Last name: bar

But this time there is no limit to the data size you're passing over to your app.py and you don't have to worry if a certain format would be messed up during shell escaping etc. You can also keep the 'channel' open and have both processes communicate with each other in a bi-directional fashion - check this answer for an example.
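To give a rough idea of what keeping the channel open could look like, here is a minimal sketch; the worker.py name and the one-JSON-message-per-line protocol are assumptions for the example, not something prescribed by the linked answer (Python 2; on Python 3 the pipes carry bytes unless opened in text mode):

# worker.py (sketch): one JSON message per line in, one reply line out
import json
import sys

for line in iter(sys.stdin.readline, ""):  # readline avoids read-ahead buffering
    msg = json.loads(line)
    sys.stdout.write("hello, {}\n".format(msg["first_name"]))
    sys.stdout.flush()  # flush so the caller isn't left waiting

and the caller keeps both pipes open for as many exchanges as it needs:

import json
import subprocess

proc = subprocess.Popen(["python", "worker.py"],
                        stdin=subprocess.PIPE, stdout=subprocess.PIPE)

for name in ("foo", "bar"):
    proc.stdin.write(json.dumps({"first_name": name}) + "\n")
    proc.stdin.flush()  # push the line through the pipe right away
    print(proc.stdout.readline().rstrip())  # hello, foo / then hello, bar

proc.stdin.close()  # EOF ends the worker's loop
proc.wait()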

zwer