
I would like to communicate with a (remote) non-interactive shell via its stdin/stdout to run multiple commands and read the outputs. The problem is that if I stuff multiple commands on shell stdin, I am not able to detect the boundaries between outputs of individual commands.

In Python-like pseudo-code:

from subprocess import Popen, PIPE

sh = Popen(['ssh', 'user@remote', '/bin/bash'], stdin=PIPE, stdout=PIPE, text=True)
sh.stdin.write('ls /\n')
sh.stdin.write('ls /usr\n')
sh.stdin.close()
out = sh.stdout.read()

But obviously out contains the outputs of both commands concatenated, and I have no way of reliably splitting them.

So far my best idea is to insert \0 bytes between the outputs:

sh.stdin.write('ls /; echo -ne "\\0"\n')
sh.stdin.write('ls /usr; echo -ne "\\0"\n')

Then I can split out on zero characters.
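A minimal sketch of that idea (assuming the remote shell is bash and the command outputs themselves never contain NUL bytes; printf '\0' is used here, which does the same job as echo -ne "\0"):

from subprocess import Popen, PIPE

sh = Popen(['ssh', 'user@remote', '/bin/bash'], stdin=PIPE, stdout=PIPE, text=True)

for cmd in ['ls /', 'ls /usr']:
    # Terminate each command's output with a NUL sentinel.
    sh.stdin.write(cmd + "; printf '\\0'\n")
sh.stdin.close()

out = sh.stdout.read()
# Drop the empty trailing chunk after the last sentinel.
outputs = out.split('\0')[:-1]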

Other approaches that don't work for me:

  • I don't want to run a separate ssh session per command, as the handshake is too heavyweight.
  • I'd prefer not to force ControlMaster options onto the created shells, so that the end user's ssh_config is respected.
  • I'd prefer not to require users to install specific server-side programs.

Is there a better way of running several commands in one session and getting individual outputs? Is there a widely-deployed shell with some sort of binary output mode?

PS. There is a duplicate question, but it doesn't have a satisfactory answer: Run multiple commands in a single ssh session using popen and save the output in separate files

  • If you can run a script on the remote system like Python, you could always pass the commands to it and have their output encoded in a pickle or JSON. Your approach seems OK to me though, if you can guarantee the output will never contain null characters. It's similar to a multi-part MIME encoding, except that it uses boundary strings that usually contain long random characters to minimize the chance of conflict. – Kurt Stutsman Mar 07 '16 at 17:10 (see the sketch after these comments)
  • Can you change your process to a loop? For each command, write the command, flush the buffer, then read the results? – Mr. Llama Mar 07 '16 at 17:10
  • Mr. Llama, there is no way to detect that an issued command has finished running (as opposed to still working to produce the next chunk of output). – Nicht Verstehen Mar 07 '16 at 17:12
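For what it's worth, the remote-Python/JSON idea from the first comment could look roughly like this. It is only a sketch: it assumes python3 (3.7+) is available on the remote host, which may conflict with the "no server-side programs" constraint, and the helper script below is hypothetical.

import json
import shlex
from subprocess import Popen, PIPE

# Hypothetical helper fed to the remote python3: read one command per line,
# run it through the shell, and emit one JSON object per line with its output.
REMOTE_HELPER = (
    "import json, subprocess, sys\n"
    "for line in sys.stdin:\n"
    "    r = subprocess.run(line, shell=True, capture_output=True, text=True)\n"
    "    print(json.dumps({'cmd': line.strip(), 'out': r.stdout,\n"
    "                      'err': r.stderr, 'rc': r.returncode}), flush=True)\n"
)

sh = Popen(['ssh', 'user@remote', 'python3 -c ' + shlex.quote(REMOTE_HELPER)],
           stdin=PIPE, stdout=PIPE, text=True)
sh.stdin.write('ls /\n')
sh.stdin.write('ls /usr\n')
sh.stdin.close()

for line in sh.stdout:
    result = json.loads(line)          # one complete command result per line
    print(result['cmd'], '->', result['rc'])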

1 Answer


For SSH I used paramiko and its invoke_shell method to create a programmatically-manageable shell instance.
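Opening such a shell with Paramiko might look roughly like this (only a sketch: the host, the credentials, the assumption of key-based authentication, and the crude fixed sleep in place of real prompt detection are all placeholders):

import time
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('remote', username='user')      # placeholder host and credentials

# invoke_shell() returns an interactive channel that stays open for the whole
# session, so several commands can be sent over a single connection.
chan = client.invoke_shell()
chan.send(b'ls /\n')

time.sleep(1)                                  # crude: give the command time to finish
output = b''
while chan.recv_ready():
    output += chan.recv(4096)
print(output.decode(errors='replace'))

client.close()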

What follows is not a complete answer and it's still hacky, but I feel it's a step in the right direction.

I needed the same read/write shell functionality on Windows but had no luck, so I extended your approach a little (thank you for the idea, by the way).

I verify that each command executed successfully (based on its exit code) by placing a conditional exit after each command, and then use the text of that conditional check (a known string) as the delimiter that marks the end of each command's response.

A crude example:

from subprocess import Popen, PIPE

sh = Popen('cmd', stdin=PIPE, stdout=PIPE)
sh.stdin.write(b'F:\r\n')                             # switch to drive F:
sh.stdin.write(b'if not %errorlevel% == 0 exit\r\n')  # bail out if the previous command failed
sh.stdin.write(b'cd F:\\NewFolder\r\n')
sh.stdin.write(b'if not %errorlevel% == 0 exit\r\n')
sh.stdin.write(b'...some useful command with the current directory confirmed as set to F:\\NewFolder...\r\n')
sh.stdin.close()
out = sh.stdout.read()
sh.stdout.close()
# Split 'out' on each line that ends with 'if not %errorlevel% == 0 exit'
# and do what you require with the responses.
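The splitting mentioned in that last comment could be done roughly like this (a sketch; it relies on cmd echoing back each line it reads, so the conditional check shows up in the captured output between the commands):

MARKER = b'if not %errorlevel% == 0 exit'
# Each chunk holds the echoed prompt plus the output of one step.
chunks = [chunk.strip() for chunk in out.split(MARKER)]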
Bosco