
I'm using the JQuery Terminal to emulate a CLI terminal on a web page. I can easily pipe the commands back and forth between the front end and the backend ( where I'm running Perl CGI script ). However, I'm wondering how can I pipe those commands into the actual shell prompt once it gets to my Perl code? Wouldn't I be "closing" a terminal each time a Perl script runs a command? If so - I'm assuming I'll lose the session every time.

jcubic
kidalex
    Start a shell in a subprocess, and send the commands as they're typed to its STDIN, and meanwhile copy the STDOUT and STDERR back to the terminal. http://perldoc.perl.org/IPC/Open3.html will be helpful. – Barmar Aug 27 '13 at 00:23
    You could use [tmux](http://en.wikipedia.org/wiki/Tmux) or screen to run sessions on the background – salva Aug 27 '13 at 08:41

2 Answers


Firstly, terminal emulation generally happens character by character. You can do it line by line, but then you won't be able to use interactive editors like emacs or vi, or fancy shell features like command completion. Your system will probably want to speak HTTP(S) via the CGI interface and then talk over stdin/stdout/stderr pipes to the command-line process on the back end. These pipes will need to be held open for the duration of the session, with graceful handling for timeouts and the like.

A CGI script is going to start, and then finish executing for every page request (which might mean every character), so the script wouldn't be able to hold a terminal open by itself (because it will exit). You could theoretically use a daemon process and have the CGI script talk to that (maybe using screen or something else as was suggested above). This isn't very efficient though.
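The daemon idea can be sketched with a named pipe: one long-lived shell reads commands from the pipe, while each short-lived CGI invocation merely appends a line to it. This is only an illustration of the concept (the paths and redirections here are made up), not a robust implementation:

```shell
# One persistent shell fed through a named pipe. Each short-lived CGI
# request appends a command; the shell's state (cwd, variables, etc.)
# survives between requests because the process never exits.
mkfifo /tmp/session_in

# Long-lived shell: reads the pipe, writes output where the CGI script
# can collect it later.
sh < /tmp/session_in > /tmp/session_out 2>&1 &

# Hold a write end open so the shell doesn't see EOF between requests.
exec 3> /tmp/session_in

# Individual requests then just append commands:
echo 'cd /var/log' >&3
echo 'pwd' >&3    # prints /var/log: the cd from the earlier request stuck
```

Note that this still leaves the hard parts unsolved: multiplexing sessions per user, cleaning up abandoned shells, and getting output back to the browser.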

Probably the best approach is to run some kind of in-process web server that isn't going to respawn a language interpreter for every keystroke. mod_perl might be a viable approach, but I don't know whether it lets you hold objects across requests. Another issue is what happens when the shell on the back end produces a character: you're going to need a "comet" setup that retrieves characters written to the console on the remote end asynchronously. This is usually done by keeping an HTTP request open at all times that the server can wait on and respond to on demand; the client re-requests immediately after the previous character(s) are retrieved, in preparation for the next ones.

There are a few solutions (including some open source ones) out there that do similar things already; I would think that this would be functionality you'd want to incorporate rather than building from scratch. There are a lot of details to get right just to get it to work, and the potential security pitfalls could be pretty epic as well.

Fiid

First of all, please be aware that what you are doing is probably a very bad idea (read: has security issues). Allowing a web client to execute arbitrary commands on your machine is dangerous. I hope all of this runs over SSL and requires a login … in which case you have basically reinvented SSH.

You can code up a basic REPL for the shell like this:

while read -r line; do
  eval "$line"
done

(The -r stops read from mangling backslashes, and quoting "$line" preserves whitespace in the command.)

which reads commands from STDIN, and prints to STDOUT & STDERR. When you hook this up with IPC::Open3, you can write commands from Perl that are interpreted in the same shell session.
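To convince yourself that such a loop really is one persistent session (in Perl you would feed it through the filehandles IPC::Open3 gives you), you can drive it from a pipe by hand:

```shell
# Two commands go into a single instance of the REPL loop; the variable
# set by the first line is still visible to the second, because both
# lines are eval'd in the same shell process.
printf 'greeting=hello\necho "$greeting"\n' |
  sh -c 'while read -r line; do eval "$line"; done'
# prints: hello
```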

amon
  • It is for a storage appliance, so it's behind a firewall, but yes, it'll be user authenticated. It'll hit the Perl script as an HTTP request, so not STDIN or STDOUT – it has to be params and print. – kidalex Aug 27 '13 at 00:42
  • @kidalex With IPC::Open3 you can give the shell command new file handles for I/O – it doesn't use *your* STD{IN,OUT,ERR}. Therefore it is completely irrelevant how your script is called. Just skim the docs I linked to in order to understand what that module does. – amon Aug 27 '13 at 00:46
  • What are other security concerns? Also, it'll validate for a certain first keyword that will be the only one allowed – kidalex Aug 27 '13 at 00:46
  • Plus, any user who will have access to Web UI will have access to shell anyway – kidalex Aug 27 '13 at 00:47
  • @kidalex I don't know where to start with the problems… This solution uses `eval`. That alone is a gigantic hole. I don't know how escapes, quotations etc. interact with that. Only one line at a time is read, so heredocs or line continuations etc. would break. Checking for an allowed first keyword? Might not be sufficient – and in that case, it might not be necessary to run all commands in the same shell. It would be more secure to transmit an intermediate representation (e.g. JSON) of the command to the server, which then rejects the request or assembles the shell code. – amon Aug 27 '13 at 00:54
  • Yes, the Perl script will examine the string before piping it to shell. It has to be user based too. – kidalex Aug 27 '13 at 01:18