
I feel a bit dumb asking this, but here it goes.

I'm trying to implement a very simple FTP client in bash, for testing purposes, and thought I could take a clean approach to reading from and writing to the socket using exec and process substitution, as follows.

exec 3<> /dev/tcp/$host/$port
exec 4< <(dos2unix <&3)
exec 5> >(unix2dos >&3)

I would then read from fd 4 and write to fd 5 in order to send commands and receive responses.

Alas, whilst writing works like a charm, reading doesn't: dos2unix just gets stuck as if waiting for input that never arrives. Using any other command in place of dos2unix shows the same behavior, but using a real character device in place of /dev/tcp, say /dev/urandom, works as expected.

Am I doing something fundamentally wrong, or what is the problem?

Fabio A.
  • try `stdbuf -o L`. Or really just `sed -u`. Other than that, please post a [MCVE]. – KamilCuk Apr 13 '20 at 23:02
  • @KamilCuk you probably meant `stdbuf -o 0` there? It actually helped, thanks! Do you want to post that as an answer rather than just a comment? – Fabio A. Apr 28 '20 at 09:05

1 Answer


Alas, whilst writing works like a charm, reading doesn't:

On Linux this is not the shell's doing but the C stdio library's: by default a command's standard output stream is line buffered when it is connected to a terminal, and block buffered (typically 4 KiB) when it is connected to a pipe, which is what a process substitution is. So dos2unix isn't stuck waiting for input; it has already read and converted the data, but its output is sitting in a buffer that won't be flushed until the buffer fills or dos2unix exits. With /dev/urandom the buffer fills immediately, which is why that case appears to work.
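A quick way to observe the difference (a minimal sketch, assuming GNU coreutils' `stdbuf` and GNU `sed`):

```shell
# Block-buffered: sed's stdout goes to a pipe, so its output sits in a
# buffer and is only flushed when sed reaches EOF and exits.
printf 'hello\n' | sed 's/hello/HELLO/' | cat

# Line-buffered: stdbuf -oL makes sed flush after every line, so output
# is visible downstream immediately.
printf 'hello\n' | stdbuf -oL sed 's/hello/HELLO/' | cat
```

Both pipelines print `HELLO` here, because the input ends and sed flushes on exit; the difference matters when the writer keeps running, as dos2unix does on a socket that stays open.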

To restore line buffering for such a command, it's common to use the `stdbuf -oL` utility (some tools also have their own flag for this, e.g. `sed -u` for unbuffered output).
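Applied to the setup in the question, only the reading side needs the fix. The sketch below is an assumption-laden demo, not the original code: a FIFO stands in for `/dev/tcp/$host/$port` and `tr -d '\r'` plays the role of dos2unix, so it runs without a real server.

```shell
#!/usr/bin/env bash
# A FIFO stands in for the TCP socket so the example is self-contained.
fifo=$(mktemp -u)
mkfifo "$fifo"
exec 3<> "$fifo"    # open read-write, like exec 3<> /dev/tcp/$host/$port
rm "$fifo"

# The fix: stdbuf -oL forces the filter inside the process substitution
# to flush its output after every line instead of block buffering it.
# tr -d '\r' plays the dos2unix role here (it strips the CR of CRLF).
exec 4< <(stdbuf -oL tr -d '\r' <&3)

printf '220 ready\r\n' >&3   # pretend the server sent its greeting
IFS= read -r reply <&4       # arrives immediately thanks to stdbuf -oL
echo "$reply"

exec 3>&- 4<&-
```

Without `stdbuf -oL`, the `read` on fd 4 would hang exactly as described in the question, because `tr` would hold the converted line in its block buffer.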

KamilCuk