I want to send some data to a root process with a named pipe. Here is the script and it works great:

#!/bin/sh
pipe=/tmp/ntp

if [ ! -p "$pipe" ]; then
    mknod -m 666 "$pipe" p
fi

while true
do
    if read line <"$pipe"; then
         /root/netman/extra/bin/ntpclient -s -h "$line" >"$pipe" 2>&1
    fi
done

I actually have several scripts like this one. I would like to combine all of them into a single script. The problem is that execution blocks on the first "read", and I cannot execute multiple "read"s in a single process. Isn't there anything I can do? Is it possible to have a "non-blocking" bash read?

michelemarcon
  • Why do you want to combine separate operations into a single script? If they each work correctly standalone, leave them standalone. It's much easier than trying to bend the shell into doing non-blocking reads. Processes are cheap. Simple processes are also more secure than complex ones, and root processes need to be secure. – Jonathan Leffler Feb 02 '11 at 14:05
  • I would agree with you, but each process eats 628K of RAM (it is a copy of the bash) and I am in an embedded environment. I would prefer to save as much memory as possible. – michelemarcon Feb 02 '11 at 14:09
  • If it is that much of a problem, write the code in C. – Jonathan Leffler Feb 02 '11 at 18:33

3 Answers

Bash's read builtin command has a -t option to set a timeout:

-t timeout
    Cause read to time out and return failure if a complete line of input is not
    read within timeout seconds. This option has no effect if read is not reading
    input from the terminal or a pipe.

This should help you solve this issue.
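A minimal sketch of the timeout behavior (the variable name and messages are illustrative):

```shell
# Wait at most 5 seconds for a line on standard input.
# read returns non-zero on timeout or end of input.
if read -t 5 line; then
    echo "received: $line"
else
    echo "timed out or EOF"
fi
```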

Edit:

As the man page indicates, there is a restriction on when this works: This option has no effect if read is not reading input from the terminal or a pipe.

So if I create a pipe in /tmp:

mknod /tmp/pipe p

Reading directly from the pipe does not work:

$ read -t 1 </tmp/pipe  ; echo $?

Hangs forever.

$ cat /tmp/pipe | ( read -t 1 ; echo $? )
1

This works, but cat never exits.

A solution is to assign the pipe to a file descriptor:

$ exec 7<>/tmp/pipe

And then read from this file descriptor either using redirection:

$ read -t 1 <&7  ; echo $?
1

Or the -u option of read:

$ read -t 1 -u 7  ; echo $?
1
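Building on this, a single process could service several pipes by giving each one its own file descriptor and polling them in turn with read -t. A sketch under the question's setup (the second pipe name and the echo actions are hypothetical placeholders):

```shell
#!/bin/bash
# Poll several named pipes from one process using read -t and -u.
pipe1=/tmp/ntp
pipe2=/tmp/other    # hypothetical second pipe

for p in "$pipe1" "$pipe2"; do
    [ -p "$p" ] || mknod -m 666 "$p" p
done

# Open each pipe read/write: the open does not block waiting for a
# writer, and read -t can then time out instead of hanging.
exec 7<>"$pipe1"
exec 8<>"$pipe2"

while true; do
    if read -t 1 -u 7 line; then
        echo "pipe1: $line"    # placeholder action
    fi
    if read -t 1 -u 8 line; then
        echo "pipe2: $line"    # placeholder action
    fi
done
```

Each iteration waits at most one second per pipe, so a quiet pipe never stalls the others for long.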
gabuzo
  • The *this option has no effect if read is not reading input from the terminal or a pipe.* has some implication. I'm editing the answer to include working examples – gabuzo Feb 02 '11 at 15:42
  • Still doesn't work for me, but maybe I'm running a customized bash. – michelemarcon Feb 03 '11 at 11:15
  • @michelemarcon `read` with timeout option is in bash. I think you are using `dash`, what shebang you are using? `/bin/sh`? – Majid Azimi Jul 28 '12 at 07:12
  • @Majid Azimi GNU bash, version 2.05a.0(1)-release (arm-unknown-linux-gnu) – michelemarcon Jul 30 '12 at 08:06
  • 5
    It is not necessary to use `exec`. The trick rather is, that your suggestion `exec 7<>FILE` opens FILE in read/write mode and thus avoids blocking on a pipe, if there are no writers. But opening the pipe in read-write mode can also be done directly with the read builtin: `read -t 1 <>/tmp/pipe line` will read one line and wait at most one second for that line. – Kai Petzke Feb 25 '15 at 17:29
  • @KaiPetzke is right. `exec` is not the important part. The read-write redirection is important. `exec` with read-only redirection fails the same way as read with read-only redirection. It's somehow ugly to have to open a write redirection just to get a timeout behavior, but at least it works. – Stéphane Gourichon Aug 14 '16 at 16:26
Just put the read loop into the background (add & after done)?
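A sketch of what the combined script might look like with this approach (the comment marking the second loop is a placeholder; the paths are the question's):

```shell
#!/bin/sh
# Each read loop runs as a background job of one parent script.
pipe=/tmp/ntp
[ -p "$pipe" ] || mknod -m 666 "$pipe" p

while true
do
    if read line <"$pipe"; then
        /root/netman/extra/bin/ntpclient -s -h "$line" >"$pipe" 2>&1
    fi
done &

# ...repeat the same pattern for the other pipes, each ending in '&'...

wait    # optional: keep the parent alive until the loops exit
```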

  • Great! I've halved memory consumption! – michelemarcon Feb 02 '11 at 14:56
  • @michelemarcon: are you sure you're saving memory? When I tested it, adding `&` forced the while loop to execute in a subshell = another process = more memory used. – Gordon Davisson Feb 02 '11 at 16:16
  • Tested with ps, each script eats 628K. With '&', each process eats 240K. And BTW, since every 'while' is on background, the "mother" script exited and freed its memory – michelemarcon Feb 02 '11 at 16:41
  • 2
    @GordonDavisson subshells instances of bash can utilize the COW semantics of fork() on modern UNIX systems, while separately fork-exec-ed bash instances cannot. – xiaq Jun 18 '13 at 13:51
You can use stty to set a timeout. IIRC it's something like

stty -F $pipe -icanon time 0
Foo Bah