
I've been trying to implement piping in a shell program, and it works for simple commands such as `echo hello | rev`.

But it hangs when I try `head -c 1000000 /dev/urandom | wc -c`.

My implementation is:

    int fd[2];
    pipe(fd);

    // IN CHILD
    // First command in the pipe
    if (isPiped && (e == list_begin(&p->commands)))
    {
        close(fd[0]);
        dup2(fd[1], 1);
        close(fd[1]);
    }
    // Last command in the pipe
    else if (isPiped && (list_next(e) == list_tail(&p->commands)))
    {
        close(fd[1]);
        dup2(fd[0], 0);
        close(fd[0]);
    }

    // IN PARENT
    if (isPiped && (e == list_begin(&p->commands)))
    {
        close(fd[1]);
    }
    else if (isPiped && (list_next(e) == list_tail(&p->commands)))
    {
        close(fd[0]);
    }

I've been taught to always close a file descriptor after I'm done using it, and I thought that's what I'm doing - but I have a file descriptor leak somewhere and I can't figure out where. I've tried many combinations of closing and dup2'ing the fds, but to no avail.

To further give a complete question, this is the main relevant code:

The way I'm doing it is to use a list structure that adds each command/job onto a list. The variable "e" is an element of the list.

    int main(int ac, char *argv[])
    {
        int numPipes = list_size(&commands) - 1;
        bool isPiped = false;
        if (numPipes > 0)
            isPiped = true;

        int fd[2];
        pipe(fd);

        pid_t pid = fork();
        // In child
        if (pid == 0)
        {
            // First command in the pipe
            if (isPiped && (e == list_begin(&p->commands)))
            {
                close(fd[0]);
                dup2(fd[1], 1);
                close(fd[1]);
            }
            // Last command in the pipe
            else if (isPiped && (list_next(e) == list_tail(&p->commands)))
            {
                close(fd[1]);
                dup2(fd[0], 0);
                close(fd[0]);
            }
            // command is a struct set up from what the user typed at the terminal
            execvp(command->argv[0], command->arg);
        }

        // In parent
        if (isPiped && (e == list_begin(&p->commands)))
        {
            close(fd[1]);
        }
        else if (isPiped && (list_next(e) == list_tail(&p->commands)))
        {
            close(fd[0]);
        }

        int status;
        waitpid(-1, &status, WUNTRACED);
    }

That's all there is to my pipe algorithm. The rest handles other built-in jobs, such as foreground, background, and kill commands, and I/O redirection. Thanks so much!

user3171597
  • At the risk of self-deprecation, you may find [this answer](http://stackoverflow.com/questions/19356075/toy-shell-not-piping-correctly/19357317#19357317) interesting. – WhozCraig Feb 19 '15 at 03:29

1 Answer


The parent process should close both ends of the pipe after it has launched the two children. If it doesn't close the writing end of the pipe, the child that is reading from the pipe will never get EOF, so it will never terminate. If it doesn't close the reading end of the pipe but the child that is reading terminates before it has read all the input, the child that is writing will never get an error and will block, waiting for the parent to read when the parent has no intention of ever reading.

So, the parent simply needs:

close(fd[0]);
close(fd[1]);
Jonathan Leffler
  • Thanks for your feedback! I've tried to close both fd's in the parent in both if-else-if statements, but it'll revert back to hanging at basic commands such as: "echo hi | rev". I've tried to code it outside of the conditional statements, but it still hangs. The only time when it doesn't is when I leave the closing statements alone inside the conditional statements. – user3171597 Feb 19 '15 at 03:43
  • Then we're going to need an MCVE ([How to create a Minimal, Complete, and Verifiable Example?](http://stackoverflow.com/help/mcve)) or SSCCE ([Short, Self-Contained, Correct Example](http://sscce.org/)) — two names and links for the same idea. There are a large number of other things that you could be doing wrong; I don't want to expend effort trying to guess what. You should have a main program that sets up the data structures for a couple of cases (`echo hi | rev` and `head -c 1000000 /dev/urandom | wc -c`; either works fine, though there should be no difference in the treatment of the two). – Jonathan Leffler Feb 19 '15 at 04:10
  • Yeah, there are many irrelevant things in my code and I wouldn't want to trouble you with it. And I just edited. Thanks! – user3171597 Feb 19 '15 at 04:32
  • @user3171597: The update is no help — it is not compilable. The WUNTRACED option to `waitpid()` is rather specialized; you do not need to be using it. You should probably be capturing the PID returned by `waitpid()`. You probably need a loop if your shell has two direct children. Since there's only one `fork()` in the code, it is as if your shell self-destructs by running the two commands. After the `execvp()`, you should probably report an error and should definitely exit. No-one is going to be able to help you without code that accurately reproduces your problem. – Jonathan Leffler Feb 19 '15 at 04:46
  • @user3171597: I'll renew my offer. As long as the code compiles reasonably cleanly, I'll look at the lot. It will be easier than trying to pry the relevant information out of you in dribs and drabs. See my profile. – Jonathan Leffler Feb 19 '15 at 04:51
  • You desperately need to use these wonderful things called functions that split up acres of code into manageable chunks. Your main is something of a travesty (almost 300 lines after I've removed the comments), which makes it very hard to read. Your problem is that your code waits for the first process in the pipeline to complete before launching the second, but you can't do that. The `head -c 1000000 /dev/urandom` command fills the pipe and is blocked from doing further I/O until something reads the pipe. But `wc -c` hasn't even been started yet. Run all commands in a pipeline at once. – Jonathan Leffler Feb 19 '15 at 06:29