
Firstly I shall note that my program only needs to work on Linux, built with a modern version of GCC, so portability concerns are not terribly relevant here (the software already uses highly platform-specific features for low-level hardware functionality).

I have a number of processes: a parent and some children. The parent holds the receiving end of a POSIX message queue and each child holds a sending end. The children in effect receive messages over a network (details irrelevant here) and forward them to the parent, which then processes and stores the data. This works fine for the vast majority of my use case.
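
For context, the queue setup boils down to roughly the following; the queue name "/easy_s" and the attribute values are illustrative placeholders rather than my real ones:

#include <mqueue.h>
#include <fcntl.h>

/* Sketch: the parent holds the single read end of the queue... */
static mqd_t OpenParentQueue(void)
{
    struct mq_attr Attr = {
        .mq_maxmsg  = 10,           /* illustrative value */
        .mq_msgsize = MQMESSAGESIZE /* matches the buffer used by mq_receive below */
    };
    return mq_open("/easy_s", O_RDONLY | O_CREAT, 0600, &Attr);
}

/* ...and each child opens the same queue write-only. */
static mqd_t OpenChildQueue(void)
{
    return mq_open("/easy_s", O_WRONLY);
}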

However, I have just had to add functionality whereby clients over the network can request small amounts of data as well as sending it, though sending will still be the main case.

My initial plan was to call the pipe() syscall in the child when it received this new type of request, and have it send a special message to the parent asking the parent to open the pipe and send some data back to the child. However, this approach was cut short when I discovered that file descriptors can only be passed between processes over Unix domain sockets, not over message queues.
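
For reference, passing a descriptor that way requires sendmsg() with an SCM_RIGHTS control message over an AF_UNIX socket, roughly as sketched below; Sock would be a connected domain socket, which my processes do not currently share:

#include <sys/socket.h>
#include <sys/uio.h>
#include <string.h>

/* Sketch only: send FdToSend over the connected AF_UNIX socket Sock. */
static int SendFdOverSocket(int Sock, int FdToSend)
{
    char Dummy = 'F';
    struct iovec Iov = { .iov_base = &Dummy, .iov_len = 1 };
    union {
        char Buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr Align;            /* forces correct alignment */
    } Ctrl;
    struct msghdr Msg = {
        .msg_iov        = &Iov,
        .msg_iovlen     = 1,
        .msg_control    = Ctrl.Buf,
        .msg_controllen = sizeof(Ctrl.Buf)
    };

    struct cmsghdr *Cmsg = CMSG_FIRSTHDR(&Msg);
    Cmsg->cmsg_level = SOL_SOCKET;
    Cmsg->cmsg_type  = SCM_RIGHTS;
    Cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(Cmsg), &FdToSend, sizeof(int));

    return sendmsg(Sock, &Msg, 0) == 1 ? 0 : -1;
}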

I then realized, while playing in a bash shell, that it is possible to open /proc/$pidofchild/fd/$pipefd (where $pidofchild is the PID of the child process and $pipefd is the file descriptor of the write end of the pipe in that child) to grab the child's pipe without the child having to send it to us.

So, my current plan is to have the child send its PID and the file descriptor of the write end of its pipe down the message queue to the parent; the parent can then open the pipe through the proc filesystem, write the reply to the child, and close it again.
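
In outline, the parent side of that plan would look something like the following sketch (the helper name and buffer size are illustrative, not my actual code):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

/* Open the child's write end via /proc, push the reply through it, close it. */
static void ReplyThroughProc(pid_t ChildPid, int ChildWriteFd,
    const void *Reply, size_t ReplyLen)
{
    char Path[64];
    snprintf(Path, sizeof(Path), "/proc/%d/fd/%d", (int)ChildPid, ChildWriteFd);

    int ChildPipe = open(Path, O_WRONLY);
    write(ChildPipe, Reply, ReplyLen);
    close(ChildPipe);
}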

However, looking online I have not been able to find a single case of anyone using this technique for IPC in any program whatsoever, so I am becoming concerned that there is some huge caveat I have not considered which is going to make this whole plan fail. What are the concerns I need to be aware of when opening another process's pipe through the /proc/x/fd/y interface?

Here is a snippet of my current code (error checking and irrelevant lines have been removed for brevity):

/* Parent Code */
static inline void ParentFunc(void)
{
    struct EasySMessage *IncomingMQMessage = malloc(MQMESSAGESIZE + 1);

    ssize_t MQueueMesSize = mq_receive(Globals->EasySMQueue
        , (char *)IncomingMQMessage, MQMESSAGESIZE + 1, NULL);
    switch(IncomingMQMessage->OpType)
    {
        case WriteMessage:
            /* Process and write the message */
            /* SNIP */
            break;
        case ReadMessage:
        {
            /* Block scope so the declaration after the case label is legal C */
            char *MyString = FormatProcFDString(IncomingMQMessage);
            OpenPipeAndReply(IncomingMQMessage, MyString);
            break;
        }
        default:
            printf("Unrecognised easy_s message type: %d!\n"
                , IncomingMQMessage->OpType);
            exit(-1);
    }
    free(IncomingMQMessage);
}

/* Child Code */
static int ChildCode(mqd_t EasySChildMQueue, struct EasySMessage *MessageProcessed)
{
    size_t SizeToSend = SizeOfMessage(MessageProcessed);
    int Pipes[2];
    pipe(Pipes);
    MessageProcessed->Pipe = Pipes[1];
    MessageProcessed->Pid = getpid();
    mq_send(EasySChildMQueue, (const char *)MessageProcessed, SizeToSend, 0);
    ReadFromPipeAndReply(Pipes);
    return 0;
}
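
ReadFromPipeAndReply then essentially blocks on the read end of the pipe until the parent's reply arrives; a simplified sketch, where REPLY_MAX is a placeholder and sending the data back to the network client is elided as above:

#include <unistd.h>

static void ReadFromPipeAndReplySketch(int Pipes[2])
{
    char Reply[REPLY_MAX];   /* REPLY_MAX: assumed maximum reply size */
    ssize_t Got = read(Pipes[0], Reply, sizeof(Reply));
    /* ... send the Got bytes back to the requesting network client ... */

    close(Pipes[0]);
    close(Pipes[1]);  /* the write end must stay open until here so that
                         /proc/<pid>/fd/<fd> remains a valid path for the parent */
}
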
  • Why don't you use MQs for the other direction too? Or shared memory? Or sockets? What you're trying to do looks really odd (and isn't portable even if you get it to work reliably). – Mat Aug 15 '14 at 10:54
  • @Mat The odd behaviour comes from the fact that the parent needs to use a predictable amount of memory; the point of the one-way MQ is that the parent need only hold a single handle regardless of the number of children, and this design extends that, as the parent need only open one pipe at a time. However I might consider using some shared memory with a lock, but I am guessing this makes management and atomicity much harder. – Vality Aug 15 '14 at 12:19
