
I need to design a solution for the problem statement below.

Problem statement: A Perl process is started by a user request and takes some tasks (say n) to be done; it then hands these over to a daemon process written in C++. Out of those n tasks, once m tasks (m < n) have been completed, the output of those m tasks should be returned immediately to the same or a different Perl process, and the results of the remaining n - m tasks should follow when they complete. My problem is the communication between C++ and Perl. What would be the best way in terms of speed of execution?

Problem constraint: The Perl code must stay in Perl, not C++, for various reasons. There will be multiple clients sending requests to the C++ daemon.

Possible solution: When a request is submitted to the daemon, it will return a unique ID to the Perl process. The C++ daemon and the same or a different Perl process will then use that ID to write/read output on/from one of the following.

1- FIFO message queue -- there will be a single queue; messages will be distinguished by ID

2- mmap (memory-mapped file) -- each file name will be the unique ID

It seems to me that the benefit of the second approach is speed of execution, and caching can be implemented on top of it very easily. But I am not sure it is the best solution.

Can anyone please suggest how I should design this? If I am missing some important aspects, please guide me on those as well.

Gaurav Pant
  • What is your operating system? On Linux, you can use pipes as IPC – Miguel Prz Jul 07 '13 at 07:19
  • Yes, it is Linux. Will pipes be fast enough? – Gaurav Pant Jul 07 '13 at 07:30
  • 1
    I think is one of the fastest approaches you can implement. And it's easy: https://metacpan.org/module/IO::Pipe – Miguel Prz Jul 07 '13 at 11:07
  • You can prove to yourself that pipes are plenty fast. Try this experiment: `dd if=/dev/zero bs=1M count=1k | dd of=/dev/null`. That will copy 1GB from `/dev/zero` to `/dev/null` through a pipe. On my main system, it ran at about 450MB/sec, fast enough to send a DVD's worth of stuff in ~11sec. On my ancient 10-year-old Duron system, it ran at about 87MB/sec. Unless you're sending enormous amounts of data, use pipes. Far easier to use and to synchronize. – Joe Z Jul 07 '13 at 17:14
  • If I use a pipe, the daemon process will put data for multiple processes into the same pipe, and I need to allow multiple processes to read from that pipe. Since reading data from a pipe is a one-time operation, that makes the situation complex. What I am thinking is to use shared memory (RAM) and divide it into equal-sized segments; for each process I will write data into one or more of these segments. Retrieval of data will then be random-access and fast too. Please suggest something for this. – Gaurav Pant Jul 08 '13 at 10:59
  • Does the Perl program connect to the C++ daemon via a socket? – Len Jaffe Jul 08 '13 at 21:17
  • No, it does not. I am looking at creating a RAM disk: the daemon will write information to a file there, and a different process will read it. The problem now is how to tell a Perl process that the file has been written. Again, two solutions: either regularly check whether the file exists, or have the daemon send a signal to the respective Perl process after it writes the file. I am not sure which will be the better approach. I am also looking for a C++ API to deal with RAM. – Gaurav Pant Jul 09 '13 at 05:58

0 Answers