
I am new to MPI. I want to send three ints to three slave nodes so that each can create dynamic arrays, and each array will then be sent back to the master. Following this post, I modified the code and it is close to working, but I hit a breakpoint (crash) in the receiving code when receiving the array from slave #3 (m == 3). Thank you in advance!

My code is as follows:

#include <mpi.h>
#include <iostream>
#include <stdlib.h>

int main(int argc, char** argv)
{
    int firstBreakPt, lateralBreakPt;
    //int reMatNum1, reMatNum2;
    int tmpN;

    int breakPt[3][2]={{3,5},{6,9},{4,7}};

    int myid, numprocs;
    MPI_Status status;

//  double *reMat1;
//  double *reMat2;


    MPI_Init(&argc,&argv);
    MPI_Comm_rank(MPI_COMM_WORLD,&myid);
    MPI_Comm_size(MPI_COMM_WORLD,&numprocs);

    tmpN = 15;

    if (myid==0)
    {
        // send three parameters to slaves;
        for (int i=1;i<numprocs;i++)
        {
            MPI_Send(&tmpN,1,MPI_INT,i,0,MPI_COMM_WORLD);

            firstBreakPt = breakPt[i-1][0];
            lateralBreakPt = breakPt[i-1][1];           

            //std::cout<<i<<" "<<breakPt[i-1][0] <<" "<<breakPt[i-1][1]<<std::endl;

            MPI_Send(&firstBreakPt,1,MPI_INT,i,1,MPI_COMM_WORLD);
            MPI_Send(&lateralBreakPt,1,MPI_INT,i,2,MPI_COMM_WORLD);
        }

        // receive arrays from slaves;
        for (int m =1; m<numprocs; m++)
        {
            MPI_Probe(m, 3, MPI_COMM_WORLD, &status);

            int nElems3, nElems4;
            MPI_Get_elements(&status, MPI_DOUBLE, &nElems3);

            // Allocate buffer of appropriate size
            double *result3 = new double[nElems3];
            MPI_Recv(result3,nElems3,MPI_DOUBLE,m,3,MPI_COMM_WORLD,&status);

            std::cout<<"Tag is 3, ID is "<<m<<std::endl;
            for (int ii=0;ii<nElems3;ii++)
            {
                std::cout<<result3[ii]<<std::endl;
            }

            MPI_Probe(m, 4, MPI_COMM_WORLD, &status);
            MPI_Get_elements(&status, MPI_DOUBLE, &nElems4);

            // Allocate buffer of appropriate size
            double *result4 = new double[nElems4];
            MPI_Recv(result4,nElems4,MPI_DOUBLE,m,4,MPI_COMM_WORLD,&status);

            std::cout<<"Tag is 4, ID is "<<m<<std::endl;
            for (int ii=0;ii<nElems4;ii++)
            {
                std::cout<<result4[ii]<<std::endl;
            }

            // free the per-slave receive buffers before the next iteration
            delete[] result3;
            delete[] result4;
        }
    }
    else
    {
        // receive three parameters from master;
        MPI_Recv(&tmpN,1,MPI_INT,0,0,MPI_COMM_WORLD,&status);

        MPI_Recv(&firstBreakPt,1,MPI_INT,0,1,MPI_COMM_WORLD,&status);
        MPI_Recv(&lateralBreakPt,1,MPI_INT,0,2,MPI_COMM_WORLD,&status);

        // width
        int width1 = (rand() % (tmpN-firstBreakPt+1))+ firstBreakPt;
        int width2 = (rand() % (tmpN-lateralBreakPt+1))+ lateralBreakPt;

        // create dynamic arrays
        double *reMat1 = new double[width1*width1];
        double *reMat2 = new double[width2*width2];

        for (int n=0;n<width1; n++)
        {
            for (int j=0;j<width1; j++)
            {
                reMat1[n*width1+j]=(double)rand()/RAND_MAX + (double)rand()/(RAND_MAX*RAND_MAX); 
                //a[i*Width+j]=1.00;
            }
        }

        for (int k=0;k<width2; k++)
        {
            for (int h=0;h<width2; h++)
            {
                reMat2[k*width2+h]=(double)rand()/RAND_MAX + (double)rand()/(RAND_MAX*RAND_MAX); 
                //a[i*Width+j]=1.00;
            }
        }

        // send it back to master
        MPI_Send(reMat1,width1*width1,MPI_DOUBLE,0,3,MPI_COMM_WORLD);
        MPI_Send(reMat2,width2*width2,MPI_DOUBLE,0,4,MPI_COMM_WORLD);

        // free the local arrays once they have been sent
        delete[] reMat1;
        delete[] reMat2;
    }

    MPI_Finalize();

    std::cin.get();

    return 0;
}

P.S. The code above has been updated and is now the working version.

  • Questions seeking debugging help ("why isn't this code working?") must include the desired behavior, a specific problem or error and the shortest code necessary to reproduce it in the question itself. Questions without a clear problem statement are not useful to other readers. – Zulan Mar 14 '16 at 12:38
  • Thank you for your comment, I have added some information to the post. – just_rookie Mar 14 '16 at 13:00
  • In your example the root process has all the information to compute the size, but you write that you do not know it. Can you please clarify? Anyway, I would recommend taking a look at the MPI collective operations `MPI_Bcast` and `MPI_Gather` / `MPI_Gatherv`. In C++ I would also recommend considering Boost.MPI - but your code doesn't really look like you are using C++. – Zulan Mar 14 '16 at 14:47
  • Thank you for your suggestion. I have modified the code. I want to send three parameters to the slave nodes to create two dynamic arrays of different sizes, and return these arrays to the root. – just_rookie Mar 15 '16 at 14:19
  • Based on your clarifications I believe the [answer](http://stackoverflow.com/a/25118624/620382) you linked seems to answer your question as well. Just follow that for your `MPI_Recv(reMat1...`. You will probably want to store the arrays from each slave separately - either in an array of pointers or in one large common array. And then again... look at `MPI_Gatherv`. – Zulan Mar 15 '16 at 17:03
  • Why have you tagged your question with C++ and C++11? I can't find any C++ constructs in your code. – Daniel Langr Mar 15 '16 at 18:20
  • @Zulan I just want the root to send and receive data but not participate in the computation, so maybe `MPI_Bcast` and `MPI_Gather` / `MPI_Gatherv` are not for this situation. – just_rookie Mar 16 '16 at 01:35
  • @Zulan I followed your suggestion and got the right answer (updated in the post). If you answer this question, I will accept it as the answer. Thank you again. – just_rookie Mar 16 '16 at 03:52

2 Answers


Use collective MPI operations, as Zulan suggested. For example, the first thing your code does is have the root send the same value to all the slaves, which is a broadcast, i.e., `MPI_Bcast()`. Then the root sends a different value to each slave, which is a scatter, i.e., `MPI_Scatter()`.

The last operation is the slave processes sending variably-sized data back to the root, for which the `MPI_Gatherv()` function exists. However, to use this function, you need to:

  1. allocate the incoming buffer on the root (there is no `malloc()` for `reMat1` and `reMat2` in the first if-branch of your code); therefore, the root needs to know their counts,
  2. tell MPI_Gatherv() on the root how many elements will be received from each slave and where to put them.

This problem can be easily solved with a so-called parallel prefix (scan); look at `MPI_Scan()` or `MPI_Exscan()`.
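
A minimal sketch of that collective pattern (the variable names and per-rank parameter values here are illustrative, not taken from the question; for brevity the root also contributes a block and computes the displacements with a local prefix sum instead of `MPI_Exscan()`):

    #include <mpi.h>
    #include <vector>
    #include <iostream>

    int main(int argc, char** argv)
    {
        MPI_Init(&argc, &argv);
        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        // Broadcast the value shared by all ranks (tmpN in the question).
        int tmpN = 15;
        MPI_Bcast(&tmpN, 1, MPI_INT, 0, MPI_COMM_WORLD);

        // Scatter one per-rank parameter from the root (illustrative values).
        std::vector<int> params;
        if (rank == 0)
        {
            params.resize(nprocs);
            for (int i = 0; i < nprocs; i++) params[i] = i + 3;
        }
        int myParam = 0;
        MPI_Scatter(params.data(), 1, MPI_INT, &myParam, 1, MPI_INT, 0, MPI_COMM_WORLD);

        // Each rank builds a block whose size depends on its parameter.
        int myCount = myParam * myParam;
        std::vector<double> myBlock(myCount, (double)rank);

        // Gather the per-rank counts so the root can size the receive buffer.
        std::vector<int> counts(nprocs);
        MPI_Gather(&myCount, 1, MPI_INT, counts.data(), 1, MPI_INT, 0, MPI_COMM_WORLD);

        // Root computes displacements (an exclusive prefix sum of the counts).
        std::vector<int> displs(nprocs, 0);
        std::vector<double> all;
        if (rank == 0)
        {
            int total = 0;
            for (int i = 0; i < nprocs; i++) { displs[i] = total; total += counts[i]; }
            all.resize(total);
        }

        // Variable-sized gather of every block onto the root.
        MPI_Gatherv(myBlock.data(), myCount, MPI_DOUBLE,
                    all.data(), counts.data(), displs.data(), MPI_DOUBLE,
                    0, MPI_COMM_WORLD);

        if (rank == 0)
            std::cout << "gathered " << all.size() << " doubles" << std::endl;

        MPI_Finalize();
        return 0;
    }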

– Daniel Langr

Here you create randomized widths

    int width1 = (rand() % (tmpN-firstBreakPt+1))+ firstBreakPt;
    int width2 = (rand() % (tmpN-lateralBreakPt+1))+ lateralBreakPt;

which you later use to send data back to process 0

    MPI_Send(reMat1,width1*width1,MPI_DOUBLE,0,3,MPI_COMM_WORLD);

But the receiving side in process 0 expects a different number of elements:

    MPI_Recv(reMat1,firstBreakPt*tmpN*firstBreakPt*tmpN,MPI_DOUBLE,m,3,MPI_COMM_WORLD,&status);

which causes problems. The master does not know what sizes each slave process generated, so you have to send those sizes back first, the same way you sent the size parameters to the slaves.
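
A possible fragment of that fix, meant to drop into the code above (`width1`, `width2`, `reMat1`, `reMat2`, `m` and `status` are the question's variables; tags 5 and 6 are arbitrary choices for the extra size messages):

    // Slave side: send the actual widths first, then the data.
    MPI_Send(&width1, 1, MPI_INT, 0, 5, MPI_COMM_WORLD);
    MPI_Send(&width2, 1, MPI_INT, 0, 6, MPI_COMM_WORLD);
    MPI_Send(reMat1, width1*width1, MPI_DOUBLE, 0, 3, MPI_COMM_WORLD);
    MPI_Send(reMat2, width2*width2, MPI_DOUBLE, 0, 4, MPI_COMM_WORLD);

    // Master side: receive the widths, allocate matching buffers, then receive the data.
    int w1, w2;
    MPI_Recv(&w1, 1, MPI_INT, m, 5, MPI_COMM_WORLD, &status);
    MPI_Recv(&w2, 1, MPI_INT, m, 6, MPI_COMM_WORLD, &status);
    double *reMat1 = new double[w1*w1];
    double *reMat2 = new double[w2*w2];
    MPI_Recv(reMat1, w1*w1, MPI_DOUBLE, m, 3, MPI_COMM_WORLD, &status);
    MPI_Recv(reMat2, w2*w2, MPI_DOUBLE, m, 4, MPI_COMM_WORLD, &status);

The question's updated code takes the equivalent route of probing the incoming message with `MPI_Probe()` and `MPI_Get_elements()` to size the buffer before the matching `MPI_Recv()`.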

– ftynse