
I have a working wavefront program using MPJ Express. In this program, for an n x m matrix there are n processes, and each process is assigned one row. Each process does the following:

for column = 0 to matrix_width do:
    1) above = the value of this column in the row above (from the rank - 1 process)
    2) left = the value to the left of us (our row, column - 1)
    3) add (above + left) to our current column value
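
For clarity, here is the same recurrence written sequentially (just an illustration, not part of the program):

int n = 4, m = 4;
int[][] a = new int[n][m]; // all zeros
a[0][0] = 1;               // seed, as in the program below
for (int r = 0; r < n; r++)
    for (int c = 0; c < m; c++)
    {
        int above = (r > 0) ? a[r - 1][c] : 0;
        int left = (c > 0) ? a[r][c - 1] : 0;
        a[r][c] += above + left;
    }
// for n = m = 4 this gives the binomial pattern:
// 1 1 1 1 / 1 2 3 4 / 1 3 6 10 / 1 4 10 20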

So on the master process I declare an array of n x m elements, and each slave process should thus only need to allocate an array of length m. As it stands, however, each process has to allocate the full n x m array for the scatter operation to work; otherwise I get a NullPointerException (if I assign the buffer null) or an ArrayIndexOutOfBoundsException (if I instantiate it with new int[1]). Since, as far as I know, standard MPI only reads the send-side arguments of Scatter at the root, I'm sure there has to be a solution to this; otherwise each process would require as much memory as the root.

I think I need something like Fortran's allocatable arrays: a way to declare the array without actually allocating its memory.

In the code below, the important part is the one marked "MASTER". Normally I would pull the allocation into the if (rank == 0) test and initialize the array to null (not allocating the memory) in the else branch, but that does not work, as the sketch below shows.
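
This is roughly the pattern I would like to use (a sketch of the failing attempt, reusing the variable names from the full program below):

if (rank == 0)
{
    matrix = new int[size * size]; // only the master holds the full matrix
    matrix[0] = 1;
}
else
{
    matrix = null; // the slaves should not need the full matrix...
}
// ...but this Scatter then throws a NullPointerException on the slaves
// (and matrix = new int[1] gives an ArrayIndexOutOfBoundsException instead).
MPI.COMM_WORLD.Scatter(matrix, 0, size, MPI.INT, row, 0, size, MPI.INT, 0);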

package be.ac.vub.ir.mpi;

import mpi.MPI;
// Execute: mpjrun.sh -np 4 -jar parsym-java.jar (np must equal `size`)

/**
 * Parallel wavefront computation: each process computes one row of the matrix
 */
public class WaveFront
{
    // Default program parameters
    final static int size = 4; // matrix dimension; assumes one process per row (world_size == size)
    private static int rank;
    private static int world_size;

    private static void log(String message)
    {
        if (rank == 0)
            System.out.println(message);
    }

    ////////////////////////////////////////////////////////////////////////////
    //// MAIN //////////////////////////////////////////////////////////////////
    ////////////////////////////////////////////////////////////////////////////
    public static void main(String[] args) throws InterruptedException
    {
        // MPI variables
        int[] matrix;         // matrix stored at process 0
        int[] row;            // each process keeps its row
        int[] receiveBuffer;  // to receive a value from the row above (rank - 1)
        int[] sendBuffer;     // to send a value to the row below (rank + 1)

        /////////////////
        /// INIT ////////
        /////////////////

        MPI.Init(args);
        rank = MPI.COMM_WORLD.Rank();
        world_size = MPI.COMM_WORLD.Size();

        /////////////////
        /// ALL PCS /////
        /////////////////

        // initialize data structures
        receiveBuffer = new int[1];
        sendBuffer = new int[1];
        row = new int[size];

        /////////////////
        /// MASTER //////
        /////////////////
        matrix = new int[size * size]; // allocated on EVERY process -- this is what I want to avoid
        if (rank == 0)
        {
            // Initialize the matrix (Java zeroes new arrays already; only the seed matters)
            for (int idx = 0; idx < size * size; idx++)
                matrix[idx] = 0;
            matrix[0] = 1;
            receiveBuffer[0] = 0;
        }

        /////////////////
        /// PROGRAM /////
        /////////////////
        // distribute the rows of the matrix to the appropriate processes
        int startOfRow = rank * size;
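        // NOTE: in standard MPI the send-side arguments of Scatter are only
        // significant at the root; MPJ Express, however, appears to validate
        // the send buffer on every rank, hence the full allocation above.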
        MPI.COMM_WORLD.Scatter(matrix, startOfRow, size, MPI.INT, row, 0, size, MPI.INT, 0);

        // For each column, each process calculates its new value.
        for (int col_idx = 0; col_idx < size; col_idx++)
        {
            // Get the value of this column in the row above (rank - 1).
            if (rank > 0)
                MPI.COMM_WORLD.Recv(receiveBuffer, 0, 1, MPI.INT, rank - 1, 0);
            // Get the value to the left of the current column.
            int left = col_idx == 0 ? 0 : row[col_idx - 1];

            // Update our value for this column.
            row[col_idx] = row[col_idx] + left + receiveBuffer[0];

            // Pass the updated value down to the row below (rank + 1).
            sendBuffer[0] = row[col_idx];
            if (rank + 1 < size)
                MPI.COMM_WORLD.Send(sendBuffer, 0, 1, MPI.INT, rank + 1, 0);
        }

        // At this point every process is done, so we gather the rows back at the root.
        MPI.COMM_WORLD.Gather(row, 0, size, MPI.INT, matrix, startOfRow, size, MPI.INT, 0);

        // Let the master show the result.
        if (rank == 0)
            for (int row_idx = 0; row_idx < size; ++row_idx)
            {
                for (int col_idx = 0; col_idx < size; ++col_idx)
                    System.out.print(matrix[size * row_idx + col_idx] + " ");
                System.out.println();
            }

        MPI.Finalize(); // Don't forget!!
    }
}
  • Can you please provide details about which MPJ Express version you are using? – Aleem Jan 04 '15 at 11:24
  • I tested your code on my laptop with MPJ Express ver 0.43 and it executed fine. However, I had to change the `size` value to `world_size` to make it work. These are the outputs I got for np 2 and np 4 respectively: `1 1 1 2` and `1 1 1 1 1 2 3 4 1 3 6 10 1 4 10 20`. – Aleem Jan 04 '15 at 11:36
  • @Aleem: Yes, the code in fact works. My MPJ version is 0.43 as well. The point is that in this code, `matrix = new int[size * size];` is outside of the `if` test, making each process allocate space for the entire array. I want to put it **in** the if-test, and in the else branch I would like to put `matrix = null` or something, so that the slave processes do not need to allocate the entire array in their JVM. – Christophe De Troyer Jan 04 '15 at 13:39

0 Answers