
I would like to use MPI_Type_vector to scatter the sub-domains of a matrix to each process. For example, the matrix is 5x5 and is decomposed into a 2x2 grid of sub-domains, so the sub-domain dimensions are:

 _____________________
 |         |         |
 |    0    |    1    |
 |  (2,2)  |  (3,2)  |
 |         |         |
 |_________|_________|   5
 |         |         |
 |    2    |    3    |
 |  (2,3)  |  (3,3)  |
 |         |         |
 |_________|_________|

           5

I defined an MPI_Type_vector on each process with its own dimensions. I expected the sizes of the vectors defined on processes 0 and 1 to be different, but their handles are the same, and it looks like MPI uses only one of the defined vectors.

Thanks!

Li

PS: I have implemented this by manually packing and unpacking the data, but I would like to use something more convenient.
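Roughly, this is what I am doing on each process (a minimal sketch; the local block sizes are hard-coded here just for illustration, and MPI_DOUBLE stands in for whatever element type the real matrix uses):

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Illustrative local block sizes for the 2x2 decomposition above:
           ranks 0 and 2 own 2 columns, ranks 1 and 3 own 3 columns;
           ranks 0 and 1 own 2 rows, ranks 2 and 3 own 3 rows. */
        int ncols = (rank % 2 == 0) ? 2 : 3;
        int nrows = (rank < 2) ? 2 : 3;

        /* One block per sub-domain row, stride 5 between row starts
           in the full 5x5 matrix. */
        MPI_Datatype subdomain;
        MPI_Type_vector(nrows, ncols, 5, MPI_DOUBLE, &subdomain);
        MPI_Type_commit(&subdomain);

        /* ... scatter/gather using "subdomain" ... */

        MPI_Type_free(&subdomain);
        MPI_Finalize();
        return 0;
    }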


2 Answers


MPI_Datatype is just a handle that you can pass around; it doesn't directly contain any information about the type you've made, and looking at the value of the handle doesn't tell you anything about the type either. Most implementations I've seen use ints for this handle, incrementing by one for each user-defined datatype, so I'm not surprised that your two vector datatype handles have the same value on different cores if each is the first datatype declared on its core.
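If you want to convince yourself that the types really are different even though the handles look the same, query the type rather than the handle. A rough sketch (the block sizes and MPI_DOUBLE are just placeholders):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Each core builds a vector with a different block length. */
        int blocklen = (rank == 0) ? 2 : 3;

        MPI_Datatype vec;
        MPI_Type_vector(2, blocklen, 5, MPI_DOUBLE, &vec);
        MPI_Type_commit(&vec);

        /* Ask the library about the type instead of inspecting the handle:
           the sizes differ across cores even if the handle values match. */
        int size;
        MPI_Aint lb, extent;
        MPI_Type_size(vec, &size);
        MPI_Type_get_extent(vec, &lb, &extent);
        printf("rank %d: type size = %d bytes, extent = %ld bytes\n",
               rank, size, (long)extent);

        MPI_Type_free(&vec);
        MPI_Finalize();
        return 0;
    }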

To come back to your main question about domain decomposition: communication between cores will fail if the send and the matching receive describe different amounts of data -- the sending core and the receiving core need to agree on how many elements are being transferred. So the sending core needs to use a type (and count) that corresponds to the amount of data the receiving core is expecting to receive.
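As a rough illustration of that matching (assuming at least two cores, with core 0 holding the full 5x5 matrix as doubles): the sender describes core 1's 3-wide, 2-high block with a vector type, and the receiver simply asks for the same number of contiguous elements.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            double A[25];
            for (int i = 0; i < 25; i++) A[i] = i;

            /* Type describing core 1's block inside the full matrix:
               2 rows of 3 doubles, stride 5 between row starts. */
            MPI_Datatype block;
            MPI_Type_vector(2, 3, 5, MPI_DOUBLE, &block);
            MPI_Type_commit(&block);

            /* Send one such block, starting at row 0, column 2. */
            MPI_Send(&A[2], 1, block, 1, 0, MPI_COMM_WORLD);
            MPI_Type_free(&block);
        } else if (rank == 1) {
            /* The receiver just expects 6 contiguous doubles -- the amount
               of data matches what the sender's vector type describes. */
            double buf[6];
            MPI_Recv(buf, 6, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received, first value = %g\n", buf[0]);
        }

        MPI_Finalize();
        return 0;
    }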

And in terms of clean domain decomposition, I would recommend using the MPI_Cart functions (there's a Web 1.0 tutorial here).
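As a minimal sketch of what that looks like (the grid shape here is just whatever MPI_Dims_create picks for the number of cores you run with):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int nprocs, rank;
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* Let MPI pick a 2D process grid and build a Cartesian communicator. */
        int dims[2] = {0, 0}, periods[2] = {0, 0};
        MPI_Dims_create(nprocs, 2, dims);

        MPI_Comm cart;
        MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 1, &cart);

        /* Each rank can then query its (row, col) position in the grid. */
        int coords[2];
        MPI_Comm_rank(cart, &rank);
        MPI_Cart_coords(cart, rank, 2, coords);
        printf("rank %d sits at (%d, %d) in a %d x %d grid\n",
               rank, coords[0], coords[1], dims[0], dims[1]);

        MPI_Comm_free(&cart);
        MPI_Finalize();
        return 0;
    }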

hcarver
  • Thanks for providing the information about "MPI_Cart". It will ease the handling of domain decomposition with Cartesian topology. – Li Dong Sep 26 '12 at 12:41

MPI handles are local to the process where they are registered and should only be treated as opaque types - you should never ever decide on anything based on the actual value of the handle and you should only compare the objects behind handles using the comparison functions that MPI provides (e.g. MPI_Comm_compare). In Open MPI for example, MPI_Datatype is a pointer to an ompi_datatype_t structure for the C bindings and an INTEGER index in a pointer table for the Fortran bindings.
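For example, a duplicated communicator gets a handle value of its own, yet MPI_Comm_compare still tells you how the underlying objects relate (a small illustrative sketch):

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        /* The duplicate has its own handle value, but the comparison
           function tells you how the two objects actually relate. */
        MPI_Comm dup;
        MPI_Comm_dup(MPI_COMM_WORLD, &dup);

        int result;
        MPI_Comm_compare(MPI_COMM_WORLD, dup, &result);
        if (result == MPI_CONGRUENT)
            printf("same group and rank order, different contexts\n");

        MPI_Comm_free(&dup);
        MPI_Finalize();
        return 0;
    }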

If your subdomains were equally sized (e.g. all 2x2), a nice hack with resized MPI datatypes would let you use MPI_Scatterv/MPI_Gatherv to scatter/gather them. Because your subdomains have different sizes, the single collective call you need is MPI_Alltoallw with carefully provided arguments; the same call with the send and receive arguments exchanged also implements the gather.
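Here is a rough sketch of such an MPI_Alltoallw scatter for the exact decomposition in your question (it assumes 4 processes, a 5x5 row-major matrix of doubles living on rank 0, and illustrative variable names):

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define N 5   /* the global matrix is N x N, stored row-major on rank 0 */

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, nprocs;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        if (nprocs != 4)
            MPI_Abort(MPI_COMM_WORLD, 1);   /* sketch assumes the 2x2 grid */

        /* Block geometry of the decomposition in the question. */
        const int nrows[4]   = {2, 2, 3, 3};
        const int ncols[4]   = {2, 3, 2, 3};
        const int row_off[4] = {0, 0, 2, 2};
        const int col_off[4] = {0, 2, 0, 2};

        /* Global matrix lives only on rank 0. */
        double A[N * N];
        if (rank == 0)
            for (int i = 0; i < N * N; i++)
                A[i] = (double)i;

        /* Contiguous local buffer for this rank's block. */
        double *local = malloc(nrows[rank] * ncols[rank] * sizeof(double));

        int sendcounts[4] = {0, 0, 0, 0}, sdispls[4] = {0, 0, 0, 0};
        int recvcounts[4] = {0, 0, 0, 0}, rdispls[4] = {0, 0, 0, 0};
        MPI_Datatype sendtypes[4], recvtypes[4], blocktype[4];

        for (int r = 0; r < 4; r++) {
            sendtypes[r] = MPI_DOUBLE;   /* must be valid even for count 0 */
            recvtypes[r] = MPI_DOUBLE;
            blocktype[r] = MPI_DATATYPE_NULL;
        }

        if (rank == 0) {
            /* Root sends one vector-typed block to each rank; Alltoallw
               takes displacements in bytes, so no resizing trick is needed. */
            for (int r = 0; r < 4; r++) {
                MPI_Type_vector(nrows[r], ncols[r], N, MPI_DOUBLE,
                                &blocktype[r]);
                MPI_Type_commit(&blocktype[r]);
                sendtypes[r]  = blocktype[r];
                sendcounts[r] = 1;
                sdispls[r]    = (row_off[r] * N + col_off[r])
                                * (int)sizeof(double);
            }
        }

        /* Every rank receives its block from rank 0 as plain doubles. */
        recvcounts[0] = nrows[rank] * ncols[rank];

        MPI_Alltoallw(A, sendcounts, sdispls, sendtypes,
                      local, recvcounts, rdispls, recvtypes, MPI_COMM_WORLD);

        printf("rank %d received %d values, first = %g\n",
               rank, recvcounts[0], local[0]);

        if (rank == 0)
            for (int r = 0; r < 4; r++)
                MPI_Type_free(&blocktype[r]);
        free(local);
        MPI_Finalize();
        return 0;
    }

Run with four processes (e.g. mpirun -np 4), the ranks should report 4, 6, 6 and 9 received values, matching the block sizes in your figure.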

Hristo Iliev