
I am trying to send variables of type mpfr_t using MPI_Scatter. For example:

mpfr_t *v1 = new mpfr_t[10];  
mpfr_t *v2 = new mpfr_t[10];   
MPI_Scatter(v1, 5, MPI_BYTE, v2, 5, MPI_BYTE, 0, MPI_COMM_WORLD ); 
for (int i = 0; i < 5; i++) 
    mpfr_printf("value rank %d -  %RNf \n", ProcRank, v2[i]);

It prints:

value rank 0 - nan
value rank 0 - nan
value rank 0 - nan
.....
value rank 1 - nan
value rank 0 - nan

But it works with MPI_Bcast. What am I doing wrong? The code is C/C++, the MPI library is OpenMPI-1.6.

Sidny Sho
  • You seem to have forgotten to multiply the sendcount by the size of mpfr_t. Why are you allocating arrays of 10 if you have only 5 processes? – Dima Chubarov Jun 16 '12 at 15:34
  • Yes, the size of the array is 5. If I multiply the count by sizeof(mpfr_t), it works. Thanks. How can I override reduce functions, for example MPI_MINLOC and MPI_MAXLOC, for variables of mpfr type? – Sidny Sho Jun 16 '12 at 22:23
  • You should define your own reduction operators and then register them using `MPI_Op_create()`. See [here](http://www.mpi-forum.org/docs/mpi-11-html/node80.html); a minimal sketch follows these comments. – Hristo Iliev Jun 17 '12 at 21:01
  • What does this have to do with reduction operators? MPI_Scatter does not use them at all. – timos Jun 19 '12 at 00:07
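
For reference, here is a minimal sketch (not from the thread) of what registering a user-defined operator with MPI_Op_create() looks like, using a plain double/int pair for a MINLOC-style reduction. The struct and function names are illustrative only, and mpfr values cannot be fed to such an operator directly because of the heap-pointer issue discussed in the answers below; they would first have to be serialized into a fixed-size representation.

#include <cstddef>
#include <cstdio>
#include <mpi.h>

// Illustrative value/rank pair for a MINLOC-style reduction.
struct MinLoc {
    double value;
    int rank;
};

// User-defined combine function: keep the element with the smaller value.
void minloc_fn(void *in, void *inout, int *len, MPI_Datatype *) {
    const MinLoc *a = static_cast<const MinLoc *>(in);
    MinLoc *b = static_cast<MinLoc *>(inout);
    for (int i = 0; i < *len; ++i)
        if (a[i].value < b[i].value)
            b[i] = a[i];
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Describe struct MinLoc to MPI.
    int blocklens[2] = {1, 1};
    MPI_Aint displs[2] = {offsetof(MinLoc, value), offsetof(MinLoc, rank)};
    MPI_Datatype types[2] = {MPI_DOUBLE, MPI_INT};
    MPI_Datatype minloc_type;
    MPI_Type_create_struct(2, blocklens, displs, types, &minloc_type);
    MPI_Type_commit(&minloc_type);

    // Register the user-defined reduction (commutative).
    MPI_Op minloc_op;
    MPI_Op_create(minloc_fn, 1, &minloc_op);

    MinLoc local{1.0 / (rank + 1), rank};
    MinLoc global;
    MPI_Reduce(&local, &global, 1, minloc_type, minloc_op, 0, MPI_COMM_WORLD);
    if (rank == 0)
        std::printf("min value %g found on rank %d\n", global.value, global.rank);

    MPI_Op_free(&minloc_op);
    MPI_Type_free(&minloc_type);
    MPI_Finalize();
    return 0;
}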

2 Answers


You specified the sendcount as 5 and the datatype as MPI_BYTE. This seems odd. If you want to use MPI_BYTE and send 5 mpfr_t values, specify a sendcount of 5*sizeof(mpfr_t). Another option would be to create your own MPI derived datatype (if you want to get rid of the sizeof()).
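
A minimal sketch of that fix (not from the original answer), showing both the byte-count variant and a contiguous derived datatype. Note the caveat raised in the comments below: an mpfr_t holds a pointer to heap-allocated data, so copying the raw struct between address spaces is not safe in general.

mpfr_t *v1 = new mpfr_t[10];
mpfr_t *v2 = new mpfr_t[10];

// Option 1: keep MPI_BYTE, but count whole structs
MPI_Scatter(v1, 5 * sizeof(mpfr_t), MPI_BYTE,
            v2, 5 * sizeof(mpfr_t), MPI_BYTE, 0, MPI_COMM_WORLD);

// Option 2: a derived datatype covering one mpfr_t, so the count stays 5
MPI_Datatype mpi_mpfr;
MPI_Type_contiguous(sizeof(mpfr_t), MPI_BYTE, &mpi_mpfr);
MPI_Type_commit(&mpi_mpfr);
MPI_Scatter(v1, 5, mpi_mpfr, v2, 5, mpi_mpfr, 0, MPI_COMM_WORLD);
MPI_Type_free(&mpi_mpfr);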

timos
  • Is the entirety of the MPFR data encapsulated in that allocation? I've never been sure that part of that struct wasn't a pointer to heap memory. – Jeff Hammond Oct 26 '15 at 03:46
  • @JeffHammond: An MPFR number is a one-element array of a `struct` that has 4 fields. One of them, `_mpfr_d`, is unfortunately a pointer to an array of "limbs" allocated on the heap (sketched below). See https://www.mpfr.org/mpfr-current/mpfr.html#Internals . – András Aszódi Dec 11 '20 at 13:07
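
For reference, the internal layout documented on that page looks roughly like this (field names as in mpfr.h; the layout is an implementation detail and may change):

typedef struct {
    mpfr_prec_t  _mpfr_prec;   /* precision in bits */
    mpfr_sign_t  _mpfr_sign;   /* sign */
    mpfr_exp_t   _mpfr_exp;    /* exponent */
    mp_limb_t   *_mpfr_d;      /* pointer to heap-allocated limbs (the significand) */
} __mpfr_struct;

typedef __mpfr_struct mpfr_t[1];   /* an mpfr_t is a one-element array of this struct */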

As @LaryxDecidua already pointed out, MPFR numbers use dynamic memory on the heap and thus cannot be used in MPI operations directly. However, if you can use MPFR version 4.0.0 or later, you can call the MPFR function mpfr_fpif_export to serialize numbers into a linear memory buffer and send that buffer with MPI instead. The receiving end can restore the original numbers by calling mpfr_fpif_import. Both functions operate on FILE * handles, so you also need open_memstream and/or fmemopen to back a FILE * stream with a memory buffer. Unfortunately, the length of a serialized number depends not only on the precision but also on the value (e.g. the value 0 occupies fewer bytes than other numbers), so collectives like MPI_Scatter or MPI_Gather, which need fixed buffer sizes for every rank, won't work. Use MPI_Send and MPI_Recv pairs instead.

This is a complete C++17 example using mpreal, which provides a nice interface to MPFR numbers for C++ code bases:

#include <cmath>
#include <cstdio>
#include <cstring>
#include <iostream>
#include <mpi.h>
#include <mpreal.h>
#include <numeric>
#include <optional>
#include <vector>


int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    int world_size;
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);


    std::vector<mpfr::mpreal> real_vec{world_rank, M_PI};

    // Serialize the mpreal vector to memory (send_buf)
    char *send_buf;
    size_t send_buf_size;
    FILE *real_stream = open_memstream(&send_buf, &send_buf_size);
    for (auto &real : real_vec) {
        mpfr_fpif_export(real_stream, real.mpfr_ptr());
    }
    fclose(real_stream);

    // Gather the buffer length of all processes
    std::optional<std::vector<size_t>> send_buf_size_vec;
    if (world_rank == 0) {
        send_buf_size_vec = std::vector<size_t>(world_size);
    }
    MPI_Gather(&send_buf_size, 1, MPI_UNSIGNED_LONG, (send_buf_size_vec ? send_buf_size_vec->data() : nullptr), 1,
               MPI_UNSIGNED_LONG, 0, MPI_COMM_WORLD);

    if (world_rank != 0) {
        // Send the serialized mpreal vector to rank 0
        MPI_Send(send_buf, send_buf_size, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
    } else {
        // Create a recv buffer which can hold the data from all processes
        size_t recv_buf_size = std::accumulate(send_buf_size_vec->begin(), send_buf_size_vec->end(), 0UL);
        std::vector<char> recv_buf(recv_buf_size);
        auto all_buf_it = recv_buf.begin();

        // Directly copy the send buffer of process 0
        std::memcpy(&*all_buf_it, send_buf, send_buf_size);
        all_buf_it += (*send_buf_size_vec)[0];

        // Receive serialized numbers from all other ranks
        MPI_Status status;
        for (int i = 1; i < world_size; ++i) {
            MPI_Recv(&*all_buf_it, (*send_buf_size_vec)[i], MPI_BYTE, i, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
            all_buf_it += (*send_buf_size_vec)[i];
        }

        // Import all mpreals from the receive buffer
        real_stream = fmemopen(recv_buf.data(), recv_buf.size(), "rb");
        std::vector<mpfr::mpreal> all_real_vec(world_size * real_vec.size());
        for (auto &real : all_real_vec) {
            mpfr_fpif_import(real.mpfr_ptr(), real_stream);
        }
        fclose(real_stream);

        // Print all received values
        std::cout << "Read values:" << std::endl;
        for (auto &real : all_real_vec) {
            std::cout << real << std::endl;
        }
    }

    MPI_Finalize();

    return 0;
}
IngoMeyer