
I'm faced with dividing a 2D fluid dynamics model amongst a number of processors. The model is represented in a number of arrays, and each processor gets an n x n block of the array to work on, but each cell is affected by its horizontally and vertically adjacent cells, so communication between adjacent processors is required. My current plan is to manually specify the boundaries as a contiguous datatype for the top and bottom rows and vector types for the sides, but it occurred to me that this is probably a common use case for MPI (given how many 'game of life' examples there are), so is there a cleverer way of communicating array "boundary data" to adjacent processors? Thanks.
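
For reference, here is a minimal sketch of the approach I have in mind. The block size `n`, the one-cell halo layout, and the neighbour ranks are placeholders (in real code the neighbours would come from something like `MPI_Cart_shift` on a Cartesian communicator):

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int n = 64;                       /* interior block size (placeholder)   */
    int stride = n + 2;               /* local row length including the halo */
    double *a = calloc(stride * stride, sizeof(double));

    /* A row of n interior cells is contiguous in memory. */
    MPI_Datatype row_t;
    MPI_Type_contiguous(n, MPI_DOUBLE, &row_t);
    MPI_Type_commit(&row_t);

    /* A column of n interior cells is strided by the padded row length. */
    MPI_Datatype col_t;
    MPI_Type_vector(n, 1, stride, MPI_DOUBLE, &col_t);
    MPI_Type_commit(&col_t);

    /* Neighbour ranks would normally come from MPI_Cart_shift;
       MPI_PROC_NULL here so the sketch is self-contained. */
    int up = MPI_PROC_NULL, down = MPI_PROC_NULL;
    int left = MPI_PROC_NULL, right = MPI_PROC_NULL;

    #define AT(i, j) (a + (i) * stride + (j))

    /* Exchange top/bottom rows, then left/right columns:
       send an interior edge, receive into the opposite halo. */
    MPI_Sendrecv(AT(1, 1),     1, row_t, up,    0,
                 AT(n + 1, 1), 1, row_t, down,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(AT(n, 1),     1, row_t, down,  1,
                 AT(0, 1),     1, row_t, up,    1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(AT(1, 1),     1, col_t, left,  2,
                 AT(1, n + 1), 1, col_t, right, 2,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(AT(1, n),     1, col_t, right, 3,
                 AT(1, 0),     1, col_t, left,  3,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    MPI_Type_free(&row_t);
    MPI_Type_free(&col_t);
    free(a);
    MPI_Finalize();
    return 0;
}
```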

Chironex
  • In terms of MPI data types, it doesn't get any cleverer than this. – Hristo Iliev Feb 24 '13 at 09:24
  • Sad face. Thanks for the confirmation. – Chironex Feb 24 '13 at 12:55
  • Why the sad face? Just build your own abstraction on top of MPI that allows you to easily manipulate distributed arrays with halos. Libraries like [Global Arrays](http://www.emsl.pnl.gov/docs/global/) already provide that, but at the cost of higher overhead (although GA's manual explicitly warns against using it in cases like yours). – Hristo Iliev Feb 24 '13 at 14:17

0 Answers