I'm asking on behalf of a friend working in numerical astrophysics.

Basically, what he's doing is simulating a cloud of gas. There are a finite number of cells, and the timestep is defined such that gas cannot cross more than one cell per step. Each cell has properties like density and temperature, and each timestep these (and position) need to be recalculated. Position is the main issue, I believe, since it is driven primarily by the gravitational interactions among the cells, all of which affect each other.
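To make that concrete, the gravity part of the position update is an all-pairs sum over the cells. Here's a rough CUDA-style sketch of what I imagine that kernel would look like (all the names and constants here, `Body`, `computeAccel`, `G`, `EPS`, are made up for illustration, not from his actual code):

```cuda
// Illustrative all-pairs gravity kernel: one thread per cell, each
// summing the pull of every other cell (O(N^2) work per step).
#include <cuda_runtime.h>

struct Body { float x, y, z, mass; };

__global__ void computeAccel(const Body* bodies, float3* accel, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    const float G   = 6.674e-8f;   // gravitational constant (cgs)
    const float EPS = 1e-3f;       // softening length, avoids r -> 0 blowup

    Body bi = bodies[i];
    float3 a = make_float3(0.0f, 0.0f, 0.0f);
    for (int j = 0; j < n; ++j) {  // every cell pulls on every other cell
        Body bj = bodies[j];
        float dx = bj.x - bi.x;
        float dy = bj.y - bi.y;
        float dz = bj.z - bi.z;
        float r2 = dx * dx + dy * dy + dz * dz + EPS * EPS;
        float invR3 = rsqrtf(r2 * r2 * r2);   // 1 / r^3
        float s = G * bj.mass * invR3;
        a.x += s * dx;
        a.y += s * dy;
        a.z += s * dz;
    }
    accel[i] = a;
}
```

One thread per cell, with each thread looping over all N cells, is exactly the kind of uniform, data-parallel work that seems like it should map well to many cores.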

At the moment he's running this on a cluster of ~150 nodes, but I wondered: if it's parallelizable like this, could it run faster on a few GPUs with CUDA? A simulation currently takes him a couple of days to finish. Since GPUs generally have ~500 cores, it seemed like they could provide a boost.

Maybe I'm totally wrong.

user478250
  • Are there similar programs already written for CUDA/OpenCL? How do they differ? – pst Aug 03 '12 at 06:45
  • @pst It looks like there are some fluid dynamics examples around, but each cell only interacts with its adjoining cells, whereas in reality the gravitational interaction occurs between all cells. – user478250 Aug 03 '12 at 10:00
  • Yes, astrophysics simulations are very appropriate for running on GPGPU hardware and even FPGAs. See project [GRACE](http://www.ari.uni-heidelberg.de/grace/). – Hristo Iliev Aug 03 '12 at 11:34

3 Answers

Yes, this sounds like a decent application for a GPU. GPU processing is most effective when it runs the same function over a large data set. Since you've already got it running in parallel on a cluster, I'd say write it, test it on a single graphics card, and see whether that beats a single cluster node, then scale accordingly.
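For the single-card test, CUDA events are a simple way to time a kernel. A minimal self-contained sketch (`stepKernel` is just a stand-in for whatever part of the simulation gets ported first; the sizes are arbitrary):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Stand-in for whichever simulation kernel gets ported first
// (not a real kernel from the question's code).
__global__ void stepKernel(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] = data[i] * 0.5f + 1.0f;
}

int main()
{
    const int n = 1 << 20;                     // e.g. ~1M cells
    float* d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));  // real code would upload initial state

    dim3 threads(256);
    dim3 blocks((n + threads.x - 1) / threads.x);

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start);
    stepKernel<<<blocks, threads>>>(d_data, n);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);                // wait for the kernel to finish

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);
    printf("one step: %.3f ms for %d cells\n", ms, n);

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_data);
    return 0;
}
```

Compare that per-step time against one node of the cluster before deciding how far to scale.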

VoronoiPotato

The task you describe is a good fit for the GPU. GPUs have been used successfully to dramatically improve performance in areas such as particle, aerodynamic, and fluid simulations.

Roger Dahl

Without knowing more details about the simulation, it's impossible to say for sure whether it would gain a performance boost. Broadly speaking, algorithms that are memory bound (that is, relatively few arithmetic operations per memory transaction) tend to benefit most from offloading to the GPU.
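To make 'arithmetic operations per memory transaction' concrete, here are two toy kernels (illustrative only, not from any real code): the first does about 2 FLOPs for every three global memory accesses, the second does hundreds of FLOPs per value loaded:

```cuda
#include <cuda_runtime.h>

// Memory-bound: ~2 FLOPs against 3 global memory accesses per element.
__global__ void memBound(const float* x, float* y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = 2.0f * x[i] + y[i];
}

// Compute-bound: 512 FLOPs on a single loaded value per element.
__global__ void computeBound(const float* x, float* y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = x[i];
        for (int k = 0; k < 256; ++k)
            v = v * 1.0001f + 0.5f;   // 2 FLOPs per iteration
        y[i] = v;
    }
}
```

An all-pairs gravity sum does a couple of dozen FLOPs per body loaded, so it leans towards the compute-bound end of this spectrum.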

For astrophysics simulations specifically, the following link may be of use: http://www.astrogpu.org/

Andrew Marshall
    "Broadly speaking, algorithms that are memory bound ( that is, relatively few arithmetic operations per memory transaction ) tend to benefit most from offloading to the GPU." It's the other way around -- It's compute bound algorithms that tend to benefit the most from being ported to the GPU. – Roger Dahl Aug 03 '12 at 19:03
  • What I should have said is that applications with high data-level parallelism are more likely to benefit, which would have been a more accurate statement. GPUs generally have a higher bandwidth-to-FLOP ratio than CPUs, so memory-bound kernels will benefit, but the overall picture is complicated. See [Debunking the 100X GPU vs. CPU myth: an evaluation of throughput computing on CPU and GPU](http://dl.acm.org/citation.cfm?id=1816021) for research on the issue. – Andrew Marshall Aug 08 '12 at 08:32