I have been looking at HPX (https://github.com/STEllAR-GROUP/hpx) as a potential mechanism for making applications more scalable.
I believe HPX is primarily targeted at (and therefore optimised for) the HPC community, who typically have clusters of nodes with many cores and fast interconnects between them. The ParalleX model doesn't require this, but of course performance will degrade due to the higher cost of passing data between nodes.
On the other end of the spectrum we have a suite of Java frameworks including Hadoop, Spark & Flink. These come out of the commercial community and address different sorts of workloads.
So what would you weigh up if you were choosing between them (ignoring C++ vs Java flame wars)?
Considering purely performance, how do they compare in terms of overheads?
Granted, it depends heavily on the kind of problem you are trying to solve. I'd like to understand the trade-offs better.