I need a way to compute the inverse of a matrix stored in a distributed Spark data type. The data is purely numerical, and a method that works on RowMatrix, DataFrame, or RDD representations would be incredibly useful. There are some Stack Overflow posts on related problems, but they rely on converting to local data types, which is simply not feasible at the scale of data I am handling.
I've looked into Breeze for Scala and Spark's DenseMatrix, but these appear to be local (non-distributed) types and may not scale as needed.
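To be concrete, the pattern I'm trying to avoid looks roughly like this (a minimal sketch, assuming Breeze is on the classpath; `localInverse` is just an illustrative name):

```scala
import breeze.linalg.{DenseMatrix, inv}
import org.apache.spark.mllib.linalg.distributed.RowMatrix

// Collects the entire distributed matrix onto the driver and inverts it
// there -- exactly the step that is infeasible for my data size.
def localInverse(mat: RowMatrix): DenseMatrix[Double] = {
  // collect() pulls every row to driver memory
  val rows = mat.rows.collect().map(_.toArray)
  val local = DenseMatrix(rows: _*)
  inv(local) // Breeze inverse, computed entirely on the driver
}
```

Everything after `collect()` runs on a single machine, so this gives no benefit from the cluster; I'm looking for an approach where the inversion itself stays distributed.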