
Spark now has two machine learning libraries - Spark MLlib and Spark ML. They overlap somewhat in what they implement, but as I understand it (as someone new to the whole Spark ecosystem), Spark ML is the way to go and MLlib is still around mostly for backward compatibility.

My question is very concrete and related to PCA. In the MLlib implementation there seems to be a limitation on the number of columns:

spark.mllib supports PCA for tall-and-skinny matrices stored in row-oriented format and any Vectors.

Also, if you look at the Java code example, there is this note:

The number of columns should be small, e.g., less than 1000.

On the other hand, if you look at the ML documentation, there are no limitations mentioned.

So, my question is: does this limitation also exist in Spark ML? And if so, why does it exist, and is there any workaround that would make it possible to use this implementation even when the number of columns is large?
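
For reference, the spark.ml call I am asking about looks roughly like this (a minimal sketch along the lines of the official example, assuming Spark 2.x, an existing SparkSession named spark, and illustrative column names):

    import org.apache.spark.ml.feature.PCA
    import org.apache.spark.ml.linalg.Vectors

    // Toy data: three rows with five columns each
    val data = Seq(
      Vectors.dense(2.0, 0.0, 3.0, 4.0, 5.0),
      Vectors.dense(4.0, 0.0, 0.0, 6.0, 7.0),
      Vectors.dense(6.0, 1.0, 3.0, 8.0, 9.0)
    ).map(Tuple1.apply)
    val df = spark.createDataFrame(data).toDF("features")

    // Project onto the top 2 principal components
    val pca = new PCA()
      .setInputCol("features")
      .setOutputCol("pcaFeatures")
      .setK(2)
      .fit(df)
    pca.transform(df).select("pcaFeatures").show(false)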

Kobe-Wan Kenobi
  • Interesting question. I have seen many other inconsistencies in the mllib documentation. – Rob Oct 26 '16 at 13:02

1 Answer


PCA consists of finding a set of decorrelated random variables with which you can represent your data, sorted in decreasing order of the amount of variance they retain.

These variables can be found by projecting your data points onto a specific orthogonal subspace. If your (mean-centered) data matrix is X, this subspace is spanned by the eigenvectors of X^T X.

When X is large, say of dimensions n x d, you can compute X^T X by computing the outer product of each row of the matrix by itself, then adding all the results up. This is of course amenable to a simple map-reduce procedure if d is small, no matter how large n is. That's because the outer product of each row by itself is a d x d matrix, which will have to be manipulated in main memory by each worker. That's why you might run into trouble when handling many columns.
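
A minimal sketch of that first approach, assuming the rows of the mean-centered X are available as an RDD of Breeze vectors (the names rows and gramian are illustrative, not Spark's own API):

    import breeze.linalg.{DenseMatrix, DenseVector}
    import org.apache.spark.rdd.RDD

    // X^T X as a sum of per-row outer products: every worker has to hold
    // d x d matrices in memory, which is why d must stay small.
    def gramian(rows: RDD[DenseVector[Double]]): DenseMatrix[Double] =
      rows
        .map(v => v * v.t)   // outer product of one row with itself (d x d)
        .treeReduce(_ + _)   // summed across partitions and workers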

If the number of columns is large (and the number of rows not so much so) you can still compute PCA. Instead of diagonalizing the huge d x d matrix X^T X, compute the eigendecomposition of the much smaller n x n matrix X X^T, then multiply your (mean-centered) transposed data matrix by the resulting eigenvectors and by the inverse of the diagonal matrix of singular values (the square roots of those eigenvalues). There's your orthogonal subspace.
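
A local (non-distributed) sketch of that second trick with Breeze, assuming a mean-centered n x d matrix x with n much smaller than d and nonzero eigenvalues (the function name is illustrative):

    import breeze.linalg.{diag, eigSym, DenseMatrix}
    import breeze.numerics.sqrt

    // Diagonalize the small n x n matrix X X^T instead of the d x d X^T X,
    // then map its eigenvectors back to the principal directions of X^T X.
    def principalDirections(x: DenseMatrix[Double]): DenseMatrix[Double] = {
      val small = x * x.t                 // n x n, cheap when n is small
      val es    = eigSym(small)           // eigenvalues/eigenvectors of X X^T
      val sigma = sqrt(es.eigenvalues)    // singular values of X
      x.t * es.eigenvectors * diag(sigma.map(1.0 / _))   // d x n orthonormal directions
    }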

Bottom line: if the spark.ml implementation follows the first approach every time, then the limitation should be the same. If they check the dimensions of the input dataset to decide whether they should go for the second approach, then you won't have problems dealing with large numbers of columns if the number of rows is small.

Regardless of that, the limit is imposed by how much memory your workers have, so perhaps they let users hit the ceiling by themselves rather than suggesting a limitation that may not apply to some. That might be the reason why they decided not to mention the limitation in the new docs.

Update: The source code reveals that they do take the first approach every time, regardless of the dimensionality of the input. The actual hard limit on the number of columns is 65,535, and from 10,000 columns on they issue a warning.
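
As a rough sanity check on what those thresholds mean in memory (back-of-the-envelope arithmetic only, assuming the packed upper triangle of the symmetric d x d Gramian is held as 8-byte doubles on a single worker):

    // Bytes needed for the packed upper triangle of a symmetric d x d matrix
    def upperTriangleBytes(d: Long): Long = d * (d + 1) / 2 * 8L

    upperTriangleBytes(1000L)    // ~4 MB   - the "small" case from the docs
    upperTriangleBytes(10000L)   // ~400 MB - where the warning kicks in
    upperTriangleBytes(65535L)   // ~17 GB  - the hard limit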

cangrejo
  • Thanks for your answer, sorry for my late response. So in the end, do you happen to know which approach they implemented - both approaches, or only the first one (i.e. does the limit exist)? And why did they pick 1,000 columns? That's about 8 MB of data (10^3 x 10^3 entries at 8 bytes per double), if I'm not wrong, which should fit in the memory of any executor. – Kobe-Wan Kenobi Nov 03 '16 at 11:05
  • A look at the code is enlightening. In MLlib they compute X^T X using a BLAS operation for the outer product of the rows, i.e. the first approach. I see no indication that they do a check in order to adopt the second approach. They do check a couple of things, though: first, that the number of columns is less than 65536, just to be able to compute the necessary allocation for the upper half of the matrix (which is symmetric). Second, that the number of columns is less than 10,000. Otherwise they just issue a warning regarding the necessary memory. – cangrejo Nov 03 '16 at 13:55
  • As to why they chose to set the recommended limit at 1000 in the docs, well, maybe they just chose a more or less reasonable number under which no one should expect any trouble, without too much rigour. Even though any worker can take a matrix of that size nowadays, it's often advised to avoid too big map tasks, so maybe that's why they chose that number. – cangrejo Nov 03 '16 at 13:57
  • Oh, and in spark.ml they just call MLlib. – cangrejo Nov 03 '16 at 14:03