
I have a big PySpark DataFrame and I want to get its correlation matrix. I know how to get it with a pandas DataFrame, but my data is too big to convert to pandas, so I need to get the result with a PySpark DataFrame. I searched other similar questions, but the answers don't work for me. Can anybody help me? Thanks!

Data example: [image of sample data]

– Chemmyyu

4 Answers


Welcome to SO!

Example data

I prepared some dummy data for easier replication (perhaps next time you can supply some easy-to-copy data, too ;-)):

import numpy as np
import pandas as pd

data = pd.DataFrame(np.random.random((10, 5)),
                    columns=["x{}".format(x) for x in range(5)])
df = spark.createDataFrame(data)

df.show()

And here is the data:

+-------------------+-------------------+-------------------+-------------------+--------------------+
|                 x0|                 x1|                 x2|                 x3|                  x4|
+-------------------+-------------------+-------------------+-------------------+--------------------+
| 0.9965335347601945|0.09311299224360992| 0.9273393764180728| 0.8523333283310564|  0.5040716744686445|
| 0.2341313103221958| 0.9356109544246494| 0.6377089480113576| 0.8129047787928055| 0.22215891357547046|
| 0.6310473705907303| 0.2040705293700683|0.17329601185489396| 0.9062007987480959| 0.44105687572209895|
|0.27711903958232764| 0.9434521502343274| 0.9300724702792151| 0.9916836130997986|  0.6869145183972896|
| 0.8247010263098201| 0.6029990758603708|0.07266306799434707| 0.6808038838294564| 0.27937146479120245|
| 0.7786370627473335|0.17583334607075107| 0.8467715537463528|   0.67702427694934|  0.8976402177586831|
|0.40620117097757724| 0.5080531043890719| 0.3722402520743703|0.14555317396545808|  0.7954133091360741|
|0.20876805543974553| 0.9755867281355178| 0.7570617946515066| 0.6974893162590945|0.054708580878511825|
|0.47979629269402546| 0.1851379589735923| 0.4786682088989791| 0.6809358266732168|  0.8829180507209633|
| 0.1122983875801804|0.45310988757198734| 0.4713203140134805|0.45333792855503807|  0.9189083355172629|
+-------------------+-------------------+-------------------+-------------------+--------------------+

Solution

There is a correlation function in the ml subpackage pyspark.ml.stat. However, it requires a column of type Vector, so you first need to combine your columns into a single vector column using VectorAssembler and then apply the correlation:

from pyspark.ml.stat import Correlation
from pyspark.ml.feature import VectorAssembler

# convert to vector column first
vector_col = "corr_features"
assembler = VectorAssembler(inputCols=df.columns, outputCol=vector_col)
df_vector = assembler.transform(df).select(vector_col)

# get correlation matrix
matrix = Correlation.corr(df_vector, vector_col)

If you want to get the result as a numpy array (on your driver), you can use the following:

matrix.collect()[0]["pearson({})".format(vector_col)].values

array([ 1.        , -0.66882741, -0.06459055,  0.21802534,  0.00113399,
       -0.66882741,  1.        ,  0.14854203,  0.09711389, -0.5408654 ,
       -0.06459055,  0.14854203,  1.        ,  0.33513733,  0.09001684,
        0.21802534,  0.09711389,  0.33513733,  1.        , -0.37871581,
        0.00113399, -0.5408654 ,  0.09001684, -0.37871581,  1.        ])
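
To turn that flat array back into a labeled square matrix (a frequent follow-up, also raised in the comments below), here is a minimal sketch; it assumes the column order of `df.columns`, which the VectorAssembler preserves:

import pandas as pd

# the flat array holds dim * dim entries of the (symmetric) correlation matrix
cor_np = matrix.collect()[0]["pearson({})".format(vector_col)].values
dim = int(len(cor_np) ** 0.5)
cor_df = pd.DataFrame(cor_np.reshape(dim, dim), columns=df.columns, index=df.columns)
print(cor_df)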
– pansen
  • Thanks a lot. I got the matrix. One more question: do you know how to easily add the header (column names) to the matrix? – Chemmyyu Sep 10 '18 at 07:45
  • The column order should stay the same. Therefore, you can simply use `df.columns` to get the column names. – pansen Sep 10 '18 at 08:00
  • Hello, how would I print the correlation matrix normally? It is output like this: https://i.stack.imgur.com/uBRRU.png – abdoulsn Oct 30 '19 at 01:18
  • @abdoulsn You will have to reshape the numpy array; note that it's a square matrix. One way: `cor_np = matrix.collect()[0]["pearson({})".format(vector_col)].values; dim = int(len(cor_np) ** 0.5); cor_mat = cor_np.reshape((dim, dim))` – Gobryas Apr 17 '20 at 21:40
  • I get an error when executing the second part to see the array – Rotail Jul 15 '20 at 21:34
  • @Rotail What spark version are you using? – pansen Jul 27 '20 at 08:14
  • Note of caution: only run this on numerical columns :) `str_col_names = [item[0] for item in my_df.dtypes if item[1].startswith('string')]; num_col_names = [item for item in my_df.schema.names if item not in str_col_names]` – RndmSymbl May 20 '21 at 08:38
  • @pansen I used your correlation solution (thank you!) and it worked like a charm for many months, but since migrating to Databricks 10.4 (to Spark 3.2.1 from Spark 3.1.2) we've started getting this error: `Py4JJavaError: An error occurred while calling z:org.apache.spark.ml.stat.Correlation.corr.: java.io.StreamCorruptedException: invalid type code: 0B` Can you advise? – Molly G Hickman Aug 08 '22 at 15:45
  • Just one quick note: I was having a SparkException because the values to assemble cannot be null. The solution was skipping the null values: `df_vector = assembler.setHandleInvalid("skip").transform(df).select(vector_col)` – revy Aug 30 '22 at 10:25
  • What if we have `None` values there? It does not handle NA values. – Hadij Feb 13 '23 at 20:14
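
Drawing on the last few comments above, here is a hedged sketch that keeps only non-string columns and skips rows containing nulls before assembling; with the all-numeric dummy data above the string filter is a no-op, but it shows the pattern (handleInvalid="skip" is a standard VectorAssembler parameter):

from pyspark.ml.feature import VectorAssembler
from pyspark.ml.stat import Correlation

# keep only non-string (numeric) columns, per the comment by RndmSymbl
str_cols = [name for name, dtype in df.dtypes if dtype.startswith("string")]
num_cols = [name for name in df.columns if name not in str_cols]

# skip rows with nulls so the assembler does not raise a SparkException
assembler = VectorAssembler(inputCols=num_cols, outputCol="corr_features",
                            handleInvalid="skip")
df_vector = assembler.transform(df).select("corr_features")
matrix = Correlation.corr(df_vector, "corr_features")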

Building on the answer of @pansen, here are two ways to better visualize the result.

1. easy visualization:

import pandas as pd

numeric_variables = df.columns  # the columns fed into the VectorAssembler

matrix = Correlation.corr(df_vector, vector_col).collect()[0][0]
corr_matrix = matrix.toArray().tolist()
corr_matrix_df = pd.DataFrame(data=corr_matrix,
                              columns=numeric_variables,
                              index=numeric_variables)
# on pandas >= 1.3, use .format(precision=2) instead of the deprecated .set_precision(2)
corr_matrix_df.style.background_gradient(cmap='coolwarm').set_precision(2)

[output: correlation matrix rendered with a coolwarm background gradient]



2. better visualization:

import seaborn as sns 
import matplotlib.pyplot as plt

plt.figure(figsize=(16,5))  
sns.heatmap(corr_matrix_df, 
            xticklabels=corr_matrix_df.columns.values,
            yticklabels=corr_matrix_df.columns.values,  cmap="Greens", annot=True)

[output: annotated seaborn heatmap of the correlation matrix]

– Aku
  • Good job! In your first part the variable `numeric_variables` should be `df.columns` I guess. – tharndt Mar 24 '23 at 15:43
  • Yes, I did not keep my two examples fully coherent: `numeric_variables = df.columns`. – Aku Apr 05 '23 at 13:12

Clearer:

from pyspark.ml.stat import Correlation
from pyspark.ml.feature import VectorAssembler

# convert to vector column first
vector_col = "corr_features"
assembler = VectorAssembler(inputCols=df.columns, outputCol=vector_col)
df_vector = assembler.transform(df).select(vector_col)

# compute the correlation and extract it as a square numpy array
matrix = Correlation.corr(df_vector, vector_col)
cor_np = matrix.collect()[0][matrix.columns[0]].toArray()
– user6273920

And following on from @Aku's answer above, a plotly version:

import numpy as np
import pandas as pd
import plotly.graph_objects as go

# df_vector, vector_col, and Correlation come from @pansen's answer above
matrix = Correlation.corr(df_vector, vector_col).collect()[0][0].toArray()

# mask the upper triangle (including the diagonal) to only show one triangle
m = matrix
m[np.triu_indices(m.shape[0], 0)] = None

numeric_columns = df.columns
corr_matrix = m.tolist()
corr_matrix_df = pd.DataFrame(data=corr_matrix,
                              columns=numeric_columns,
                              index=numeric_columns)


labels = corr_matrix_df.columns.values
fig = go.Figure(data=go.Heatmap(
                    z=corr_matrix_df,
                    x = labels,
                    y = labels, 
                    text=corr_matrix_df.round(2),
                    texttemplate="%{text}",
                    textfont={"size":8},
                    colorscale='greens'
                    )
                )
fig.update_xaxes(showticklabels=False)
fig.update_yaxes(autorange="reversed")

fig.show()
– user1420372