I would like to rewrite this from R to PySpark. Any nice-looking suggestions?
array <- c(1,2,3)
dataset <- filter(dataset, !(column %in% array))
In PySpark you can do it like this:
array = [1, 2, 3]
dataframe.filter(dataframe.column.isin(array) == False)
Or using the ~ (NOT) operator:
dataframe.filter(~dataframe.column.isin(array))
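For reference, here is a minimal self-contained sketch of both forms; the column name "column" and the toy data are just assumptions for illustration:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
# hypothetical toy data with a single column named "column"
dataframe = spark.createDataFrame([(1,), (2,), (3,), (4,), (5,)], ["column"])

array = [1, 2, 3]
# both lines keep only the rows whose value is NOT in the array (4 and 5)
dataframe.filter(dataframe.column.isin(array) == False).show()
dataframe.filter(~dataframe.column.isin(array)).show()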
The ~ operator negates the condition:
df_filtered = df.filter(~df["column_name"].isin([1, 2, 3]))
df_result = df[df.column_name.isin([1, 2, 3]) == False]
Slightly different syntax, with a "date" dataset:
toGetDates={'2017-11-09', '2017-11-11', '2017-11-12'}
df= df.filter(df['DATE'].isin(toGetDates) == False)
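As a hedged sketch of the date variant (the data below is made up for illustration), .isin() also accepts a Python set:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
# hypothetical data: DATE stored as yyyy-MM-dd strings
df = spark.createDataFrame([("2017-11-09",), ("2017-11-10",), ("2017-11-12",)], ["DATE"])

toGetDates = {'2017-11-09', '2017-11-11', '2017-11-12'}
# keep only the rows whose DATE is NOT in the set
df = df.filter(df['DATE'].isin(toGetDates) == False)
df.show()  # only 2017-11-10 remains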
You can also use .subtract().
Example:
from pyspark.sql.functions import col
df1 = df.filter(col("column").isin([1, 2, 3]))  # rows to exclude (assumes the column is named "column")
df2 = df.subtract(df1)
This way, df2 contains every row of df that is not in df1.
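A minimal runnable sketch of the subtract approach, with an assumed column name and toy values; note that .subtract() compares whole rows and returns distinct rows only, so it behaves like a set difference:
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1,), (2,), (3,), (4,), (4,)], ["column"])

df1 = df.filter(col("column").isin([1, 2, 3]))  # rows to exclude
df2 = df.subtract(df1)
df2.show()  # only value 4 remains, and the duplicate 4 is collapsed to a single row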
Unpacking with * is not needed, so you can pass the list directly:
values = [1, 2, 3]
dataframe.filter(~dataframe.column.isin(values))
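Both forms are accepted by .isin(), so the unpacked call gives the same result; a small sketch with assumed data:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
dataframe = spark.createDataFrame([(1,), (2,), (4,)], ["column"])

values = [1, 2, 3]
dataframe.filter(~dataframe.column.isin(values)).show()   # pass the list directly
dataframe.filter(~dataframe.column.isin(*values)).show()  # or unpack it; same result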
You can also loop over the array and filter:
array = [1, 2, 3]
for i in array:
    df = df.filter(df["column"] != i)
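If it helps to see it end to end, a small sketch (toy data assumed) showing that the loop keeps the same rows as the isin-based filters above:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1,), (2,), (3,), (4,)], ["column"])

for i in [1, 2, 3]:
    df = df.filter(df["column"] != i)  # chain one filter per excluded value
df.show()  # only the row with value 4 remains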
You can also use the SQL function col() together with .isin():
import pyspark.sql.functions as F
array = [1,2,3]
df = df.filter(~F.col("column_name").isin(array))
This might be useful if you are already using SQL functions and want consistency.
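Put together as a self-contained sketch (the column name "column_name" and the data are assumptions):
import pyspark.sql.functions as F
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([(1,), (2,), (3,), (5,)], ["column_name"])

array = [1, 2, 3]
# negate the membership test with ~ to keep only rows NOT in the array
df = df.filter(~F.col("column_name").isin(array))
df.show()  # only the row with value 5 remains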