
I have a DataFrame in Spark in which one of the columns contains an array. I have written a separate UDF which converts that array to another array containing only its distinct values. See the example below:

Ex: [24, 23, 27, 23] should get converted to [24, 23, 27]

Code:

import numpy as np
from pyspark.sql.functions import udf
from pyspark.sql.types import ArrayType, IntegerType

def uniq_array(col_array):
    x = np.unique(col_array)
    return x

uniq_array_udf = udf(uniq_array, ArrayType(IntegerType()))

Df3 = Df2.withColumn("age_array_unique", uniq_array_udf(Df2.age_array))

In the above code, Df2.age_array is the array column on which I am applying the UDF to produce a new column, "age_array_unique", which should contain only the unique values from the array.

However, as soon as I run the command Df3.show(), I get the error:

net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for numpy.core.multiarray._reconstruct)

Can anyone please let me know why this is happening?

Thanks!

Preyas
    aside: for anyone looking to sum arrays that get similar errors (in pyspark): u_sum = udf(lambda x: sum(x.tolist())). Here x can be a VectorUDT. Posting here as searching for that error yields this page as the first result. – Quetzalcoatl Jun 16 '18 at 20:22

6 Answers

77

The source of the problem is that the object returned from the UDF doesn't conform to the declared type. np.unique not only returns a numpy.ndarray but also converts the numerics to the corresponding NumPy types, which are not compatible with the DataFrame API. You can try something like this:

udf(lambda x: list(set(x)), ArrayType(IntegerType()))

or this (to keep order)

from collections import OrderedDict

udf(lambda xs: list(OrderedDict((x, None) for x in xs)),
    ArrayType(IntegerType()))

instead.
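As a quick sanity check outside Spark, the OrderedDict trick deduplicates while keeping first-seen order (on Python 3.7+ a plain dict would do the same), whereas set() does not guarantee any particular order:

```python
from collections import OrderedDict

def dedup_keep_order(xs):
    # OrderedDict keys act as an ordered set: the first occurrence
    # of each value fixes its position; duplicates are absorbed.
    return list(OrderedDict((x, None) for x in xs))

print(dedup_keep_order([24, 23, 27, 23]))  # [24, 23, 27]
```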

If you really want np.unique you have to convert the output:

udf(lambda x: np.unique(x).tolist(), ArrayType(IntegerType()))
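To see why the .tolist() call matters, note that np.unique returns a sorted numpy.ndarray whose elements are NumPy scalar types, while ndarray.tolist() converts both the container and every element to plain Python types:

```python
import numpy as np

arr = np.unique([24, 23, 27, 23])
# arr is a sorted ndarray; its elements are NumPy scalars, not plain ints
print(type(arr), type(arr[0]))

cleaned = arr.tolist()
print(cleaned)           # [23, 24, 27]
print(type(cleaned[0]))  # <class 'int'>
```

Note that np.unique also sorts the result, so [24, 23, 27, 23] comes back as [23, 24, 27], not [24, 23, 27].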
zero323
  • `numpy` messed me up too. Personally, I was using `hypot`, `radians`, and `cos`, all of which are also available in `math`, whose versions don't have this problem, so I simply switched `from numpy` to `from math` – MichaelChirico May 21 '19 at 10:42
  • 2022 and this is still relevant FYI – andrew Apr 23 '22 at 20:55
9

You need to convert the final value to a Python list. Implement the function as follows:

import numpy as np

def uniq_array(col_array):
    x = np.unique(col_array)
    return list(x)

This is because Spark doesn't understand the NumPy array format. To feed Spark DataFrames a Python object that they understand as an ArrayType, you need to convert the output to a Python list before returning it.
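One subtlety worth knowing here: list() converts only the outer ndarray container, while the elements remain NumPy scalar types; ndarray.tolist() converts the elements to built-in Python types as well, which is why the .tolist() variant in the accepted answer is the safer choice:

```python
import numpy as np

arr = np.unique([24, 23, 27, 23])

as_list = list(arr)
print(type(as_list[0]))    # still a NumPy scalar type

as_tolist = arr.tolist()
print(type(as_tolist[0]))  # <class 'int'>
```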

user1632287
5

I also got this error when my UDF returned a float but I forgot to cast it to a Python float. I needed to do this:

retval = 0.5
return float(retval)
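A minimal illustration of the same point (assuming the value comes out of a NumPy computation such as np.mean, which is a hypothetical stand-in here): the result is a NumPy scalar until it is explicitly cast, and the explicit float() call is what makes it safe to return from a UDF:

```python
import numpy as np

def compute():
    retval = np.mean([0.25, 0.75])  # np.float64, not a plain float
    return float(retval)            # cast before returning from a UDF

print(type(compute()))  # <class 'float'>
```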
Clem Wang
    I got the error "expected zero arguments for construction of ClassDict (for numpy.dtype)" , and fix it in this way – Ethan Wu Aug 18 '21 at 01:57
  • So spark udf must return a primitive data type, not a data type from numpy? – panc Aug 24 '22 at 04:49
1

As of PySpark version 2.4, you can use the array_distinct transformation:
http://spark.apache.org/docs/latest/api/python/pyspark.sql.html#pyspark.sql.functions.array_distinct

ARCrow
1

The below works fine for me:

udf(lambda x: np.unique(x).tolist(), ArrayType(IntegerType()))
0
[x.item() for x in <any numpy array>]

converts it to plain Python.
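For example (a quick check with a small NumPy array; .item() converts each NumPy scalar to the closest built-in Python type):

```python
import numpy as np

arr = np.array([24, 23, 27])
plain = [x.item() for x in arr]  # each .item() yields a built-in int
print(plain)          # [24, 23, 27]
print(type(plain[0])) # <class 'int'>
```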

Cello4ever