
I am trying to run a user-defined function (UDF) on every row of the collect_list(relevance) column in the dataframe, and store the resulting score in a separate column named discountedCumulativeGain.

relevance_df3 looks like this:

+------------+------------------------------------+------------------+
|new_party_id|collect_list(relevance)             |filtered_relevance|
+------------+------------------------------------+------------------+
|A09029493F  |[1, 1, 1, 0, 1, 0, 0, 1, 0, 0]      |10                |
|A09292791U  |[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]   |11                |
|A182C4449C  |[0, 0, 0, 1, 0, 0, 0, 2, 1, 0]      |10                |
|A182C82811  |[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]      |10                |
|A182V64925  |[0, 0, 0, 0, 0, 0, 0, 0, 1, 0]      |10                |
|A182Z90277  |[0, 0, 1, 0, 0, 0, 1, 0, 0, 0]      |10                |
|A18335163I  |[1, 0, 1, 1, 0, 0, 1, 0, 0, 2]      |10                |
|A183M37466  |[1, 1, 1, 1, 1, 1, 0, 1, 0, 1]      |10                |
|A183Q6318H  |[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]      |10                |
|A183T9483A  |[0, 0, 0, 0, 0, 0, 0, 0, 0, 1]      |10                |
|A18418296V  |[2, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1]   |11                |
|A18435574D  |[0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]   |11                |
|A184373144  |[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]|12                |
|A184393490  |[0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0]   |11                |
|A18465367H  |[1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0]   |11                |
|A18482362F  |[1, 1, 1, 1, 1, 0, 1, 1, 2, 1]      |10                |
|A184E8017X  |[1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1]   |11                |
|A184H8816G  |[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]   |11                |
|A184L3021G  |[0, 0, 1, 0, 0, 0, 0, 0, 0, 0]      |10                |
|A184N9870U  |[0, 1, 1, 1, 0, 0, 2, 1, 0, 1]      |10                |
+------------+------------------------------------+------------------+

Below is my function

import numpy as np

def discountedCumulativeGain(result):
    dcg = []
    for idx, val in enumerate(result):
        numerator = 2**val - 1
        # add 2 because Python is 0-indexed
        denominator = np.log2(idx + 2)
        score = numerator / denominator
        dcg.append(score)
    return sum(dcg)

and converting it to a UDF

from pyspark.sql.functions import col, udf
from pyspark.sql.types import FloatType

discountedCumulativeGainUDF = udf(lambda z: discountedCumulativeGain(z), FloatType())

After converting and running

relevance_df4 = relevance_df3.withColumn('discountedCumulativeGain',discountedCumulativeGainUDF(col("collect_list(relevance)")))

I get this error

22/07/25 15:00:33 722 ERROR TaskSetManager: Task 0 in stage 8462.0 failed 4 times; aborting job

I checked online and the syntax doesn't seem to be wrong, so what might be the issue here?

Full Error Traceback

Py4JJavaError                             Traceback (most recent call last)
in engine
----> 1 relevance_df4.show()

/data/cloudera/parcels/CDH/lib/spark/python/pyspark/sql/dataframe.py in show(self, n, truncate, vertical)
    376         """
    377         if isinstance(truncate, bool) and truncate:
--> 378             print(self._jdf.showString(n, 20, vertical))
    379         else:
    380             print(self._jdf.showString(n, int(truncate), vertical))

/usr/local/lib/python3.6/site-packages/py4j/java_gateway.py in __call__(self, *args)
   1284         answer = self.gateway_client.send_command(command)
   1285         return_value = get_return_value(
-> 1286             answer, self.gateway_client, self.target_id, self.name)
   1287 
   1288         for temp_arg in temp_args:

/data/cloudera/parcels/CDH/lib/spark/python/pyspark/sql/utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

/usr/local/lib/python3.6/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    326                 raise Py4JJavaError(
    327                     "An error occurred while calling {0}{1}{2}.\n".
--> 328                     format(target_id, ".", name), value)
    329             else:
    330                 raise Py4JError(

Py4JJavaError: An error occurred while calling o2052.showString.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 8669.0 failed 4 times, most recent failure: Lost task 0.3 in stage 8669.0 (TID 43207, x01gamlpapp56a.vsi.sgp.dbs.com, executor 12): net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for numpy.dtype)
    at net.razorvine.pickle.objects.ClassDictConstructor.construct(ClassDictConstructor.java:23)
    at net.razorvine.pickle.Unpickler.load_reduce(Unpickler.java:707)
    at net.razorvine.pickle.Unpickler.dispatch(Unpickler.java:175)
    at net.razorvine.pickle.Unpickler.load(Unpickler.java:99)
    at net.razorvine.pickle.Unpickler.loads(Unpickler.java:112)
    at org.apache.spark.sql.execution.python.BatchEvalPythonExec$$anonfun$evaluate$1.apply(BatchEvalPythonExec.scala:90)
    at org.apache.spark.sql.execution.python.BatchEvalPythonExec$$anonfun$evaluate$1.apply(BatchEvalPythonExec.scala:89)
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$11$$anon$1.hasNext(WholeStageCodegenExec.scala:624)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:255)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:858)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:858)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$11.apply(Executor.scala:407)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1408)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:413)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1892)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1880)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1879)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1879)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:930)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:930)
    at scala.Option.foreach(Option.scala:257)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:930)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2113)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2062)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2051)
    at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
    at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:741)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2081)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2102)
    at org.apache.spark.SparkContext.runJob(SparkContext.scala:2121)
    at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:365)
    at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:38)
    at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:3383)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2544)
    at org.apache.spark.sql.Dataset$$anonfun$head$1.apply(Dataset.scala:2544)
    at org.apache.spark.sql.Dataset$$anonfun$53.apply(Dataset.scala:3364)
    at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3363)
    at org.apache.spark.sql.Dataset.head(Dataset.scala:2544)
    at org.apache.spark.sql.Dataset.take(Dataset.scala:2758)
    at org.apache.spark.sql.Dataset.getRows(Dataset.scala:254)
    at org.apache.spark.sql.Dataset.showString(Dataset.scala:291)
    at sun.reflect.GeneratedMethodAccessor151.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
    at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
    at py4j.Gateway.invoke(Gateway.java:282)
    at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
    at py4j.commands.CallCommand.execute(CallCommand.java:79)
    at py4j.GatewayConnection.run(GatewayConnection.java:238)
    at java.lang.Thread.run(Thread.java:748)
Caused by: net.razorvine.pickle.PickleException: expected zero arguments for construction of ClassDict (for numpy.dtype)
    at net.razorvine.pickle.objects.ClassDictConstructor.construct(ClassDictConstructor.java:23)
    at net.razorvine.pickle.Unpickler.load_reduce(Unpickler.java:707)
    at net.razorvine.pickle.Unpickler.dispatch(Unpickler.java:175)
    at net.razorvine.pickle.Unpickler.load(Unpickler.java:99)
    at net.razorvine.pickle.Unpickler.loads(Unpickler.java:112)
    at org.apache.spark.sql.execution.python.BatchEvalPythonExec$$anonfun$evaluate$1.apply(BatchEvalPythonExec.scala:90)
    at org.apache.spark.sql.execution.python.BatchEvalPythonExec$$anonfun$evaluate$1.apply(BatchEvalPythonExec.scala:89)
    at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
    at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage2.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$11$$anon$1.hasNext(WholeStageCodegenExec.scala:624)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:255)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$2.apply(SparkPlan.scala:247)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:858)
    at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$1$$anonfun$apply$24.apply(RDD.scala:858)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$11.apply(Executor.scala:407)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1408)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:413)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    ... 1 more
  • `expected zero arguments for construction of ClassDict (for numpy.dtype)` seems useful to debug your UDF – samkart Jul 25 '22 at 08:01
  • there's an answer on [this SO Q](https://stackoverflow.com/questions/38984775/spark-errorexpected-zero-arguments-for-construction-of-classdict-for-numpy-cor) that says you can explicitly/force convert the values to float using `float(sum(dcg))`, because numpy converts numerics to the corresponding NumPy types – samkart Jul 25 '22 at 08:11
  • @samkart spot on sir, I used the exact `float(sum(dcg))` and it worked. Thank you so much – jcng2308 Jul 25 '22 at 08:13
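
For reference, a minimal sketch of the fix samkart points to: np.log2 returns a numpy.float64, so the UDF ends up returning a NumPy scalar that Spark cannot unpickle into FloatType; casting the return value to a plain Python float avoids the error.

import numpy as np
from pyspark.sql.functions import col, udf
from pyspark.sql.types import FloatType

def discountedCumulativeGain(result):
    dcg = []
    for idx, val in enumerate(result):
        numerator = 2**val - 1
        denominator = np.log2(idx + 2)  # numpy.float64
        dcg.append(numerator / denominator)
    # cast back to a plain Python float so Spark can serialize it as FloatType
    return float(sum(dcg))

discountedCumulativeGainUDF = udf(discountedCumulativeGain, FloatType())
relevance_df4 = relevance_df3.withColumn(
    'discountedCumulativeGain',
    discountedCumulativeGainUDF(col("collect_list(relevance)"))
)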

1 Answer


UDFs are slow. The following shows how this can be done with native Spark functions, using the higher-order function aggregate.

Input:

relevance_df3 = spark.createDataFrame(
    [('A09029493F', [1, 1, 1, 0, 1, 0, 0, 1, 0, 0], 10),
     ('A09292791U', [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 11),
     ('A182C4449C', [0, 0, 0, 1, 0, 0, 0, 2, 1, 0], 10),
     ('A182C82811', [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 10),
     ('A182V64925', [0, 0, 0, 0, 0, 0, 0, 0, 1, 0], 10),
     ('A182Z90277', [0, 0, 1, 0, 0, 0, 1, 0, 0, 0], 10),
     ('A18335163I', [1, 0, 1, 1, 0, 0, 1, 0, 0, 2], 10),
     ('A183M37466', [1, 1, 1, 1, 1, 1, 0, 1, 0, 1], 10),
     ('A183Q6318H', [0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 10),
     ('A183T9483A', [0, 0, 0, 0, 0, 0, 0, 0, 0, 1], 10),
     ('A18418296V', [2, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1], 11),
     ('A18435574D', [0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0], 11),
     ('A184373144', [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 12),
     ('A184393490', [0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0], 11),
     ('A18465367H', [1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0], 11),
     ('A18482362F', [1, 1, 1, 1, 1, 0, 1, 1, 2, 1], 10),
     ('A184E8017X', [1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1], 11),
     ('A184H8816G', [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 11),
     ('A184L3021G', [0, 0, 1, 0, 0, 0, 0, 0, 0, 0], 10),
     ('A184N9870U', [0, 1, 1, 1, 0, 0, 2, 1, 0, 1], 10)],
    ['new_party_id', 'collect_list(relevance)', 'filtered_relevance'])

Script:

from pyspark.sql import functions as F

relevance_df4 = relevance_df3.withColumn(
    'discountedCumulativeGain',
    F.aggregate(
        "collect_list(relevance)",
        # accumulator: running DCG total plus the log2 argument (starts at 2, matching np.log2(idx + 2))
        F.struct(F.lit(0.0).alias("dcg"), F.lit(2).alias("idx")),
        lambda acc, v: F.struct(
            (acc.dcg + (F.pow(2.0, v) - 1) / F.log2(acc.idx)).alias("dcg"),
            (acc.idx + 1).alias("idx")
        ),
        lambda x: x.dcg
    )
)
relevance_df4.show(truncate=0)
# +------------+------------------------------------+------------------+------------------------+
# |new_party_id|collect_list(relevance)             |filtered_relevance|discountedCumulativeGain|
# +------------+------------------------------------+------------------+------------------------+
# |A09029493F  |[1, 1, 1, 0, 1, 0, 0, 1, 0, 0]      |10                |2.8332474375917283      |
# |A09292791U  |[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]   |11                |0.0                     |
# |A182C4449C  |[0, 0, 0, 1, 0, 0, 0, 2, 1, 0]      |10                |1.6781011840945603      |
# |A182C82811  |[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]      |10                |0.0                     |
# |A182V64925  |[0, 0, 0, 0, 0, 0, 0, 0, 1, 0]      |10                |0.30102999566398114     |
# |A182Z90277  |[0, 0, 1, 0, 0, 0, 1, 0, 0, 0]      |10                |0.8333333333333333      |
# |A18335163I  |[1, 0, 1, 1, 0, 0, 1, 0, 0, 2]      |10                |3.13120437036039        |
# |A183M37466  |[1, 1, 1, 1, 1, 1, 0, 1, 0, 1]      |10                |3.909196009091031       |
# |A183Q6318H  |[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]      |10                |0.0                     |
# |A183T9483A  |[0, 0, 0, 0, 0, 0, 0, 0, 0, 1]      |10                |0.2890648263178878      |
# |A18418296V  |[2, 1, 1, 1, 1, 0, 1, 1, 0, 1, 1]   |11                |6.1652651009674715      |
# |A18435574D  |[0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]   |11                |0.43067655807339306     |
# |A184373144  |[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]|12                |0.0                     |
# |A184393490  |[0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0]   |11                |1.2317065537373741      |
# |A18465367H  |[1, 0, 1, 0, 0, 0, 1, 0, 1, 1, 0]   |11                |2.423428155315202       |
# |A18482362F  |[1, 1, 1, 1, 1, 0, 1, 1, 2, 1]      |10                |4.789412142308286       |
# |A184E8017X  |[1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1]   |11                |4.150830219845725       |
# |A184H8816G  |[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]   |11                |0.0                     |
# |A184L3021G  |[0, 0, 1, 0, 0, 0, 0, 0, 0, 0]      |10                |0.5                     |
# |A184N9870U  |[0, 1, 1, 1, 0, 0, 2, 1, 0, 1]      |10                |3.166136014748467       |
# +------------+------------------------------------+------------------+------------------------+
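
Note that pyspark.sql.functions.aggregate is only available from Spark 3.1. On older versions, the same logic can be expressed through expr, since the SQL higher-order function aggregate has been available since Spark 2.4. A minimal sketch, assuming the same dataframe and column names as above:

from pyspark.sql import functions as F

# Same DCG aggregation written as a SQL expression, for Spark versions
# that lack pyspark.sql.functions.aggregate (added in 3.1).
relevance_df4 = relevance_df3.withColumn(
    'discountedCumulativeGain',
    F.expr("""
        aggregate(
            `collect_list(relevance)`,
            named_struct('dcg', cast(0 as double), 'idx', 2),
            (acc, v) -> named_struct(
                'dcg', acc.dcg + (pow(2.0, v) - 1) / log2(acc.idx),
                'idx', acc.idx + 1
            ),
            acc -> acc.dcg
        )
    """)
)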