I am using Scala with Spark 1.6.3, and I get the following error:
org.apache.spark.sql.AnalysisException: expression 'id' is neither present in the group by, nor is it an aggregate function.
Add to group by or wrap in first() (or first_value) if you don't care which value you get.;
The code that generates the error is:

returnDf.withColumn("colName", max(col("otherCol")))
The DataFrame returnDf looks like:
+---+--------------------+
| id| otherCol|
+---+--------------------+
|1.0|[0.0, 0.217764172...|
|2.0| [0.0, 0.0]|
|3.0|[0.0, 0.142646382...|
|4.0|[0.63245553203367...|
+---+--------------------+
I know there is a solution to this when using SQL syntax. What is the equivalent solution using the DataFrame API I am using above, i.e. the withColumn() function?
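For context, here is a sketch of the kind of thing I am after: keeping every row of returnDf while attaching an aggregate as a new column, rather than a groupBy/agg that collapses rows. The Window spec below is my guess at what the answer might involve, not something I have confirmed works:

```scala
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.{col, max}

// Guess: compute the max of otherCol within each id partition and
// attach it as a new column, leaving the original rows intact.
val byId = Window.partitionBy("id")
val result = returnDf.withColumn("colName", max(col("otherCol")).over(byId))
```

(My understanding is that in Spark 1.6 window functions require a HiveContext rather than a plain SQLContext, which may or may not matter here.)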