I am working on a project where I have around 500 column names, and I need to apply the coalesce function to every column after joining two DataFrames.
df1
schema
-id
-col1
...
-col500
df2
schema
-id
-col1
...
-col500
Dataset<Row> newDS = df1.join(df2, "id")
    .select(
        df1.col("id"),
        functions.coalesce(df1.col("col1"), df2.col("col1")).as("col1"),
        functions.coalesce(df1.col("col2"), df2.col("col2")).as("col2"),
        ...
        functions.coalesce(df1.col("col500"), df2.col("col500")).as("col500")
    );
newDS.show();
What I have tried
Dataset<Row> j1 = df1.join(df2, "id");
// Accumulate the whole select list, then call select() once;
// selecting inside the loop would overwrite the result each iteration.
List<Column> cols = new ArrayList<>();
String[] f = df1.columns();
for (String h : f) {
    if (h.equals("id")) {   // compare strings with equals(), not ==
        cols.add(df1.col("id"));
    } else {
        cols.add(functions.coalesce(df1.col(h), df2.col(h)).as(h));
    }
}
Dataset<Row> gh1 = j1.select(cols.toArray(new Column[0]));
gh1.show();
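For reference, a minimal runnable sketch of the loop-and-select-once approach. The class name `CoalesceAll`, the helper `coalesceJoin`, and the two tiny SQL frames are my own illustrations; the frames stand in for the real ~500-column tables.

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.sql.Column;
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.functions;

public class CoalesceAll {

    // Coalesce every shared column of df1 and df2 (except the join key),
    // building the full select list before calling select() once.
    public static Dataset<Row> coalesceJoin(Dataset<Row> df1,
                                            Dataset<Row> df2,
                                            String key) {
        Dataset<Row> joined = df1.join(df2, key);
        List<Column> cols = new ArrayList<>();
        cols.add(df1.col(key));
        for (String name : df1.columns()) {
            if (!name.equals(key)) {
                cols.add(functions.coalesce(df1.col(name), df2.col(name)).as(name));
            }
        }
        return joined.select(cols.toArray(new Column[0]));
    }

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .master("local[1]")
                .appName("coalesce-demo")
                .getOrCreate();

        // Tiny stand-ins for the real 500-column DataFrames.
        Dataset<Row> df1 = spark.sql(
                "SELECT 1 AS id, CAST(NULL AS INT) AS col1, 10 AS col2");
        Dataset<Row> df2 = spark.sql(
                "SELECT 1 AS id, 7 AS col1, CAST(NULL AS INT) AS col2");

        coalesceJoin(df1, df2, "id").show();
        spark.stop();
    }
}
```

Because `df1.col(name)` and `df2.col(name)` keep their lineage through the join, Spark can still tell the two same-named columns apart inside `coalesce`, so no renaming is needed.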