
On Databricks

I have a target table created as below:

    data = [("0", "null", "female", "USA", True, "insert"),
            ("1", "1", "male", "USA", True, "insert")]
    tdf = spark.createDataFrame(data, ["Pk", "fk", "c1", "c2", "c3", "changetype"])
    tdf.write.format("delta").mode("overwrite").save("abfss://wb@something.dfs.core.windows.net/ss/temp_tdf")

I have a source table which looks like below:

    newdata = [("0", "1", "female", "USA", True, "insert"),
               ("0", "2", "female", "USA", True, "insert"),
               ("0", "3", "female", "USA", True, "insert"),
               ("0", "4", "female", "USA", True, "insert"),
               ("1", "1", "male", "India", True, "insert")]
    sdf = spark.createDataFrame(newdata, ["Pk", "fk", "c1", "c2", "c3", "changetype"])
    sdf.write.format("delta").mode("overwrite").save("abfss://wb@something.dfs.core.windows.net/ss/temp_sdf")

Now I would like the first row in the target table, the one with pk "0" and fk "null", to be replaced with the 4 rows from the source table.

I'm trying to avoid a separate delete operation and do this in one shot with the MERGE command.

I tried the following merge command (with vtdf and vsdf registered as temporary views over the two Delta paths above):

    MERGE INTO vtdf AS t
    USING vsdf AS s
    ON ( ((t.pk <=> s.pk) AND (t.fk <=> s.fk))
      OR ((t.pk <=> s.pk) AND (t.fk <=> "null")) )
    WHEN MATCHED AND ((t.pk <=> s.pk) AND (t.fk <=> "null")) THEN DELETE
    WHEN MATCHED AND (s.changetype = "delete") THEN DELETE
    WHEN MATCHED AND (s.changetype != "delete") THEN UPDATE SET *
    WHEN NOT MATCHED AND (s.changetype != "delete") THEN INSERT *

I get the error

    com.databricks.sql.transaction.tahoe.DeltaUnsupportedOperationException: Cannot perform Merge as multiple source rows matched and attempted to modify the same target row in the Delta table in possibly conflicting ways.
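The error is reproducible outside Spark: under the OR join condition, the placeholder target row ("0", "null") is matched by every source row with pk = "0". Here is a minimal pure-Python check of the match cardinality (no cluster needed; the null-safe `<=>` comparison is modeled as plain equality on the literal string "null"):

```python
# Model target and source rows as (pk, fk) tuples, as in the question.
target = [("0", "null"), ("1", "1")]
source = [("0", "1"), ("0", "2"), ("0", "3"), ("0", "4"), ("1", "1")]

def matches(t, s):
    # The MERGE ON clause: (t.pk <=> s.pk AND t.fk <=> s.fk)
    #                   OR (t.pk <=> s.pk AND t.fk <=> "null")
    return (t[0] == s[0] and t[1] == s[1]) or (t[0] == s[0] and t[1] == "null")

match_counts = {t: sum(matches(t, s) for s in source) for t in target}
print(match_counts)  # {('0', 'null'): 4, ('1', '1'): 1}
```

Four source rows all match the single target row ("0", "null") and all of them trigger a modifying clause (DELETE), which is exactly the ambiguity Delta refuses to resolve.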

Is there a way I can do this without doing a separate delete for the ("0", "null") row?
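One possible way to keep it to a single MERGE (a sketch, not tested on Databricks) is to restructure the source rather than the join condition: union the source with one synthetic changetype = "delete" row per pk, carrying fk = "null", then merge on pk and fk alone. Each target row, including the placeholder, then matches at most one source row. The pure-Python simulation below mimics that merge's branch logic on the question's data; the augmentation step and the simplified ON clause are my assumptions, not from the original post:

```python
# Rows are (pk, fk, changetype); the other columns are omitted for clarity.
target = [("0", "null", "insert"), ("1", "1", "insert")]
source = [("0", "1", "insert"), ("0", "2", "insert"), ("0", "3", "insert"),
          ("0", "4", "insert"), ("1", "1", "insert")]

# Step 1 (assumed): add one synthetic delete row per source pk, so the
# placeholder target row ("<pk>", "null") finds exactly one match.
source_pks = {pk for pk, _, _ in source}
augmented = source + [(pk, "null", "delete") for pk in sorted(source_pks)]

# Step 2: simulate MERGE ON (t.pk = s.pk AND t.fk = s.fk) with
#   WHEN MATCHED AND s.changetype = 'delete' THEN DELETE
#   WHEN MATCHED THEN UPDATE SET *
#   WHEN NOT MATCHED AND s.changetype != 'delete' THEN INSERT *
by_key = {(pk, fk): (pk, fk, ct) for pk, fk, ct in augmented}
result = []
for row in target:
    s = by_key.get((row[0], row[1]))
    if s is None:
        result.append(row)       # no match: target row is kept as-is
    elif s[2] != "delete":
        result.append(s)         # matched, non-delete: updated from source
    # matched delete rows are dropped
target_keys = {(pk, fk) for pk, fk, _ in target}
result += [s for k, s in by_key.items()
           if k not in target_keys and s[2] != "delete"]  # inserts
print(sorted(result))
```

In the simulation the placeholder ("0", "null") is matched only by the synthetic delete row and dropped, the four ("0", fk) rows come in through the NOT MATCHED insert branch, and ("1", "1") is matched exactly once, so no target row is touched by more than one source row.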
