I'm trying to load data from Oracle into Databricks, but I've hit a Unicode issue in PySpark: it can't decode the national characters in the form they are stored in Oracle, and instead displays the replacement character '▯'. In Oracle, NLS_NCHAR_CHARACTERSET = AL16UTF16.
I tried the Oracle JDBC system-property workaround from "Inserting national characters into an oracle NCHAR or NVARCHAR column does not work", but it doesn't help in my case. Could you please suggest an alternative fix?
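For reference, a minimal sketch of the read I'm attempting (assuming the property from the linked question is oracle.jdbc.defaultNChar; the host, service, credentials, and table names below are placeholders):

```python
# Hypothetical connection details -- replace with the real host/service/table.
jdbc_options = {
    "url": "jdbc:oracle:thin:@//dbhost:1521/ORCLPDB1",
    "dbtable": "MYSCHEMA.MY_TABLE",
    "user": "my_user",
    "password": "my_password",
    "driver": "oracle.jdbc.OracleDriver",
    # Connection-property form of the workaround from the linked question.
    "oracle.jdbc.defaultNChar": "true",
}

# I also tried the JVM system-property form on the cluster, e.g.:
#   spark.driver.extraJavaOptions   -Doracle.jdbc.defaultNChar=true
#   spark.executor.extraJavaOptions -Doracle.jdbc.defaultNChar=true

# The read itself (requires a running Spark session and the Oracle driver):
# df = spark.read.format("jdbc").options(**jdbc_options).load()
# df.select("NVARCHAR2_COLUMN").show()  # still renders '▯' replacement chars
```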