
I am writing a Japanese character into a Hive table as part of one of my programs. Later, when I select that field from Hive, I can read it, but when I read it via spark.sql it does not give the expected result.

spark.sql("select SQL_VAL as sql_val from abc.ac_tbl where d_name='2019-07-09_14:26:16.486'").show()

+-------+
|sql_val|
+-------+
|      ?|
+-------+

When I query the same table from Hive, it gives this output:

select SQL_VAL as sql_val from abc.ac_tbl where d_name='2019-07-09_14:26:16.486'
sql_val
文
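
One way to narrow this down (not from the original post, just a diagnostic sketch) is to check the default charset of the JVM running the Spark driver, since show() renders strings through it and a non-UTF-8 default can turn multi-byte characters into ?. This assumes the '?' comes from a charset mismatch rather than from the data itself:

// Diagnostic sketch: run in spark-shell on the same machine as the driver.
import java.nio.charset.Charset

println(Charset.defaultCharset())            // anything other than UTF-8 here is suspect
println(System.getProperty("file.encoding")) // the JVM property that usually controls it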
  • I was able to read it without any problem. Which Spark version are you using? Refer to the link below; it may help you: https://stackoverflow.com/questions/52076651/pyspark-hive-context-read-table-with-utf-8-encoding – vikrant rana Jul 10 '19 at 18:17
  • Hi, I am using Spark version 2.3, and I am coding in Scala, not PySpark. – KapilD Jul 11 '19 at 16:49
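
Following the UTF-8 direction in the comment above, here is a minimal sketch of forcing UTF-8 on the driver and executor JVMs. The spark.driver.extraJavaOptions and spark.executor.extraJavaOptions keys are standard Spark configs, but whether this resolves the '?' depends on where the encoding is actually lost, which the question does not confirm:

// Sketch: force UTF-8 as the JVM file encoding when building the session.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("hive-utf8-check")
  .config("spark.driver.extraJavaOptions", "-Dfile.encoding=UTF-8")
  .config("spark.executor.extraJavaOptions", "-Dfile.encoding=UTF-8")
  .enableHiveSupport()
  .getOrCreate()

spark.sql("select SQL_VAL as sql_val from abc.ac_tbl where d_name='2019-07-09_14:26:16.486'").show()

Note that spark.driver.extraJavaOptions only takes effect if it is set before the driver JVM starts, e.g. via spark-submit --conf or spark-defaults.conf; setting it programmatically in client mode, as sketched here, is too late for the driver itself.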
