
I have a number of Hive files in Parquet format that contain both string and double columns. I can read most of them into a Spark DataFrame with sparklyr using the syntax below:

spark_read_parquet(sc, name = "name", path = "path", memory = FALSE)

However, I have one file where all of the string values are read in as unrecognizable lists of raw bytes, which look like this when collected into an R data frame and printed:

s_df <- spark_read_parquet(sc, 
                           name = "s_df", 
                           path = "hdfs://nameservice1/user/hive/warehouse/s_df", 
                           memory = FALSE)
df <- collect(s_df)
head(df)

# A tibble: 11,081 x 13
   provid   hospital_name servcode  servcode_desc codegroup claimid  amountpaid
   <list>   <list>        <list>    <list>        <list>    <list>        <dbl>
 1 <raw [8… <raw [32]>    <raw [5]> <raw [25]>    <raw [29… <raw [1…       7.41
 2 <raw [8… <raw [32]>    <raw [5]> <raw [15]>    <raw [22… <raw [1…       4.93
 3 <raw [8… <raw [32]>    <raw [5]> <raw [28]>    <raw [22… <raw [1…       5.36
 4 <raw [8… <raw [32]>    <raw [5]> <raw [28]>    <raw [30… <raw [1…       5.46
 5 <raw [8… <raw [32]>    <raw [5]> <raw [16]>    <raw [30… <raw [1…       2.80 

The hospital_name values for the top 5 rows of df should read METHODIST HOSPITAL OF SOUTHERN CALIFORNIA, but they come out like this instead:

head(df$hospital_name)

[[1]]
 [1] 48 45 4e 52 59 20 4d 41 59 4f 20 4e 45 57 48 41 4c 4c 20 4d 45 4d 4f 52 49
[26] 41 4c 20 48 4f 53 50

[[2]]
 [1] 48 45 4e 52 59 20 4d 41 59 4f 20 4e 45 57 48 41 4c 4c 20 4d 45 4d 4f 52 49
[26] 41 4c 20 48 4f 53 50

[[3]]
 [1] 48 45 4e 52 59 20 4d 41 59 4f 20 4e 45 57 48 41 4c 4c 20 4d 45 4d 4f 52 49
[26] 41 4c 20 48 4f 53 50

[[4]]
 [1] 48 45 4e 52 59 20 4d 41 59 4f 20 4e 45 57 48 41 4c 4c 20 4d 45 4d 4f 52 49
[26] 41 4c 20 48 4f 53 50

[[5]]
 [1] 48 45 4e 52 59 20 4d 41 59 4f 20 4e 45 57 48 41 4c 4c 20 4d 45 4d 4f 52 49
[26] 41 4c 20 48 4f 53 50

I tried the following, but it didn't work:

head(df %>% mutate(hospital_name = as.character(hospital_name)))

[1] "as.raw(c(0x48, 0x45, 0x4e, 0x52, 0x59, 0x20, 0x4d, 0x41, 0x59, 0x4f, 0x20, 0x4e, 0x45, 0x57, 0x48, 0x41, 0x4c, 0x4c, 0x20, 0x4d, 0x45, 0x4d, 0x4f, 0x52, 0x49, 0x41, 0x4c, 0x20, 0x48, 0x4f, 0x53, 0x50))"
[2] "as.raw(c(0x48, 0x45, 0x4e, 0x52, 0x59, 0x20, 0x4d, 0x41, 0x59, 0x4f, 0x20, 0x4e, 0x45, 0x57, 0x48, 0x41, 0x4c, 0x4c, 0x20, 0x4d, 0x45, 0x4d, 0x4f, 0x52, 0x49, 0x41, 0x4c, 0x20, 0x48, 0x4f, 0x53, 0x50))"
[3] "as.raw(c(0x48, 0x45, 0x4e, 0x52, 0x59, 0x20, 0x4d, 0x41, 0x59, 0x4f, 0x20, 0x4e, 0x45, 0x57, 0x48, 0x41, 0x4c, 0x4c, 0x20, 0x4d, 0x45, 0x4d, 0x4f, 0x52, 0x49, 0x41, 0x4c, 0x20, 0x48, 0x4f, 0x53, 0x50))"
[4] "as.raw(c(0x48, 0x45, 0x4e, 0x52, 0x59, 0x20, 0x4d, 0x41, 0x59, 0x4f, 0x20, 0x4e, 0x45, 0x57, 0x48, 0x41, 0x4c, 0x4c, 0x20, 0x4d, 0x45, 0x4d, 0x4f, 0x52, 0x49, 0x41, 0x4c, 0x20, 0x48, 0x4f, 0x53, 0x50))"
[5] "as.raw(c(0x48, 0x45, 0x4e, 0x52, 0x59, 0x20, 0x4d, 0x41, 0x59, 0x4f, 0x20, 0x4e, 0x45, 0x57, 0x48, 0x41, 0x4c, 0x4c, 0x20, 0x4d, 0x45, 0x4d, 0x4f, 0x52, 0x49, 0x41, 0x4c, 0x20, 0x48, 0x4f, 0x53, 0x50))"

I'd appreciate any help resolving the issue, or any suggestions for making my request clearer. Thanks.

bshelt141
  • This looks like a string encoding issue. Are you able to read this particular file using something else (e.g., parquet-tools)? – Matt Pollock Mar 21 '18 at 16:23

2 Answers


A reprex would have been nice (just for df), e.g. by running dput(head(df)) and pasting the result here. Try the following:

df %>% mutate(hospital_name = unlist(lapply(hospital_name, rawToChar)))
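The raw bytes are just ASCII character codes, so rawToChar() recovers the text directly. A minimal base-R sketch of the same idea (the byte values are taken from the output above, truncated for brevity; the list stands in for the collected column):

```r
# Stand-in for the collected list column: each element is a raw vector
hospital_name <- list(
  as.raw(c(0x48, 0x45, 0x4e, 0x52, 0x59)),  # bytes for "HENRY"
  as.raw(c(0x48, 0x4f, 0x53, 0x50))         # bytes for "HOSP"
)

# Decode each raw vector into a length-1 character string
decoded <- vapply(hospital_name, rawToChar, character(1))
decoded
# [1] "HENRY" "HOSP"
```

The earlier attempt failed because as.character() deparses each raw vector instead of decoding it; rawToChar() is the function that interprets the bytes as text.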
nachti

To solve the problem, set the spark.sql.parquet.binaryAsString property in the Spark session config before reading the Parquet file:

sc$config$spark.sql.parquet.binaryAsString = TRUE
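With sparklyr, the config is normally populated before connecting; a sketch of the full flow, reusing the path from the question (the master value is a placeholder for your cluster's setting):

```r
library(sparklyr)

# Tell Spark to treat Parquet BINARY columns as strings when reading
conf <- spark_config()
conf$spark.sql.parquet.binaryAsString <- TRUE

# Placeholder connection details; adjust master for your environment
sc <- spark_connect(master = "yarn-client", config = conf)

s_df <- spark_read_parquet(sc,
                           name = "s_df",
                           path = "hdfs://nameservice1/user/hive/warehouse/s_df",
                           memory = FALSE)
```

With this setting in place, the string columns should collect as character vectors rather than lists of raw bytes.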

REMARK: In my case it turned out that a Parquet file created by an INSERT in Impala contained character fields described as "binary" instead of "binary UTF8". In such a scenario, another solution is to set PARQUET_ANNOTATE_STRINGS_UTF8 in impala-shell before inserting the data:

> set PARQUET_ANNOTATE_STRINGS_UTF8=1;
PARQUET_ANNOTATE_STRINGS_UTF8 set to 1