
Let config.json be a small JSON file:

{
    "toto": 1
}

I wrote a simple program that reads the JSON file with sc.textFile (the file can be on S3, local or HDFS, so textFile is convenient):

import org.apache.spark.{SparkContext, SparkConf}

object testAwsSdk {
  def main( args:Array[String] ):Unit = {
    val sparkConf = new SparkConf().setAppName("test-aws-sdk").setMaster("local[*]")
    val sc = new SparkContext(sparkConf)
    val json = sc.textFile("config.json") 
    println(json.collect().mkString("\n"))
  }
}

The SBT file pulls in only the spark-core library:

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.5.1" % "compile"
)

The program works as expected, writing the content of config.json to standard output.

Now I also want to link with aws-java-sdk, Amazon's SDK for accessing S3:

libraryDependencies ++= Seq(
  "com.amazonaws" % "aws-java-sdk" % "1.10.30" % "compile",
  "org.apache.spark" %% "spark-core" % "1.5.1" % "compile"
)

Executing the same code, Spark throws the following exception:

Exception in thread "main" com.fasterxml.jackson.databind.JsonMappingException: Could not find creator property with name 'id' (in class org.apache.spark.rdd.RDDOperationScope)
 at [Source: {"id":"0","name":"textFile"}; line: 1, column: 1]
    at com.fasterxml.jackson.databind.JsonMappingException.from(JsonMappingException.java:148)
    at com.fasterxml.jackson.databind.DeserializationContext.mappingException(DeserializationContext.java:843)
    at com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.addBeanProps(BeanDeserializerFactory.java:533)
    at com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.buildBeanDeserializer(BeanDeserializerFactory.java:220)
    at com.fasterxml.jackson.databind.deser.BeanDeserializerFactory.createBeanDeserializer(BeanDeserializerFactory.java:143)
    at com.fasterxml.jackson.databind.deser.DeserializerCache._createDeserializer2(DeserializerCache.java:409)
    at com.fasterxml.jackson.databind.deser.DeserializerCache._createDeserializer(DeserializerCache.java:358)
    at com.fasterxml.jackson.databind.deser.DeserializerCache._createAndCache2(DeserializerCache.java:265)
    at com.fasterxml.jackson.databind.deser.DeserializerCache._createAndCacheValueDeserializer(DeserializerCache.java:245)
    at com.fasterxml.jackson.databind.deser.DeserializerCache.findValueDeserializer(DeserializerCache.java:143)
    at com.fasterxml.jackson.databind.DeserializationContext.findRootValueDeserializer(DeserializationContext.java:439)
    at com.fasterxml.jackson.databind.ObjectMapper._findRootDeserializer(ObjectMapper.java:3666)
    at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3558)
    at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:2578)
    at org.apache.spark.rdd.RDDOperationScope$.fromJson(RDDOperationScope.scala:82)
    at org.apache.spark.rdd.RDDOperationScope$$anonfun$5.apply(RDDOperationScope.scala:133)
    at org.apache.spark.rdd.RDDOperationScope$$anonfun$5.apply(RDDOperationScope.scala:133)
    at scala.Option.map(Option.scala:145)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:133)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.SparkContext.withScope(SparkContext.scala:709)
    at org.apache.spark.SparkContext.hadoopFile(SparkContext.scala:1012)
    at org.apache.spark.SparkContext$$anonfun$textFile$1.apply(SparkContext.scala:827)
    at org.apache.spark.SparkContext$$anonfun$textFile$1.apply(SparkContext.scala:825)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:108)
    at org.apache.spark.SparkContext.withScope(SparkContext.scala:709)
    at org.apache.spark.SparkContext.textFile(SparkContext.scala:825)
    at testAwsSdk$.main(testAwsSdk.scala:11)
    at testAwsSdk.main(testAwsSdk.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)

Reading the stack trace, it seems that when aws-java-sdk is linked, sc.textFile detects that the file is a JSON file and tries to parse it with Jackson, assuming a certain format which, of course, it cannot find. I need to link with aws-java-sdk, so my questions are:

1- Why does adding aws-java-sdk modify the behavior of spark-core?

2- Is there a workaround (the file can be on HDFS, S3 or local)?

Boris
  • This is because aws-java-sdk uses the newer 2.5.3 version of the Jackson library while Spark uses the older 2.4.4. I am facing the same issue but could not resolve it; if you have found the solution, please share it. Thanks. – Hafiz Mujadid Nov 12 '15 at 16:40
  • Hi Hafiz... Pretty annoying, isn't it? I sent the case to AWS. They confirmed that it is a compatibility issue, but they have not given me a clear solution yet. Will try to sort it out asap. – Boris Nov 18 '15 at 14:09
  • Hi Boris! Yes, it is annoying to face this issue, but I resolved it by excluding the jackson-core and jackson-module libraries from spark-core and adding the latest jackson-core library dependency (see the sketch after these comments). – Hafiz Mujadid Nov 18 '15 at 16:26
  • @HafizMujadid how did you do it? Could you explain? Thanks. – Barbaros Alp Jul 12 '16 at 16:05
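
For reference, the fix described in Hafiz's comment would look roughly like the SBT sketch below. This is only a sketch of what the comment describes, not a tested configuration: the excludeAll/ExclusionRule calls are standard sbt, but the choice of 2.5.3 for the re-added Jackson artifacts is an assumption (matching what aws-java-sdk 1.10.30 pulls in), and jackson-module-scala is added back because spark-core needs it at runtime.

libraryDependencies ++= Seq(
  "com.amazonaws" % "aws-java-sdk" % "1.10.30" % "compile",
  // Drop Spark's older Jackson artifacts so they cannot conflict with the SDK's newer ones
  ("org.apache.spark" %% "spark-core" % "1.5.1" % "compile").excludeAll(
    ExclusionRule(organization = "com.fasterxml.jackson.core"),
    ExclusionRule(organization = "com.fasterxml.jackson.module")
  ),
  // Re-add Jackson at the version the AWS SDK expects (versions assumed, not verified)
  "com.fasterxml.jackson.core" % "jackson-core" % "2.5.3",
  "com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.5.3"
)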

2 Answers


Talked to Amazon support. It is a dependency issue with the Jackson library. In SBT, override Jackson:

libraryDependencies ++= Seq(
  "com.amazonaws" % "aws-java-sdk" % "1.10.30" % "compile",
  "org.apache.spark" %% "spark-core" % "1.5.1" % "compile"
)

dependencyOverrides ++= Set(
  "com.fasterxml.jackson.core" % "jackson-databind" % "2.4.4"
)

Their answer: "We have done this on a Mac, an EC2 (Red Hat AMI) instance and on EMR (Amazon Linux), i.e. three different environments. The root cause of the issue is that sbt builds a dependency graph and then deals with version conflicts by evicting the older version and picking the latest version of the dependent library. In this case, Spark depends on the 2.4 version of the Jackson library while the AWS SDK needs 2.5. So there is a version conflict, and sbt evicts Spark's dependency version (which is older) and picks the AWS SDK's version (which is the latest)."
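
To check that the override actually took effect (this check is mine, not part of the support answer), you can print the jackson-databind version that ends up on the runtime classpath; with the override above it should report 2.4.4. With a reasonably recent sbt you can also inspect evictions with the built-in evicted task.

// Sanity-check sketch: prints the jackson-databind version actually loaded at runtime.
// With the dependencyOverrides above this should print 2.4.4.
println(com.fasterxml.jackson.databind.cfg.PackageVersion.VERSION)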

Boris

Adding to Boris' answer, if you don't want to use a fixed version of Jackson (maybe in the future you will upgrade Spark) but still want to discard the one from AWS, you can do the following:

libraryDependencies ++= Seq( 
  "com.amazonaws" % "aws-java-sdk" % "1.10.30" % "compile" excludeAll (
    ExclusionRule("com.fasterxml.jackson.core", "jackson-databind")
  ),
  "org.apache.spark" %% "spark-core" % "1.5.1" % "compile"
) 
nedim
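
A possible variation on the exclusion approach in nedim's answer (a sketch only, not taken from the answer itself): if other Jackson artifacts pulled in by the SDK still win the eviction, you can exclude the whole com.fasterxml.jackson.core organization from aws-java-sdk rather than only jackson-databind, so that only Spark's own Jackson remains on the classpath.

libraryDependencies ++= Seq(
  // Exclude every com.fasterxml.jackson.core artifact that the SDK brings in (sketch)
  ("com.amazonaws" % "aws-java-sdk" % "1.10.30" % "compile").excludeAll(
    ExclusionRule(organization = "com.fasterxml.jackson.core")
  ),
  "org.apache.spark" %% "spark-core" % "1.5.1" % "compile"
)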