I am trying to use Jayway JsonPath with my Spark RDD operations (Spark version 1.6.3, JsonPath version 2.0.0). I am getting the following stack trace at runtime:
java.lang.NoSuchFieldError: defaultReader
at com.jayway.jsonpath.spi.json.JsonSmartJsonProvider.<init>(JsonSmartJsonProvider.java:39)
at com.jayway.jsonpath.internal.DefaultsImpl.jsonProvider(DefaultsImpl.java:21)
at com.jayway.jsonpath.Configuration.defaultConfiguration(Configuration.java:179)
at com.virtualpairprogrammers.JavaIntroduction.returnJsonData(JavaIntroduction.java:115)
at com.virtualpairprogrammers.JavaIntroduction$1.call(JavaIntroduction.java:58)
at com.virtualpairprogrammers.JavaIntroduction$1.call(JavaIntroduction.java:1)
at org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1.apply(JavaPairRDD.scala:1015)
at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:312)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
at scala.collection.AbstractIterator.to(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(RDD.scala:1335)
at org.apache.spark.rdd.RDD$$anonfun$take$1$$anonfun$29.apply(RDD.scala:1335)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1857)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1857)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
When I run only the JSON data extraction with JsonPath from Eclipse, I get the results properly. When I run it through spark-submit, I get the above error.
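For comparison, this is roughly the standalone test that works for me in Eclipse (a minimal sketch; the sample JSON and the JsonPathStandaloneTest class are placeholders, not my real data):

import com.jayway.jsonpath.Configuration;
import com.jayway.jsonpath.JsonPath;
import com.jayway.jsonpath.Option;
import com.jayway.jsonpath.ReadContext;

public class JsonPathStandaloneTest {
    public static void main(String[] args) {
        // Placeholder document; the real input comes from MongoDB
        String json = "{\"name\":\"test\",\"value\":42}";
        // Same call that fails on the executors
        // (Configuration.defaultConfiguration in the stack trace)
        Configuration conf = Configuration.defaultConfiguration()
                .addOptions(Option.SUPPRESS_EXCEPTIONS);
        ReadContext ctx = JsonPath.using(conf).parse(json);
        String name = ctx.read("$.name");
        System.out.println(name); // prints "test" when run from Eclipse
    }
}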
I read this link, but it was not that related to my use case.
If anyone has faced this issue and fixed it, please assist me in moving forward.
Following is the code snippet I am using:
// charsetNameArray is a String[] of field names, defined elsewhere in the class
JavaRDD<Document> rdd = MongoSpark.load(jsc);

JavaRDD<String> fullFile = rdd.map(new Function<Document, String>() {
    public String call(Document s) {
        String values = "";
        // Return null for missing paths instead of throwing
        Configuration conf1 = Configuration.defaultConfiguration()
                .addOptions(Option.SUPPRESS_EXCEPTIONS);
        ReadContext ctx = JsonPath.using(conf1).parse(s.toJson());
        // Concatenate the extracted values, pipe-delimited
        for (int i = 0; i < charsetNameArray.length; i++) {
            values = values + "|" + ctx.read("$." + charsetNameArray[i]);
        }
        return values;
    }
});
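The error appears as soon as an action forces the map to execute on the executors; in my case it was a take, as seen in the stack trace (a sketch; the count of 10 here is illustrative):

// Any action triggers the NoSuchFieldError on the executors;
// the stack trace above corresponds to a take(...)
List<String> sample = fullFile.take(10);
for (String row : sample) {
    System.out.println(row);
}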
Again: running the JsonPath extraction on its own from Eclipse works fine for me; it fails only when run through spark-submit.
Thanks in advance.