
I'm trying to index data in Elasticsearch using elasticsearch-spark-2.1.0 with spark-1.3.1, but I'm getting the following error:

org.elasticsearch.hadoop.serialization.EsHadoopSerializationException: Cannot handle type [class scala.collection.immutable.Map$Map3] within type [class scala.collection.immutable.Map$Map4], instance [Map(word -> ..., pos -> ...)] within instance [Map(page_title -> ..., full -> ..., tokens -> [Lscala.collection.immutable.Map;@1efb3e9)] using writer [org.elasticsearch.spark.serialization.ScalaValueWriter@200c86fd]

Here is the code where I'm indexing a Spark RDD.

val spark = new SparkContext(...)
val filesRDD = spark.wholeTextFiles("hdfs://" + source_dir + "/*", 200)

// val sentenceList: RDD[Map[String, Object with Serializable { .. }]]
val sentenceList = filesRDD.flatMap(file => ...)
  .flatMap { page =>
    page.sentences.map { sentence =>
      Map("page_title" -> page.title,
        "full" -> sentence.map(_.word).mkString(" "),
        "tokens" -> sentence.map { t =>
          Map("word" -> t.word, "pos" -> t.pos)
        }.toArray)
    }
  }

EsSpark.saveToEs(sentenceList, ES_RESOURCE)

Why can't I index a Map within a Map and how can I solve it? Thanks.

kolam
  • No idea why it doesn't work, but as an idea for a different approach: You could try to make a separate class that holds the data. That class should be serializable of course. – JayL Aug 10 '15 at 05:06

1 Answer


I finally solved the problem.

I simply removed the .toArray call in the Map; it seems the library cannot serialize an Array of Maps.

The resulting Map is:

Map("page_title" -> page.title,
    "full" -> sentence.map(_.word).mkString(" "),
    "tokens" -> sentence.map { t =>
      Map("word" -> t.word, "pos" -> t.pos)
    })
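
For reference, here is a minimal, self-contained sketch of the working approach. The host, the "pages/sentence" resource name and the sample document are placeholders rather than values from the original code; the key point is that the nested token maps stay in a Scala Seq instead of an Array.

import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark.rdd.EsSpark

// Sketch only: "localhost" and "pages/sentence" are placeholder values.
val conf = new SparkConf()
  .setAppName("es-nested-maps")
  .set("es.nodes", "localhost")
  .set("es.index.auto.create", "true")
val sc = new SparkContext(conf)

// Each document is a Map; the nested token maps are kept in a Seq, not an Array.
val docs = sc.parallelize(Seq(
  Map(
    "page_title" -> "Example page",
    "full"       -> "An example sentence",
    "tokens"     -> Seq(
      Map("word" -> "An",       "pos" -> "DT"),
      Map("word" -> "example",  "pos" -> "NN"),
      Map("word" -> "sentence", "pos" -> "NN")
    )
  )
))

EsSpark.saveToEs(docs, "pages/sentence")
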
kolam