I have a one-column Spark DataFrame:

<class 'pyspark.sql.dataframe.DataFrame'>
StructType(List(StructField(updateDate,TimestampType,true)))
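For reference, here is a minimal snippet that reproduces a DataFrame with this schema (my real data comes from an upstream table; this is only to illustrate the shape):

from pyspark.sql import SparkSession
from pyspark.sql.functions import to_timestamp

spark = SparkSession.builder.getOrCreate()

# One column, TimestampType, nullable - same schema as above.
table = spark.createDataFrame([("2017-10-27 00:00:00",)], ["updateDate"]) \
    .select(to_timestamp("updateDate").alias("updateDate"))

table.printSchema()
# root
#  |-- updateDate: timestamp (nullable = true)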

When writing to Elasticsearch with Spark, the updateDate field is not recognized as a date and is written as a Unix timestamp in milliseconds (a long).

def write_to_elastic(table, destination):
    # 'ce' is a local config module holding the cluster settings.
    table.write \
      .format("org.elasticsearch.spark.sql") \
      .option("es.mapping.date.rich", "true") \
      .mode("overwrite") \
      .option("es.index.auto.create", "true") \
      .option("es.resource", destination + "/table") \
      .option("es.nodes", ce.es_nodes) \
      .option("es.net.ssl", "true") \
      .option("es.nodes.wan.only", "true") \
      .option("es.net.http.auth.user", ce.es_user) \
      .option("es.field.read.empty.as.null", "yes") \
      .option("es.net.http.auth.pass", ce.es_password) \
      .save()
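It is invoked like this (the destination here matches the test-date index shown below):

write_to_elastic(table, "test-date")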

Here is the index that was created (note that updateDate is mapped as long):

{
  "test-date": {
    "aliases": {},
    "mappings": {
      "table": {
        "properties": {
          "updateDate": {
            "type": "long"
          }
        }
      }
    },
    "settings": {
      "index": {
        "creation_date": "1517000418516",
        "number_of_shards": "5",
        "number_of_replicas": "1",
        "uuid": "DMYyE1NPTpyE9HuKI29BqA",
        "version": {
          "created": "6010299"
        },
        "provided_name": "test-date"
      }
    }
  }
}
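For comparison, this is the mapping I would have expected the connector to produce for a TimestampType column (written by hand here, not taken from the cluster):

{
  "test-date": {
    "mappings": {
      "table": {
        "properties": {
          "updateDate": {
            "type": "date"
          }
        }
      }
    }
  }
}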

If I write the Spark DataFrame to a file instead, the date field is written as 2017-10-27T00:00:00.000Z.
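To be concrete, this is roughly the file write I mean (the path and output format here are illustrative, not the exact call from my job):

table.write.mode("overwrite").json("/tmp/test-date-out")
# Each output line looks like: {"updateDate":"2017-10-27T00:00:00.000Z"}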

What could be causing this behavior?
