I have a C# object that I am using to index documents to an existing type in Elasticsearch.
The object has a DateTime field whose mapping I want to configure using the NEST client's attribute-based mapping. I am looking to set the format to DateFormat.basic_date_time_no_millis, hoping that this will strip the milliseconds from the DateTime. The class being used as the document to index is below.
using System;
using Nest;

public class DBQueueDepth
{
    public string QueueName { get; set; }
    public int ProcessedCount { get; set; }
    public int NotProcessedCount { get; set; }
    public string Environment { get; set; }

    [Date(Format = DateFormat.basic_date_time_no_millis)]
    public DateTime SampleTimeStamp { get; set; }
}
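I have not created the mapping through NEST (the type already exists), but for reference, my understanding is that attribute-based mapping only takes effect when the mapping is generated via AutoMap; a rough sketch of what I believe that call would look like:

// Sketch only: create the index and let NEST generate the mapping for
// DBQueueDepth from its attributes via AutoMap (NEST 5.x style API).
client.CreateIndex("austin_operational_data", c => c
    .Mappings(ms => ms
        .Map<DBQueueDepth>(m => m.AutoMap())));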
The type being mapped to in Elasticsearch is:
"dbQueueDepth": {
"properties": {
"environment": {
"type": "text"
},
"notProcessedCount": {
"type": "integer"
},
"processedCount": {
"type": "integer"
},
"queueName": {
"type": "text"
},
"sampleTimeStamp": {
"type": "date",
"format": "date_time_no_millis"
}
}
My client code is:
var node = new Uri("http://localhost:9200");

var settings = new ConnectionSettings(node)
    .MaximumRetries(10)
    .MaxRetryTimeout(new TimeSpan(0, 0, 0, 10))
    .DefaultIndex("austin_operational_data")
    .InferMappingFor<DBQueueDepth>(i => i.TypeName("dbQueueDepth"));

var client = new ElasticClient(settings);
var response = client.IndexMany(rows);
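rows here is just a collection of DBQueueDepth objects; a purely illustrative example of one entry:

// Illustrative only; the real values are populated elsewhere.
var rows = new[]
{
    new DBQueueDepth
    {
        QueueName = "orders",
        ProcessedCount = 120,
        NotProcessedCount = 3,
        Environment = "UAT",
        SampleTimeStamp = DateTime.UtcNow
    }
};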
When I try to index the documents I receive the following error.
{index returned 400 _index: operational_data _type: dbQueueDepth _id: AV5rqc3g6arLsAmJzpLK _version: 0 error: Type: mapper_parsing_exception Reason: "failed to parse [sampleTimeStamp]" CausedBy: Type: illegal_argument_exception Reason: "Invalid format: "2017-09-10T12:00:41.9926558Z" is malformed at ".9926558Z""}
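The rejected value looks like the standard round-trip ("o") serialization of a UTC DateTime, which is where the seven digits of fractional seconds come from; for illustration:

// Illustration only: the round-trip format keeps fractional seconds,
// matching the value Elasticsearch rejected above.
Console.WriteLine(DateTime.UtcNow.ToString("o"));
// e.g. 2017-09-10T12:00:41.9926558Z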
Am I correct in expecting the NEST client to strip out the milliseconds based on the format in the Date attribute?
Is the InferMappingFor method instructing the client to use the formatting designated by attributes on the C# object?