
Background: I am trying to index data from Postgres into Elasticsearch, following the steps below:

Step 1: Bulk-import data from Postgres using the Logstash jdbc input plugin.

Step 2: Synchronize further metadata changes from the application using the Elasticsearch REST APIs (for CRUD).

To start, I created the document mapping in Elasticsearch as follows:

   {"metatestsample": {
        "properties": {
           "business_number": {
              "type": "long"
           },
           "business_number_type": {
              "type": "string",
              "index":"not_analyzed"
           },
           "document_id": {
              "type": "long"
           },
           "document_location": {
              "type": "string",
              "index":"not_analyzed"
           },
           "document_number": {
              "type": "string",
              "index":"not_analyzed"
           },
           "document_status": {
              "type": "string",
              "index":  "not_analyzed"
           },
           "country": {
              "type": "string",
              "index":  "not_analyzed"
           },
           "document_created": {
              "type": "date",
              "format": "yyyy-MM-dd'T'HH:mm:ss"
           },
           "customer": {
              "properties": {
                 "customer_id": {
                    "type": "long"
                 },
                 "customer_number": {
                    "type": "string",
                    "index":  "not_analyzed"
                 },
                 "customer_name": {
                    "type": "string",
                    "index":  "not_analyzed"
                 },
                 "address1":{
                    "type": "string",
                    "index":  "not_analyzed"
                 },
                 "address2":{
                    "type": "string",
                    "index":  "not_analyzed"
                 },
                 "city":{
                    "type": "string",
                    "index":  "not_analyzed"
                 },
                 "state":{
                    "type": "string",
                    "index":  "not_analyzed"
                 },
                 "zip":{
                    "type": "string",
                    "index":  "not_analyzed"
                 },
                 "country":{
                    "type": "string",
                    "index":  "not_analyzed"
                 },
                 "phone":{
                    "type": "string",
                    "index":  "not_analyzed"
                 },
                 "fax":{
                    "type": "string",
                    "index":  "not_analyzed"
                 },
                 "email":{
                    "type": "string",
                    "index":  "not_analyzed"
                 },
                 "contact_name":{
                    "type": "string",
                    "index":  "not_analyzed"
                 },
                 "customer_created":{
                    "type": "date",
                    "format": "yyyy-MM-dd'T'HH:mm:ss"
                 },
                 "customer_modified":{
                    "type": "date",
                    "format": "yyyy-MM-dd'T'HH:mm:ss"
                 },
                 "type":{
                    "type": "string",
                    "index":  "not_analyzed"
                 }
              }
           },

           "expiration_date": {
              "type": "date",
              "format": "YYYY-MM-DD"
           },
           "legacy_document_id": {
              "type": "string"
           },
           "document_modified": {
              "type": "date",
              "format": "yyyy-MM-dd'T'HH:mm:ss"
           },
           "review_date": {
              "type": "date",
              "format": "YYYY-MM-DD"
           },
           "valid": {
              "type": "boolean"
           },
           "invalid_reason": {
              "type": "string",
              "index":  "not_analyzed"
           }
        }
     } }
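
For reference, a mapping like this would typically be applied with a call along these lines (a sketch: the host/port are assumed defaults, the index name `indexname` is taken from the Logstash output below, and `mapping.json` is a hypothetical file holding the JSON above):

    # assumes the index already exists; applies the mapping for type "metatestsample"
    curl -XPUT 'http://localhost:9200/indexname/_mapping/metatestsample' -d @mapping.json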

And added the Logstash config for the JDBC input plugin:

  input{
    jdbc {
         # Postgres jdbc connection string to our database, mydb
         jdbc_connection_string => "jdbc:postgresql://localhost:5432/mydb"
         # The user we wish to execute our statement as
         jdbc_user => "user"
         jdbc_password => "password"
         # The path to our downloaded jdbc driver
         jdbc_driver_library => "/data/logstash/postgresql-9.4-1204.jdbc4.jar"
         # The name of the driver class for Postgresql
         jdbc_driver_class => "org.postgresql.Driver"
         jdbc_validate_connection => true
         # our query
         statement_filepath => "testindex.sql"
     }
  }

 output{
   elasticsearch{
     action =>"update"
     index => "indexname"
     manage_template => false
     document_id => "%{uid}"
     doc_as_upsert => true
     hosts => ["192.168.56.105"]
   }
 }
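
With both sections saved in a single config file, the pipeline would be run with something like this (the file name is illustrative):

    # run the pipeline; with no schedule set, the jdbc input executes the
    # statement once and Logstash shuts down when the input finishes
    bin/logstash -f postgres-to-es.conf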

Note: `testindex.sql` (referenced by `statement_filepath`) contains the SELECT statement that queries the data from Postgres.
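
For context, with `action => "update"` and `doc_as_upsert => true` the output does not plain-index each row: every event, keyed on `%{uid}`, is sent as an update-with-upsert, roughly equivalent to the call below (a sketch: the type name and field values are illustrative, and `12345` stands in for a real `uid`):

    # partial update that creates the document if it does not exist yet
    curl -XPOST 'http://192.168.56.105:9200/indexname/metatestsample/12345/_update' -d '{
      "doc": { "document_status": "ACTIVE" },
      "doc_as_upsert": true
    }'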

Problem: When I insert data directly into Elasticsearch using the REST API, it works and the document is indexed without any error.
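
For example, a direct insert along these lines succeeds (field values are illustrative):

    # index a document directly via the REST API
    curl -XPOST 'http://localhost:9200/indexname/metatestsample' -d '{
      "document_id": 12345,
      "document_status": "ACTIVE"
    }'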

But when I try to insert documents through the above Logstash JDBC pipeline, I get the error below. Am I missing a flag or config parameter?

status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"Mapper for [expiration_date] conflicts with existing mapping in other types:\n[mapper [expiration_date] is used by multiple types. Set update_all_types to true to update [format] across all types.]"}}}, :level=>:warn}
  • Do you really have an index template named `metatestsample` installed in ES? Please share the output of the following command: `curl -XGET localhost:9200/_template` – Val Mar 04 '16 at 05:55
  • The output is `{ }`. Thanks, I realized that the `template_name` parameter is not needed, and I updated the config with `manage_template => false`, but I'm still getting the same error. – ni3ns Mar 04 '16 at 19:26
  • You probably need to delete your index and recreate it properly; you seem to have conflicting mapping types. The only way out is a clean index/mapping (see the sketch after this thread). – Val Mar 04 '16 at 19:54
  • Thanks, it worked fine. – ni3ns Mar 24 '16 at 18:22
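
As a follow-up for future readers, the fix Val suggests boils down to dropping the index and recreating it with a single consistent mapping before re-running Logstash. A minimal sketch, assuming default host/port and the full mapping above saved in a hypothetical `mapping.json`:

    # drop the index that holds the conflicting types
    curl -XDELETE 'http://localhost:9200/indexname'
    # recreate it and apply the mapping up front
    curl -XPUT 'http://localhost:9200/indexname'
    curl -XPUT 'http://localhost:9200/indexname/_mapping/metatestsample' -d @mapping.json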
