
I'm using the JDBC input plugin to ingest data from MongoDB into Elasticsearch. My config is:

```
input {
  jdbc {
    jdbc_driver_class => "mongodb.jdbc.MongoDriver"
    jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/mongodb_unityjdbc_free.jar"
    jdbc_user => ""
    jdbc_password => ""
    jdbc_connection_string => "jdbc:mongodb://localhost:27017/pritunl"
    schedule => "* * * * *"
    jdbc_page_size => 100000
    jdbc_paging_enabled => true
    statement => "select * from servers_output"
  }
}
filter {
  mutate {
    copy => { "_id" => "[@metadata][id]" }
    remove_field => ["_id"]
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "pritunl"
    document_id => "%{[@metadata][_id]}"
  }
  stdout {}
}
```

In Kibana I see only one hit, but in stdout I see many records from the MongoDB collection. What should I do to see them all?

1 Answer


The problem is that all your documents are saved with the same `_id`, so even though you're sending different records to ES, each one overwrites the previous document internally, and you end up with a single hit in Kibana.
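A quick way to confirm this, assuming Elasticsearch is reachable on `localhost:9200` as in your config: `curl 'localhost:9200/pritunl/_count?pretty'` should report a document count of 1 even after several scheduled runs.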

This is caused by a typo in your configuration.

You're copying `_id` into `[@metadata][id]`, but `document_id` reads `[@metadata][_id]`, with an underscore. Since that metadata field is never set, the `%{...}` reference doesn't resolve, and every event is indexed under the same literal string `%{[@metadata][_id]}`.

Removing the underscore when reading the value for `document_id` should fix your issue:

```
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "pritunl"
    document_id => "%{[@metadata][id]}"
  }
  stdout {}
}
```
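Alternatively, you could leave the output block untouched and add the underscore on the filter side instead. This is just the mirror image of the same fix; either variant works as long as the two field references match. A minimal sketch:

```
filter {
  mutate {
    # copy into [@metadata][_id] so it matches document_id => "%{[@metadata][_id]}"
    copy => { "_id" => "[@metadata][_id]" }
    remove_field => ["_id"]
  }
}
```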
Milen Georgiev