
I am indexing rows from MySQL into Elasticsearch using Logstash. My conf looks like this:

input {
    jdbc {
        jdbc_connection_string => "jdbc:mysql://localhost:3306/sample"
        jdbc_user => "root"
        jdbc_password => ""
        jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/mysql-connector-java-5.1.48.jar"
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        tracking_column => "time"
        use_column_value => true
        statement => "SELECT time, firstname, lastname, email FROM sample.sample WHERE time > :sql_last_value"
        schedule => "* * * * * *"
    }
}
output {
    elasticsearch {
        document_id => "%{time}"
        document_type => "doc"
        hosts => ["myhost"]
        index => "sample"
    }
    stdout {
        codec => rubydebug
    }
}

When I run Logstash with this file, the generated query is:

SELECT time, firstname, lastname, email FROM sample.sample where time > 351

But the time column I'm tracking is a CURRENT_TIMESTAMP column in the MySQL database, and I expected :sql_last_value to hold the last value of that field. Am I doing something wrong with sql_last_value?


1 Answer


What I was doing wrong: tracking_column_type must be specified when the tracking column is not numeric (the default type). In my case, tracking_column is a timestamp, so I needed to add:

tracking_column_type => "timestamp"

That solved my problem.
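For completeness, a minimal sketch of the corrected jdbc input (connection string, driver path, and query are the ones from the question):

```
input {
    jdbc {
        jdbc_connection_string => "jdbc:mysql://localhost:3306/sample"
        jdbc_user => "root"
        jdbc_password => ""
        jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/mysql-connector-java-5.1.48.jar"
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        tracking_column => "time"
        # Tell the plugin the tracked column holds a timestamp, not a number;
        # without this, :sql_last_value is treated as numeric (starting at 0).
        tracking_column_type => "timestamp"
        use_column_value => true
        statement => "SELECT time, firstname, lastname, email FROM sample.sample WHERE time > :sql_last_value"
        schedule => "* * * * * *"
    }
}
```

With tracking_column_type set to "timestamp", :sql_last_value starts at the epoch (1970-01-01 00:00:00) on the first run and is then updated with the last timestamp seen, so the WHERE clause compares timestamps instead of integers.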
