
I am using Logstash to ingest data into Elasticsearch. I am using the jdbc input, and I need to parameterize the jdbc input settings, such as the connection string, password, etc., since I have 10 .conf files where each one has 30 jdbc inputs and 30 outputs.

So, since each file has the same settings, I would like to know if it is possible to do something generic, or to reference that information from somewhere.

I have this 30 times:...

input {
  # Number 1
  jdbc {
        jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/ifxjdbc-4.50.3.jar"
        jdbc_driver_class => "com.informix.jdbc.IfxDriver"
        jdbc_connection_string => "jdbc:informix-sqli://xxxxxxx/schema:informixserver=server"
        jdbc_user => "xxx"
        jdbc_password => "xxx"
        schedule => "*/1 * * * *"                    
        statement => "SELECT * FROM public.test ORDER BY id ASC"
        tags => "001"
  }

  # Number 2
  jdbc {
        jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/ifxjdbc-4.50.3.jar"
        jdbc_driver_class => "com.informix.jdbc.IfxDriver"
        jdbc_connection_string => "jdbc:informix-sqli://xxxxxxx/schema:informixserver=server"
        jdbc_user => "xxx"
        jdbc_password => "xxx"
        schedule => "*/1 * * * *"                    
        statement => "SELECT * FROM public.test2 ORDER BY id ASC"
        tags => "002"
  }


  [.........]

  # Number X
  jdbc {
        jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/ifxjdbc-4.50.3.jar"
        jdbc_driver_class => "com.informix.jdbc.IfxDriver"
        jdbc_connection_string => "jdbc:informix-sqli://xxxxxxx/schema:informixserver=server"
        jdbc_user => "xxx"
        jdbc_password => "xxx"
        schedule => "*/1 * * * *"                    
        statement => "SELECT * FROM public.testx ORDER BY id ASC"
        tags => "00x"
  }

}

filter { 

  mutate { 
    add_field => { "[@metadata][mitags]" => "%{tags}" }
  }

  # Number 1
  if "001" in [@metadata][mitags] {


        mutate { 
                  rename => [ "codigo", "[properties][codigo]" ] 
            }
  }

  # Number 2
  if "002" in [@metadata][mitags] {


        mutate { 
                  rename => [ "codigo", "[properties][codigo]" ] 
            }
  }

  [......]

  # Number x
  if "00x" in [@metadata][mitags] {


        mutate { 
                  rename => [ "codigo", "[properties][codigo]" ] 
            }
  }


  mutate {
    remove_field => [ "@version","@timestamp","tags" ]
  }



} 

output {

  # Number 1
  if "001" in [@metadata][mitags] {        
        # For ELK
        elasticsearch {
              hosts => "localhost:9200"
              index => "001"
              document_type => "001"
              document_id => "%{id}"

              manage_template => true
              template => "/home/user/logstash/templates/001.json"
              template_name => "001"
              template_overwrite => true
        }
  } 

   # Number 2
  if "002" in [@metadata][mitags] {        
        # For ELK
        elasticsearch {
              hosts => "localhost:9200"
              index => "002"
              document_type => "002"
              document_id => "%{id}"

              manage_template => true
              template => "/home/user/logstash/templates/002.json"
              template_name => "002"
              template_overwrite => true
        }
  }

  [....]

   # Number x
  if "00x" in [@metadata][mitags] {        
        # For ELK
        elasticsearch {
              hosts => "localhost:9200"
              index => "00x"
              document_type => "00x"
              document_id => "%{id}"

              manage_template => true
              template => "/home/user/logstash/templates/00x.json"
              template_name => "00x"
              template_overwrite => true
        }
  }

}
  • How are you starting logstash? Command line or service? You have the option of using [environment variables](https://www.elastic.co/guide/en/logstash/current/environment-variables.html) in configs, but I don't think that it would help your case, you would still need one file for each config. Can you share your full config, with filters and output? Maybe you can have your inputs in a folder and use the same output, filtering with tags, which would also be in the index name. – leandrojmp May 21 '20 at 13:50
  • Thanks for responding! I just modified the post to show the config fully; I hope you can help me :) – Max May 21 '20 at 14:28
  • I use logstash as a service – Max May 21 '20 at 14:44
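
The environment variables mentioned in the comments can at least remove the duplicated connection settings from each jdbc block. A minimal sketch (the variable names `JDBC_CONNECTION_STRING`, `JDBC_USER`, and `JDBC_PASSWORD` are placeholders, not anything Logstash defines); note that when Logstash runs as a service, the variables must be visible to the service process (e.g. set in the service's environment file or systemd unit), not just in your interactive shell:

```
input {
  jdbc {
    jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/ifxjdbc-4.50.3.jar"
    jdbc_driver_class => "com.informix.jdbc.IfxDriver"
    # ${VAR} is substituted by Logstash at startup; ${VAR:default} allows a fallback
    jdbc_connection_string => "${JDBC_CONNECTION_STRING}"
    jdbc_user => "${JDBC_USER}"
    jdbc_password => "${JDBC_PASSWORD}"
    schedule => "*/1 * * * *"
    statement => "SELECT * FROM public.test ORDER BY id ASC"
    tags => "001"
  }
}
```

This does not reduce the number of jdbc blocks, but changing a password or host then means editing one environment file instead of 10 .conf files.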

1 Answer


You will still need one jdbc input for each query you need to run, but you can simplify your filter and output blocks.

In your filter block you are using the field [@metadata][mitags] to select which filters apply to each input, but you are applying the same mutate filter to every one of them. If that is the case, you don't need the conditionals; the same mutate filter can be applied to all your inputs unconditionally.

Your filter block could be reduced to something like this:

filter {
    mutate { 
        add_field => { "[@metadata][mitags]" => "%{tags}" }
    }
    mutate { 
        rename => [ "codigo", "[properties][codigo]" ] 
    }
    mutate {
        remove_field => [ "@version","@timestamp","tags" ]
    }
}

In your output block you use the tag only to change the index, document_type, and template. You don't need conditionals for that; you can use the value of the field as a parameter.

output {
    elasticsearch {
        hosts => "localhost:9200"
        index => "%{[@metadata][mitags]}"
        document_type => "%{[@metadata][mitags]}"
        document_id => "%{id}"
        manage_template => true
        template => "/home/unitech/logstash/templates/%{[@metadata][mitags]}.json"
        template_name => "iol-fue"
        template_overwrite => true
    }
}

But this only works if you have a single value in the field [@metadata][mitags], which seems to be the case.

EDIT: Kept for historical reasons. As noted in the comments, the template option does not accept dynamic parameters, since the template is only loaded when Logstash starts; the other options work fine.
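
As discussed in the comments below, one way around the static template limitation is to stop managing templates from Logstash and install a single template directly in Elasticsearch whose index_patterns covers all the numeric indices. A minimal sketch (the template name and the `codigo` mapping are illustrative, the exact mapping body depends on your Elasticsearch version, and this only works if the field types of the different indices don't conflict):

```
PUT _template/all-jdbc-indices
{
  "index_patterns": ["0*"],
  "mappings": {
    "properties": {
      "properties": {
        "properties": {
          "codigo": { "type": "keyword" }
        }
      }
    }
  }
}
```

With the template installed once, the elasticsearch output can set manage_template => false and drop the template, template_name, and template_overwrite options, keeping only the dynamic index name.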

  • Unfortunately it doesn't work; it throws the following error: [2020-05-21T14:21:00,591][ERROR][logstash.outputs.elasticsearch] Invalid setting for elasticsearch output plugin: output { elasticsearch { # This setting must be a path # File does not exist or cannot be opened /home/unitech/logstash/templates/%{[@metadata][miTemplate]}.json template => "/home/unitech/logstash/templates/%{[@metadata][miTemplate]}.json" ... } } – Max May 21 '20 at 17:22
  • Did you create a field named `[@metadata][miTemplate]`? What value does it receive? You already have the value `00X` in the field `[@metadata][mitags]`; there is no need to create another field. – leandrojmp May 21 '20 at 17:39
  • It turns out that tags is apparently an array type; that is typical of jdbc. What I did was create a type and assign it an example value. Anyway, I tried both and got the same message. tags => "base001" type => "base001" add_field => { "[@metadata][miType]" => "%{type}" } add_field => { "[@metadata][miTemplate]" => "org_001" } – Max May 21 '20 at 18:11
  • 1
    apparently you can't https://stackoverflow.com/questions/26724871/logstash-dynamically-assign-template – Max May 21 '20 at 18:39
  • Oh, I didn't know that. What is the difference between the template for the `001` input and the one for the `002` input, for example? Looking closer, you are also using the same template name for every output; you could have a single template that matches all your indices, just use `["0*"]` as your [`index_patterns`](https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html) – leandrojmp May 21 '20 at 18:52
  • 1
    The templates differ completely in fields and structure, so they need to be different. The name, yes, but only for the example here. In my file that is different. Thanks for everything! I will continue doing it one below the other :) since dynamism is not possible. – Max May 21 '20 at 19:12
  • If you don't have any type conflicts between your fields, like a field being a keyword in one and an integer in another, or a string in one and a JSON object in another, you can have all your fields in one template to make things simpler. – leandrojmp May 21 '20 at 19:30