
I am sending logs to my Coralogix account using Fluentd.

I configured everything and got td-agent.service running properly and without errors, as shown in td-agent.log. However, I still can't find the logs in my account.

Here are the logs from my td-agent.log:

2023-02-04 20:09:08 +0800 [info]: init supervisor logger path=nil rotate_age=nil rotate_size=nil
2023-02-04 20:09:08 +0800 [info]: #0 init worker0 logger path=nil rotate_age=nil rotate_size=nil
2023-02-04 20:09:08 +0800 [info]: adding match pattern="application.log" type="http"
2023-02-04 20:09:08 +0800 [warn]: #0 Use different plugin for secondary. Check the plugin works with primary like secondary_file primary="Fluent::Plugin::HTTPOutput" secondary="Fluent::Plugin::StdoutOutput"
2023-02-04 20:09:08 +0800 [info]: adding source type="tail"
2023-02-04 20:09:08 +0800 [info]: #0 starting fluentd worker pid=5624 ppid=5621 worker=0
2023-02-04 20:09:08 +0800 [info]: #0 following tail of /var/log/Log.log
2023-02-04 20:09:08 +0800 [info]: #0 fluentd worker is now running worker=0

Please see my td-agent.conf below for reference:

<source>
  @type tail
  @id tail_var_logs
  @label @CORALOGIX
  read_from_head true
  tag application.log
  path /var/log/Log.log
  pos_file /var/log/td-agent/tmp/coralog.pos
  path_key path
  <parse>
    @type none
  </parse>
</source>

<label @CORALOGIX>
  <filter application.log>
    @type record_transformer
    @log_level warn
    enable_ruby true
    auto_typecast true
    renew_record true
    <record>
      applicationName "Example_App"
      subsystemName "Example_Subsystem"
      #text ${record.to_json}
    </record>
  </filter>

  <match application.log>
    @type http
    endpoint https://api.coralogixsg.com/logs/rest/singles
    headers {"private_key":"<my private key>"}
    retryable_response_codes 503
    error_response_as_unrecoverable false
    <buffer>
      @type memory
      chunk_limit_size 10MB
      compress gzip
      flush_interval 1s
      retry_max_times 5
      retry_type periodic
      retry_wait 2
    </buffer>
    <secondary>
      # If any messages fail to send, they will be sent to STDOUT for debugging.
      @type stdout
    </secondary>
  </match>
</label>

Please see the verbose logs from running `td-agent -vv`:

2023-02-05 08:48:49 +0800 [trace]: #0 fluent/log.rb:287:trace: enqueueing all chunks in buffer instance=2000
2023-02-05 08:48:54 +0800 [trace]: #0 fluent/log.rb:287:trace: enqueueing all chunks in buffer instance=2000
2023-02-05 08:49:00 +0800 [trace]: #0 fluent/log.rb:287:trace: enqueueing all chunks in buffer instance=2000
2023-02-05 08:49:05 +0800 [trace]: #0 fluent/log.rb:287:trace: enqueueing all chunks in buffer instance=2000
2023-02-05 08:49:10 +0800 [trace]: #0 fluent/log.rb:287:trace: enqueueing all chunks in buffer instance=2000
2023-02-05 08:49:16 +0800 [debug]: #0 fluent/log.rb:309:debug: tailing paths: target = /var/log/Log.log | existing = /var/log/Log.log
  • Hi! Please include your relevant fluentd config in your question. Also, add the fluentd startup logs. If you're using any kind of buffering or using the default one, please do mention that as well. Thanks! – Azeem Feb 04 '23 at 15:05
  • Hi @Azeem, thank you for your message. I have edited my question, adding the source and match block config of my td-agent. Thanks – EngineerDegz Feb 04 '23 at 21:55
  • Thanks! Do you see any logs printed on STDOUT? – Azeem Feb 05 '23 at 06:03
  • Hi @Azeem, there are no error logs in stdout. There's no manual execution of any script either, by the way. I have observed, however, a continuous verbose message that doesn't seem normal: 2023-02-05 11:19:38 +0800 [trace]: #0 fluent/log.rb:287:trace: enqueueing all chunks in buffer instance=2020 – EngineerDegz Feb 05 '23 at 07:12
  • Hi! What about HTTP logs? Does your `stdout` output plugin show the HTTP logs? – Azeem Feb 05 '23 at 07:13
  • Right. Instead of interval-based buffering, did you try with `immediate` flushing? – Azeem Feb 05 '23 at 07:15
  • For debugging, you can disable (comment out) your output plugin config and only test with `stdout` with immediate flushing. That'll make sure that the input is fine and its messages are what you expect. – Azeem Feb 05 '23 at 07:17
  • Hi @Azeem, noted on this. I’ll check on it and update this thread. – EngineerDegz Feb 05 '23 at 07:18
  • Hi @Azeem, I am not sure how to make fluentd work without an output plugin. What I did instead was use the file output plugin in place of http, and the logs were successfully written to the destination file I designated. So I think the input source config is fine. – EngineerDegz Feb 05 '23 at 07:41
  • Right. Sure, if you already have verified it with `file` then it's fine. No need to verify it with `stdout`. Now, the only thing I see is the connectivity issues between the source and destination machines. You need to observe any connectivity-related error logs in the fluentd logs. You may increase the log level for more detailed logs. – Azeem Feb 05 '23 at 07:55
  • Hi @Azeem, I was able to establish the connection to my http endpoint. However, the logs were not yet pushed due to this error: "buffer flush took longer time than slow_flush_log_threshold: " are you familiar with this? – EngineerDegz Feb 06 '23 at 02:49
  • Hi! Good to hear that! That means you need to tweak your output buffering. See https://docs.fluentd.org/configuration/buffer-section and https://docs.fluentd.org/buffer; a sketch of the relevant knobs follows this thread. – Azeem Feb 06 '23 at 04:04
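Following up on the last comments, here is a minimal sketch of the output-side parameters the linked buffer docs cover, applied to the `http` match from the question. `slow_flush_log_threshold` and `flush_thread_count` are standard Fluentd output/buffer parameters; the values shown (40 seconds, 4 threads) are illustrative assumptions, not tested recommendations.

<match application.log>
  @type http
  endpoint https://api.coralogixsg.com/logs/rest/singles
  headers {"private_key":"<my private key>"}
  # Seconds before a flush is logged as slow; the default is 20.0.
  slow_flush_log_threshold 40.0
  <buffer>
    @type memory
    chunk_limit_size 10MB
    compress gzip
    flush_interval 1s
    # Extra flush threads can help when individual HTTP requests are slow.
    flush_thread_count 4
    retry_max_times 5
    retry_type periodic
    retry_wait 2
  </buffer>
</match>

If flushes still exceed the threshold, reducing `chunk_limit_size` shrinks each HTTP payload, which usually shortens individual flush times.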

1 Answer


@Azeem What did you do to resolve the connection issue to your HTTP endpoint? I'm encountering similar issues: nothing is showing up in the buffer directory, even though the server is clearly receiving many logs from a syslog forwarder. As a result, nothing is going into our destination S3 bucket. My config is below.

<source>
  @type syslog
  protocol_type tcp
  port 514
  bind 0.0.0.0
  <parse>
    @type regexp
    expression /^(?<message>.*)/
  </parse>
  tag firewall
</source>

<filter Company.name.**>
  @type record_transformer
  <record>
    tag ${tag}
    time ${time}
  </record>
</filter>


<match Company.name.**>
  @type s3
  aws_key_id redacted
  aws_sec_key redacted
  s3_bucket test-bucket
  s3_region us-east-1
  s3_object_key_format %{path}/%{time_slice}_%{index}.%{file_extension}
  include_time_key true
  time_slice_format %Y%m%d
  buffer_type file
  flush_interval 30s
  timekey 5
  timekey_use_utc true
  buffer_path /var/log/td-agent/buffer/s3
  buffer_chunk_limit 5MB
  <store>
    path ${tag[0]}/${tag[2]}/%Y/%m/%d/
  </store>
</match>
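For comparison, here is a hedged cleanup of the config above, assuming fluent-plugin-s3 with the v1 buffer syntax: the original mixes v0.12 parameters (`buffer_type`, `buffer_path`, `buffer_chunk_limit`, `time_slice_format`) with v1 ones (`timekey`), and `<store>` belongs to the `copy` plugin rather than `s3`, where `path` is a top-level parameter. Note also that the `syslog` input tags events as `firewall.<facility>.<priority>` given `tag firewall`, so `<match Company.name.**>` would never match them, which could explain the empty buffer directory. The bucket, keys, and paths are the placeholders from the original; `timekey 1d` assumes the daily slicing implied by `time_slice_format %Y%m%d`.

<match firewall.**>
  @type s3
  aws_key_id redacted
  aws_sec_key redacted
  s3_bucket test-bucket
  s3_region us-east-1
  # path is a top-level s3 parameter; <store> is only meaningful under copy
  path ${tag[0]}/${tag[2]}/%Y/%m/%d/
  s3_object_key_format %{path}%{time_slice}_%{index}.%{file_extension}
  # tag and time chunk keys are required by the ${tag[...]} and %Y/%m/%d placeholders
  <buffer tag,time>
    @type file
    path /var/log/td-agent/buffer/s3
    timekey 1d
    timekey_use_utc true
    timekey_wait 10m
    chunk_limit_size 5MB
  </buffer>
</match>

A quick way to check the tagging is to temporarily replace the s3 match with `<match firewall.**> @type stdout </match>`; if events print to stdout, the input side is fine and the remaining problem is in the output config.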