
I have a use case where I want to redirect the GKE container logs of two services which have json.key = "abc" to Pub/Sub without using the Log Router service, so I am using Fluentd for this. I am able to route the logs to Pub/Sub, but the filter is not working. My fluentd.conf looks like this:

<source>
  @type tail
  path /var/log/containers/container-name-*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag test.log
  read_from_head true
  <parse>
    @type none
  </parse>
</source>

<filter test.log>
  @type grep
  <regexp>
    key key
    pattern /abc/
  </regexp>
</filter>

<match test.log>
  @type gcloud_pubsub
  project_id gcp-project-id
  topic gcp-topic
  flush_interval 10s
  num_threads 1
</match>

My logs are in this format: {"message":"Rules received","key":"abc"}
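
For context on why the grep filter never matches as written: the none parser stores the whole raw line in a single field (message by default), so the record has no top-level key field for grep to inspect. A minimal sketch of the source block, assuming each log line really is the flat JSON shown above, would parse it as JSON instead:

<source>
  @type tail
  path /var/log/containers/container-name-*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag test.log
  read_from_head true
  <parse>
    # parse each line as JSON so "key" becomes a top-level record field
    @type json
  </parse>
</source>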

  • why do you need fluentd? why don't you just create a sink for container logs in Cloud Logging [link](https://cloud.google.com/logging/docs/export/configure_export_v2)? – Atef Hares May 04 '23 at 09:44
  • We have disabled cluster logging, so it will not work; that is why we are moving to Fluentd. – Abhinav May 04 '23 at 10:19
  • Why disable cluster logging? It's better to keep the logs and to create an exclusion filter in Cloud Logging. That way you can discard all the cluster logs except the interesting ones that you sink to a Pub/Sub topic. – guillaume blaquiere May 04 '23 at 20:18
  • @Abhinav: In the parse section, you're using `none` but your logs are in JSON format so you should be using `json` instead. See https://docs.fluentd.org/configuration/parse-section#type and https://docs.fluentd.org/parser/json. – Azeem May 05 '23 at 06:47
  • @Azeem: I did that too, but it is still not working. I updated the ConfigMap with the following: `@type json` in the parse section, a `record_transformer` filter with `key ${record["key"]}`, and a `grep` filter with `key key` and `pattern /abc/`. – Abhinav May 05 '23 at 09:18
  • @Abhinav: Please test only with `tail` and `stdout` to verify that you're getting the correct JSON logs. See https://docs.fluentd.org/output/stdout. – Azeem May 05 '23 at 09:36
  • Yes, I tested that. The format was as expected, but after adding the filter I am facing this issue. – Abhinav May 05 '23 at 09:44
  • What is the output that you get with `stdout`? – Azeem May 05 '23 at 09:51
  • something like this {"x-request-id":"XX","traceId":"XXX","eventId":"XXXXX","entityType":"XXXX","entityId":"XXXX","mimeType":"application/json","datetime":"XXX","timestampSeconds":1683193027,"timestampNanos":831000000,"severity":"INFO","thread":"XXX","logger":"XXXX","message":"{\"message\":\"Rules received\",\"key\":\"abc\"} – Abhinav May 05 '23 at 10:03
  • Right. So, `key` is there and it's in JSON. Now, the problem is the filtering. Your filter should work with your current `grep` configuration. Did you check if there's some case-sensitivity issue? Given that `abc` is a dummy value, it should match. – Azeem May 05 '23 at 10:12
  • Yes, I did. I copied and pasted both the key and the value. – Abhinav May 05 '23 at 10:27
  • That JSON is invalid. See https://jqplay.org/s/2XGvPBqsEav. Fixing it shows that the `key` you want to filter is part of the nested JSON. See https://jqplay.org/s/MigdP25I7Um. So, you need to first extract that, normalize it, and then the filtering should work. – Azeem May 05 '23 at 10:32
  • Do you have any suggestions or articles for this? – Abhinav May 05 '23 at 10:37
  • It has already been answered. You'll find multiple threads on "nested JSON" in conjunction with Fluentd. See an example here: https://stackoverflow.com/questions/56049210/fluentd-nested-json-parsing. – Azeem May 05 '23 at 10:49
  • @Azeem: I tried the nested approach too, but I'm still not getting logs. I have configured the following: a `record_transformer` filter with `enable_ruby true`, `nested ${record.dig("log", "header", "nested")}`, and `remove_keys log`, followed by a `grep` filter with `key nested` and `pattern /.*abc.*/`. – Abhinav May 15 '23 at 06:15
  • What you're doing is different. Please refer to the above thread that I linked in my comment. See https://stackoverflow.com/questions/56049210/fluentd-nested-json-parsing. You may find other similar threads on SO. – Azeem May 15 '23 at 06:31
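
Based on the comment discussion above, the escaped JSON that contains `key` sits inside the `message` field of the outer record, so it has to be re-parsed before grep can filter on it. A minimal sketch of the filter chain, assuming the source block parses the outer line with `@type json` and the inner payload lives in `message` (field names taken from the sample output in the comments):

# Re-parse the escaped JSON string in the "message" field so that its
# fields (including "key") are merged into the record as top-level keys.
<filter test.log>
  @type parser
  key_name message
  reserve_data true
  remove_key_name_field true
  <parse>
    @type json
  </parse>
</filter>

# Keep only records whose "key" field matches "abc".
<filter test.log>
  @type grep
  <regexp>
    key key
    pattern /abc/
  </regexp>
</filter>

With `reserve_data true` the outer fields are kept alongside the parsed ones; drop it if only the inner payload should be forwarded to Pub/Sub. The existing gcloud_pubsub match block stays unchanged.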
