
We have the following record_transformer filter config in our Fluentd pipeline:

<filter docker.**>
  @type record_transformer
  enable_ruby true
  <record>
    servername as1
    hostname "#{Socket.gethostname}"
    project xyz
    env prod
    service ${record["docker"]["labels"]["com.docker.compose.service"]}
  </record>
  remove_keys $.docker.container_hostname, $.docker.id, $.docker.image_id, $.docker.labels.com.docker.compose.config-hash, $.docker.labels.com.docker.compose.oneoff, $.docker.labels.com.docker.compose.project, $.docker.labels.com.docker.compose.service
</filter>
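
For reference, a record that has passed through this filter would look roughly like the sketch below (the hostname and service values are assumed examples, not taken from our actual logs):

{
  "log": "...",
  "servername": "as1",
  "hostname": "docker-host-1",
  "project": "xyz",
  "env": "prod",
  "service": "web"
}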

We are using the S3 output plugin to push logs to S3. Now we want to save logs on S3 under a custom path like ProjectName/Env/Service. For this, we created the S3 output config below:

<store>
  @type s3
  s3_bucket test
  s3_region us-east-1
  store_as gzip_command
  path logs
  s3_object_key_format %{path}/${project}/${env}/${service}/%Y/%m/%d/%{time_slice}_%{index}.%{file_extension}
  <buffer tag,time,project,env,service>
    @type file
    path /var/log/td-agent/container-buffer-s3
    timekey 300 # 5 minutes
    timekey_wait 1m
    timekey_use_utc true
    chunk_limit_size 256m
  </buffer>
  time_slice_format %Y%m%d%H
</store>
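
If the placeholders resolve, a chunk buffered with project=xyz, env=prod and service=web (the service value is again an assumed example) should be uploaded under an object key like:

logs/xyz/prod/web/2021/08/07/2021080717_0.gz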

Unfortunately, this is not working for us; we are getting the warning below:

{"time":"2021-08-07 17:59:49","level":"warn","message":"chunk key placeholder 'project' not replaced. template:logs/${project}/${env}/${service}/%Y/%m/%d/%{time_slice}_%{index}.gz","worker_id":0}

Looking forward to any guidance or suggestions on this.


1 Answer

This config is correct and it's working for us:

<store>
  @type s3
  s3_bucket test
  s3_region us-east-1
  store_as gzip_command
  path logs
  s3_object_key_format %{path}/${project}/${env}/${service}/%Y/%m/%d/%{time_slice}_%{index}.%{file_extension}
  <buffer tag,time,project,env,service>
    @type file
    path /var/log/td-agent/container-buffer-s3
    timekey 300 # 5 minutes
    timekey_wait 1m
    timekey_use_utc true
    chunk_limit_size 256m
  </buffer>
  time_slice_format %Y%m%d%H
</store>
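
Worth noting: ${project}, ${env} and ${service} in s3_object_key_format are only substituted when those keys are listed as chunk keys in the <buffer> directive and actually exist in the buffered records. If the warning persists, a temporary stdout filter (a debugging sketch, placed after the record_transformer filter) can confirm the keys are really there:

<filter docker.**>
  @type stdout
</filter>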