
I have deployed my own Flink setup on AWS ECS: one service for the JobManager and one service for the TaskManagers. I am running one ECS task for the JobManager and three ECS tasks for the TaskManagers.

I have a batch-style job that I upload through the Flink REST API every day with new arguments. Each time I submit it, disk usage grows by roughly 600 MB. Checkpoints are written to S3, and I have set historyserver.archive.clean-expired-jobs to true.

Since I am running on ECS, I have not been able to work out why the disk usage grows on every jar upload and execution.

Which Flink configuration parameters should I look at to keep the disk usage from growing on every new job upload?

scoder

1 Answer


Try these configuration options:

blob.service.cleanup.interval:

https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#blob-service-cleanup-interval

historyserver.archive.retained-jobs: https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#historyserver-archive-retained-jobs
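
Since link-only answers can go stale, here is a minimal flink-conf.yaml sketch covering the two options above, plus the history-server option already mentioned in the question. The values are only illustrative, not recommendations; check the documentation for your Flink version (per the linked docs, the blob cleanup interval is given in seconds), and restart the JobManager/HistoryServer after changing them, since flink-conf.yaml is read at startup.

    # Interval (in seconds) at which unreferenced blobs, such as uploaded job jars,
    # are cleaned up from the blob store (default is 3600, i.e. one hour).
    blob.service.cleanup.interval: 3600

    # Keep only the most recent job archives; -1 (the default) keeps all of them.
    historyserver.archive.retained-jobs: 5

    # Remove archives of jobs that no longer exist in historyserver.archive.fs.dir.
    historyserver.archive.clean-expired-jobs: true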

aromal
  • Links can break sometimes. – sukalogika Nov 22 '21 at 05:24
  • While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference. Link-only answers can become invalid if the linked page changes. - [From Review](/review/late-answers/30401284) – Robert Nov 23 '21 at 15:10