Amazon EMR, Apache Spark 2.3, Apache Kafka, ~10 million records per day.
Apache Spark processes the events in 5-minute batches. Roughly once per day the worker nodes die, and AWS automatically reprovisions them. Reviewing the log messages, it looks like the nodes are running out of disk space, even though each node has about 1 TB of storage.
Has anyone run into storage space issues in cases where there should be more than enough?
My suspicion is that log aggregation is failing to copy the logs to the S3 bucket properly; as far as I can tell, that should be done automatically by the Spark/EMR setup.
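In case it's relevant, this is the executor-log-rolling configuration I was considering to stop logs growing unbounded on the workers while I investigate. This is just a sketch in PySpark; the property names are from the Spark docs, but the interval and retention values are my guesses:

```python
from pyspark import SparkConf
from pyspark.sql import SparkSession

# Sketch only: roll executor stderr/stdout logs by time so individual log
# files cannot grow without bound on the worker nodes. Interval and
# retention values below are guesses, not tested settings.
conf = (
    SparkConf()
    .set("spark.executor.logs.rolling.strategy", "time")         # roll based on time
    .set("spark.executor.logs.rolling.time.interval", "hourly")  # roll every hour
    .set("spark.executor.logs.rolling.maxRetainedFiles", "24")   # keep roughly one day
)

spark = SparkSession.builder.config(conf=conf).getOrCreate()
```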
What information should I provide to help diagnose this issue?
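For example, I could SSH into a core node before it dies and capture disk usage per mount point with something like the snippet below. This is a quick sketch; `/mnt` and `/mnt1` are the default EMR instance-store mount points and may differ on my cluster:

```python
import shutil

# Sketch: report usage of the mount points where EMR typically keeps YARN
# container logs and Spark shuffle/scratch data. Adjust the list to match
# the actual mounts on the node (compare against `df -h`).
MOUNTS = ["/", "/mnt", "/mnt1"]

for mount in MOUNTS:
    try:
        total, used, free = shutil.disk_usage(mount)
    except FileNotFoundError:
        continue  # this mount point does not exist on the node
    pct = used / total * 100
    print(f"{mount}: {used / 1e9:.1f} GB used of {total / 1e9:.1f} GB ({pct:.0f}%)")
```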
Thank you in advance!