I have run into this problem several times on my production machine. For whatever reason, the AWS logs agent gets desynchronized and starts flooding its log file with the following error messages:
2018-09-03 17:51:17,251 - cwlogs.push.reader - WARNING - 18880 - Thread-333 - Fall back to previous event time: {'timestamp': 1535992848000, 'start_position': 12956454L, 'end_position': 12956574L}, previousEventTime: 1535992848000, reason: timestamp could not be parsed from message.
2018-09-03 17:51:17,251 - cwlogs.push.reader - WARNING - 18880 - Thread-333 - Fall back to previous event time: {'timestamp': 1535992848000, 'start_position': 12956574L, 'end_position': 12956705L}, previousEventTime: 1535992848000, reason: timestamp could not be parsed from message.
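For context, the warning says the timestamp could not be parsed from the message, and that parsing is driven by the datetime_format option of the per-stream stanza in awslogs.conf. My stanza looks roughly like the following (file path, log group name and format are placeholders, not my exact values):

[/var/log/myapp/app.log]
file = /var/log/myapp/app.log
log_group_name = my-app-logs
log_stream_name = {instance_id}
datetime_format = %Y-%m-%d %H:%M:%S
buffer_duration = 5000
initial_position = start_of_file

When a line does not match datetime_format, the agent falls back to the previous event time, which matches the "reason" field in the warnings above.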
... at a rate of about 10 per millisecond, i.e. roughly 10k log entries per second (~36M log lines in a single hour). Given the line size, it is no surprise I ended up with several GB of logs in only a few hours.
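A back-of-envelope check of those numbers (the average line size is my rough estimate, not a measured value):

# rough estimate of the log volume produced by the warning flood
entries_per_ms = 10
entries_per_hour = entries_per_ms * 1000 * 3600        # ~36,000,000 lines/hour
approx_line_bytes = 250                                 # each WARNING line is roughly this size
gb_per_hour = entries_per_hour * approx_line_bytes / 1e9
print(entries_per_hour, round(gb_per_hour, 1))          # 36000000 lines, ~9 GB per hour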
Has anyone had the same issue, and does anyone have an explanation or a way to counteract this bug?
I don't know yet if it is related, but another error caused my disk to hit its inode limit, so a whole bunch of apps and processes relying on creating new files were probably failing at the same time. Would that be enough to drive the awslogs agent crazy?
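In case it helps, this is roughly how I checked inode usage afterwards (the mount point is a placeholder for the affected volume; df -i reports the same information):

# quick sketch to check whether a filesystem has run out of inodes
import os

st = os.statvfs("/")                      # replace "/" with the affected mount point
total_inodes = st.f_files
free_inodes = st.f_ffree
used_pct = 100 * (total_inodes - free_inodes) / total_inodes
print(f"inodes used: {used_pct:.1f}% ({total_inodes - free_inodes}/{total_inodes})")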