I have an application that writes Spark DataFrame data into Hive.
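The write itself is roughly along these lines (a minimal sketch for context only; the input path, table name, and write mode are placeholders, not the actual job code):

```scala
import org.apache.spark.sql.{SaveMode, SparkSession}

// Simplified stand-in for the job; the real application does more processing upstream.
object HiveWriteJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("dataframe-to-hive")
      .enableHiveSupport()       // needed so saveAsTable targets the Hive metastore
      .getOrCreate()

    // Placeholder source; in the real job the DataFrame comes from earlier stages.
    val df = spark.read.parquet("/data/input")

    df.write
      .mode(SaveMode.Append)            // assumed write mode
      .saveAsTable("mydb.target_table") // placeholder Hive table
  }
}
```

Executor cores and memory are set through the usual spark-submit resource flags, sized as described below for each run.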
The first time, the application used 100 cores and 10 GB of memory, producing this OutOfMemory error after leaking a lot of 32 MB chunks.
After that, I ran the application with 100 cores and 20 GB of memory, obtaining a different leak size (64 MB) followed by the same OutOfMemory error:
Can anyone help me understand this behaviour?