I have a computationally intensive model written in Python, with array calculations involving over 200,000 cells for over 4,000 time steps. There are two arrays: one a fine grid array, the other a coarser grid mesh. Information from the fine grid array is used to inform the characteristics of the coarse grid mesh. When the program runs, it uses only 1% of the CPU but maxes out the RAM (8 GB), and it takes days to finish. What would be the best way to start tackling this? Would GPU processing be a good idea, or do I need to find a way to offload some of the completed calculations to the HDD?
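To make the shape of the problem concrete, here is a heavily simplified sketch of the kind of loop I am running. The array sizes roughly match mine, but the variable names, the update rule, and the per-step history list are placeholders to show the structure, not my actual code:

```python
import numpy as np

# Placeholder sizes: ~200,000 fine cells, a much smaller coarse mesh,
# and ~4,000 time steps, roughly matching my real model.
N_FINE = 200_000
N_COARSE = 2_000
N_STEPS = 4_000
CELLS_PER_COARSE = N_FINE // N_COARSE

fine = np.random.rand(N_FINE)    # fine-grid state
coarse = np.zeros(N_COARSE)      # coarse-mesh state

history = []                     # per-step results kept in memory for output

for step in range(N_STEPS):
    # Update the fine grid (stand-in for the real calculation).
    fine = 0.99 * fine + 0.01 * np.roll(fine, 1)

    # Aggregate fine-grid information up to the coarse mesh.
    coarse = fine.reshape(N_COARSE, CELLS_PER_COARSE).mean(axis=1)

    # Keeping a copy of every step is the kind of accumulation I suspect
    # is filling the RAM: 4,000 x 200,000 x 8 bytes is already ~6.4 GB.
    history.append(fine.copy())
```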
I am just trying to find avenues of thought that might lead towards a solution. Is my model simply pulling too much data into RAM, resulting in slow calculations?