I've tried searching for answers, but couldn't find one that exactly matches my problem.
I'm writing a stochastic simulator of biological systems, where the outcome is a scatter-plot-style time series: concentration levels sampled at random points in time. I would now like to compute the average time series over multiple simulation runs, and I'm unsure how to proceed, since up to 500 runs, each with several thousand measurements, can be expected.
Naturally, I could bucket the time axis into fixed intervals (probably losing some precision) or interpolate the missing measurements onto a common grid. But what is the preferred method in my case?
This has to be implemented in Java, and I would prefer a citation to a paper that explains the method.
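For what it's worth, here is roughly what I have in mind for the interpolation option: resample every run onto a shared regular grid via linear interpolation, then average point-wise. This is only a minimal sketch to make the question concrete; the names `Run`, `interpolate`, and `average` are placeholders I made up, not an existing API:

```java
import java.util.Arrays;

public class SeriesAverager {

    /** One simulation run: parallel arrays of sorted time points and concentrations. */
    static class Run {
        final double[] times;
        final double[] values;
        Run(double[] times, double[] values) { this.times = times; this.values = values; }
    }

    /** Linearly interpolate a run's concentration at time t (clamped at the ends). */
    static double interpolate(Run run, double t) {
        int i = Arrays.binarySearch(run.times, t);
        if (i >= 0) return run.values[i];          // exact hit
        int hi = -i - 1;                           // insertion point
        if (hi == 0) return run.values[0];                        // before first sample: clamp
        if (hi == run.times.length) return run.values[hi - 1];   // after last sample: clamp
        int lo = hi - 1;
        double frac = (t - run.times[lo]) / (run.times[hi] - run.times[lo]);
        return run.values[lo] + frac * (run.values[hi] - run.values[lo]);
    }

    /** Average all runs on a regular grid of n points spanning [t0, t1]. */
    static double[] average(Run[] runs, double t0, double t1, int n) {
        double[] mean = new double[n];
        for (int k = 0; k < n; k++) {
            double t = t0 + k * (t1 - t0) / (n - 1);
            double sum = 0;
            for (Run run : runs) sum += interpolate(run, t);
            mean[k] = sum / runs.length;
        }
        return mean;
    }
}
```

With 500 runs of a few thousand points each, this is about 500 × n binary searches for an n-point grid, which seems cheap enough, but I don't know whether linear interpolation is statistically sound here compared to bucketing.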
Thanks!