Interesting problem. Here is a suggestion, which may be quite difficult to implement depending on how mathematically inclined you are.
Note that I trade speed for space: what I suggest is likely to be quite heavy computationally (but this is to be tested against real data).
First, use a functional approach. A probability distribution is a probability measure:
    #include <functional>

    struct Distribution
    {
        virtual ~Distribution() {}
        // The expectation of f under the measure.
        virtual double integrate(std::function<double(double)> f) = 0;
    };
This way, you abstract away from the samples that you produce, since you do not want to store them. Convince yourself that you can do pretty much anything with an "integrate" method.
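For instance (these helpers are mine, not part of the suggestion, and assume only the interface above), moments and interval probabilities fall straight out of integrate:

    // Illustrative helpers built purely on the abstract interface.
    double mean(Distribution& d)
    {
        return d.integrate([](double x) { return x; });
    }

    double variance(Distribution& d)
    {
        double m = mean(d);
        return d.integrate([m](double x) { return (x - m) * (x - m); });
    }

    // P(a <= X <= b) is just the integral of an indicator function.
    double probability_between(Distribution& d, double a, double b)
    {
        return d.integrate([=](double x) { return (a <= x && x <= b) ? 1.0 : 0.0; });
    }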
Of course, with the explicit samples, you do something like
    #include <deque>

    struct SampledDistribution : Distribution
    {
        double integrate(std::function<double(double)> f) override
        {
            // Monte Carlo estimate: average f over the stored samples.
            double acc = 0;
            for (double x: samples) acc += f(x);
            return acc / samples.size();
        }

        std::deque<double> samples;
    };
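A quick usage sketch (again mine, using the helpers above and a synthetic data source in place of your real samples):

    #include <iostream>
    #include <random>

    int main()
    {
        std::mt19937 gen(42);
        std::normal_distribution<double> source(0.0, 1.0); // stand-in for real data

        SampledDistribution d;
        for (int i = 0; i < 100000; ++i) d.samples.push_back(source(gen));

        std::cout << mean(d) << ' '                         // ~0, up to Monte Carlo error
                  << variance(d) << ' '                      // ~1
                  << probability_between(d, -1, 1) << '\n';  // ~0.68
    }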
Now, the storage part:
The usual representation -- a histogram array -- is undesirable mainly because of the trade-off between quantization/resolution and space. I imagine there must be a method of representation that adaptively varies the bin size depending on local "complexity".
The traditional method is wavelets. You produce coefficients with calls to integrate, which you can serialize; you throw a coefficient away if the variance of the integral estimator it produces is high. Then, to deserialize, you produce a Distribution object whose integrate method performs the integration against the wavelets. That integration can be done with your favorite quadrature method. I stay deliberately vague here because the actual implementation depends on the wavelet family you choose (smooth, compactly supported, orthogonal or not, etc.). You will need to dive into the literature for details anyway.
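To make the shape of this concrete, here is a rough sketch. It assumes Haar wavelets on [0, 1] and a naive midpoint quadrature, and it truncates by coefficient magnitude rather than by estimator variance; all names here are mine, and a real implementation would use a smoother family and a proper Gauss rule.

    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Coefficient { int j, k; double value; };

    // Haar wavelet psi_{j,k}(x) = 2^{j/2} psi(2^j x - k), supported on [k/2^j, (k+1)/2^j).
    double haar(int j, int k, double x)
    {
        double y = std::ldexp(x, j) - k; // 2^j * x - k
        if (y < 0.0 || y >= 1.0) return 0.0;
        return (y < 0.5 ? 1.0 : -1.0) * std::pow(2.0, 0.5 * j);
    }

    // Serialize: project onto the wavelets with integrate(), keep the largest coefficients.
    std::vector<Coefficient> encode(Distribution& d, int levels, std::size_t keep)
    {
        std::vector<Coefficient> coeffs;
        for (int j = 0; j < levels; ++j)
            for (int k = 0; k < (1 << j); ++k)
                coeffs.push_back({j, k, d.integrate([=](double x) { return haar(j, k, x); })});
        std::sort(coeffs.begin(), coeffs.end(),
                  [](const Coefficient& a, const Coefficient& b)
                  { return std::abs(a.value) > std::abs(b.value); });
        if (coeffs.size() > keep) coeffs.resize(keep);
        return coeffs;
    }

    // Deserialize: a Distribution whose integrate() does quadrature against the
    // reconstructed density. The scaling term is exactly 1 on [0, 1] because a
    // probability density integrates to one.
    struct WaveletDistribution : Distribution
    {
        std::vector<Coefficient> coeffs;

        double density(double x) const
        {
            double p = 1.0;
            for (const Coefficient& c : coeffs) p += c.value * haar(c.j, c.k, x);
            return p;
        }

        double integrate(std::function<double(double)> f) override
        {
            const int n = 1000; // midpoint rule; replace with Gauss quadrature
            double acc = 0;
            for (int i = 0; i < n; ++i)
            {
                double x = (i + 0.5) / n;
                acc += f(x) * density(x);
            }
            return acc / n;
        }
    };

Storing the distribution then amounts to writing out the surviving (j, k, value) triples.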
The point here is that you usually need only very few wavelets to represent a smooth function with few features (say a few peaks, regularly shaped otherwise), unlike more "regular" finite-element representations (a histogram is a particular kind of finite-element representation). The wavelet representation adapts itself to the features of the function it represents, whatever their location or size. You can moreover decide how many coefficients to keep, and hence control the compression ratio.
Also, the 0.001 figure is quite a high number: I suspect you'll need only a handful of coefficients.
The tradeoff lies in which wavelet class you use: very smooth distributions will likely be well represented with smooth wavelets, but compactly supported wavelets may be easier to integrate against, etc. Experiment. Note that you don't need "wavelet transform" packages here: only the explicit representation of wavelet functions, and quadrature routines (try Gauss-XXX procedures for reconstruction, or something high order).
I'd favor wavelets which are defined in the Fourier domain (like Lemarie wavelets), since their value and derivatives at zero in Fourier space are known; this allows you to enforce the constraints on the distributions: probability measures must integrate to one, and you may happen to know the expected value or variance beforehand.
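Concretely, writing the Fourier transform of the density as $\hat p(\omega) = \int e^{-i\omega x}\, p(x)\, dx$ (my notation), those constraints all sit at $\omega = 0$:

$$\hat p(0) = \int p(x)\,dx = 1, \qquad \hat p'(0) = -i\,\mathbb{E}[X], \qquad \hat p''(0) = -\mathbb{E}[X^2].$$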
Also, you may want to change variables so that you only deal with functions on, say, [0, 1]. There is a copious literature on wavelets on the interval.
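If the support is a known interval [a, b], a small adapter (my sketch; for unbounded support you would compose with a smooth bijection onto (0, 1) instead) does that change of variables behind the same interface:

    // Hypothetical adapter: view a distribution supported on [a, b] as one on
    // [0, 1] via u = (x - a) / (b - a), so the wavelet code only ever sees [0, 1].
    struct RescaledDistribution : Distribution
    {
        Distribution& inner;
        double a, b;

        RescaledDistribution(Distribution& d, double a_, double b_) : inner(d), a(a_), b(b_) {}

        double integrate(std::function<double(double)> f) override
        {
            // Expectation of f(u) under the pushed-forward measure.
            return inner.integrate([&](double x) { return f((x - a) / (b - a)); });
        }
    };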