Let there be an event space ES. Let there be some sets of objects OS[]. The events of selecting any two different objects are mutually exclusive.
Now, assume that the size of each set is determined by a number X[i] assigned to it: the size rises exponentially with that number.
The base B used for exponentiation could be Euler's number e, due to its nice properties, but let's assume that might not be the case.
Now, we are after calculating the probability of selecting, uniformly at random, a member of any given set, while keeping in mind that the cardinality of each set might be very large.
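Concretely, assuming set OS[i] has exactly B^X[i] members (my restatement of the setup above), the probability of drawing a member of set i is

P[i] = B^X[i] / (B^X[1] + B^X[2] + ... + B^X[n]),

i.e. the softmax of the X values taken with base B.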
Once the sequence of probabilities is known, it is used to compute P[i]*C for some constant C.
I wonder whether this can be optimized or approximated for very large exponents, i.e. computed with low memory consumption, so that it is actually implementable.
A related question I found is here, but it seems to tackle only the complementary (opposite) probabilities.
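For reference, here is a minimal sketch of the standard max-shift (log-sum-exp) trick in Python; the function name and the two-pass structure are my own illustration, assuming a list X of exponents and a base B > 1:

import math

def stable_probs(X, B):
    # Computes P[i] = B**X[i] / sum_j B**X[j] without ever forming B**X[i].
    # Shifting every exponent by max(X) makes the largest term B**0 = 1;
    # terms whose gap to the maximum is huge underflow to 0.0, which is
    # the correct double-precision approximation of their true value.
    lnB = math.log(B)
    M = max(X)                                # pass 1: find the largest exponent
    w = [math.exp((x - M) * lnB) for x in X]  # every weight lies in (0, 1]
    Z = sum(w)                                # Z >= 1, so no overflow is possible
    return [wi / Z for wi in w]

Memory use is O(n) for the weight list; if even that is too much, the same computation works in two streaming passes over X (one for the max, one for the running sum Z), after which each P[i] is a single exp and a division.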
// Numerical example:
// A, C - constants, natural numbers (A plays the role of the base B above)
// exponents
X[1] = 3432342332;
X[2] = 55438849;
X[3] = 34533;
// probabilities
P[1] = A^X[1] / (A^X[1] + A^X[2] + A^X[3]);
P[2] = A^X[2] / (A^X[1] + A^X[2] + A^X[3]);
P[3] = A^X[3] / (A^X[1] + A^X[2] + A^X[3]);
// results
R[1] = P[1] * C;
R[2] = P[2] * C;
R[3] = P[3] * C;
Excel fails once the exponents exceed a few hundred, because A^X overflows the double-precision range.
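Applied to the example above (with hypothetical values A = 2 and C = 100, since the question only says they are natural numbers), the gaps between the exponents are in the billions, so every weight except the largest underflows:

X = [3432342332, 55438849, 34533]
A, C = 2, 100
P = stable_probs(X, A)
R = [p * C for p in P]
print(P, R)   # P == [1.0, 0.0, 0.0] to double precision, hence R == [100.0, 0.0, 0.0]
# The gap between the top two exponents is 3376903483, so the second weight
# would be 2**-3376903483 -- far below the smallest representable double.

In other words, whenever the largest exponent dominates the others by more than roughly 745/ln(A), R1 ≈ C and the remaining results are 0 to machine precision.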