
My model system: an isotropically diffusing particle that undergoes stochastic switching between various diffusion coefficients (D1 <-> D2 <-> D3 <-> ...).

Since the displacements along a trajectory of this hypothetical particle can be modeled as drawn from a Gaussian distribution, it seems natural to use a mixture of Gaussians + model selection in order to extract information about the number of different "states" or coefficients of diffusion present, which would manifest as different components in the mixture.

There is quite a lot of code out there for performing EM on GMMs with an unconstrained covariance matrix. In my particular application, however, isotropic diffusion means that each component's covariance matrix is not only diagonal but has all diagonal entries equal (i.e., of the form σ²I), since the rate of diffusion is the same in the x, y, and z directions.

Can anyone lend guidance as to how the expectation and maximization steps will change in this special case?
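To make the setup concrete, here is a minimal sketch (Python/NumPy, with made-up values for the diffusion coefficients, frame interval, and switching probability — all hypothetical, just to illustrate the data-generating process): each displacement component is Gaussian with variance 2·D·dt, where D switches stochastically between states.

```python
import numpy as np

# Hypothetical parameters: two diffusive states with coefficients D1, D2.
# The variance of each displacement component is 2 * D * dt.
rng = np.random.default_rng(0)
D = np.array([0.1, 1.0])   # assumed diffusion coefficients (arbitrary units)
dt = 0.01                  # assumed frame interval
n_steps = 5000

# Markov switching between the two states (assumed symmetric switch probability).
p_switch = 0.02
states = np.empty(n_steps, dtype=int)
states[0] = 0
for t in range(1, n_steps):
    states[t] = 1 - states[t - 1] if rng.random() < p_switch else states[t - 1]

# Isotropic 3D displacements: each component is drawn from N(0, 2 * D * dt),
# so the per-step covariance is (2 * D * dt) * I.
sigma = np.sqrt(2 * D[states] * dt)
displacements = rng.normal(0.0, sigma[:, None], size=(n_steps, 3))
```

The mixture-fitting question below is about recovering the distinct variances 2·D·dt from `displacements` alone.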

Alex Riley
Zach Barry

2 Answers


Since EM is iterative, one option is to project each component's covariance back to isotropic form after every iteration. You still end up with a valid isotropic Gaussian mixture at each step, and the algorithm otherwise proceeds as normal.
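As a sketch of that projection step (NumPy; `isotropize` is a name I made up), after an ordinary M step you would replace each full covariance estimate Σ with σ²I, where σ² = trace(Σ)/d is the closest isotropic matrix in the least-squares sense:

```python
import numpy as np

def isotropize(cov):
    """Project a full covariance matrix onto isotropic form sigma^2 * I,
    with sigma^2 = trace(cov) / d (the mean of the per-axis variances)."""
    d = cov.shape[0]
    sigma2 = np.trace(cov) / d
    return sigma2 * np.eye(d)

# Example: a full covariance produced by an ordinary M step...
cov = np.array([[2.0, 0.3, 0.0],
                [0.3, 1.0, 0.1],
                [0.0, 0.1, 3.0]])
# ...is replaced by its isotropic projection before the next E step.
iso = isotropize(cov)  # trace is 6.0, so this is 2.0 * I
```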

A smarter way would be to fit isotropic Gaussians directly instead of doing the regular full-covariance fit. This can get tricky with off-the-shelf code, since you may have to derive and implement the constrained maximum-likelihood updates yourself.

jmerkow

Well, if you want information on the math itself, this link explains it all: Section 6 treats the specific case of an isotropic covariance matrix, and the formulas are given at the end of page 7.

In short, the E step is the same: you compute the responsibilities as usual. In the M step you also compute the centers as usual, but the covariance update is slightly different.

This happens because the probability density function appears in the log-likelihood. For an isotropic distribution, the density can be simplified before you differentiate it, which yields a different update for the covariance parameter.
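To sketch what that looks like in code (NumPy; naive random initialization, so a real implementation would want k-means-style seeding and multiple restarts), the only change from a standard GMM fit is that each component keeps a single scalar variance, updated as sigma_k² = Σ_n γ_nk ‖x_n − μ_k‖² / (d · N_k):

```python
import numpy as np

def em_isotropic_gmm(X, K, n_iter=100, seed=0):
    """EM for a Gaussian mixture with isotropic covariances sigma_k^2 * I.
    A sketch under the assumptions above, not a production implementation."""
    rng = np.random.default_rng(seed)
    N, d = X.shape
    mu = X[rng.choice(N, K, replace=False)]   # means (naive init: random points)
    sigma2 = np.full(K, X.var())              # one scalar variance per component
    pi = np.full(K, 1.0 / K)                  # mixing weights

    for _ in range(n_iter):
        # E step: identical to the standard GMM case -- responsibilities from
        # the component densities, here N(x | mu_k, sigma_k^2 * I).
        sq = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)  # (N, K)
        log_p = (np.log(pi) - 0.5 * d * np.log(2 * np.pi * sigma2)
                 - sq / (2 * sigma2))
        log_p -= log_p.max(axis=1, keepdims=True)  # numerical stability
        gamma = np.exp(log_p)
        gamma /= gamma.sum(axis=1, keepdims=True)

        # M step: weights and means exactly as usual...
        Nk = gamma.sum(axis=0)
        pi = Nk / N
        mu = (gamma.T @ X) / Nk[:, None]
        # ...but the covariance update collapses to one scalar per component:
        # sigma_k^2 = sum_n gamma_nk * ||x_n - mu_k||^2 / (d * N_k)
        sq = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        sigma2 = (gamma * sq).sum(axis=0) / (d * Nk)

    return pi, mu, sigma2
```

Note how the squared distances are summed over all d coordinates and the denominator picks up the extra factor of d, which is exactly the difference from the per-axis (diagonal) update.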

Stephen Rauch