
I have a known matrix M (square of dimension D) and a parameter vector v (of length D) which is unknown to me and whose posterior distribution I am trying to estimate. My prior on v is that each of its components is standard normal.

For example, let's say that M looks like this:

[[1, -1, -1]
 [1,  0,  2]
 [1,  1, -1]]

My observed data is of the form R * M * v, where R is a "reduction" (non-square) matrix that effectively only allows me to observe some of the components of M * v. For example, R might look like this (keeping the first and second components of M * v):

[[1,  0,  0]
 [0,  1,  0]]

What's the right way to set up a problem like this in Stan?

8one6
  • A Stan program codes a log density equal to the posterior up to a constant. The standard way to do that is to code the log joint density of the data and parameters, which is equal to the posterior up to a constant by Bayes's rule. The joint log density is usually coded as the log prior plus log likelihood. You need to specify a likelihood for the observations to complete the Bayesian model. You have a prior p(v) and need a likelihood p(R * M * v | M, v) to complete the model. – Bob Carpenter Nov 04 '18 at 21:17
  • `p(R * M * v | M, v) = 1` since all `R` does is select a subset of the dimensions from `M * v`. – 8one6 Nov 05 '18 at 01:16
  • You need a likelihood at least for the parameters, i.e., `p(v | ...)`. Stan isn't a constraint solver---it's a statistical inference engine, so it needs some probabilistic model in which to operate. – Bob Carpenter Nov 06 '18 at 02:23
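
Following the comment thread: because `y = R * M * v` is deterministic, the model needs a likelihood to be a complete Bayesian model that Stan can sample. A common workaround is to assume a small Gaussian observation noise `sigma` (an assumption not in the original question; if the observations are truly exact, the posterior is a degenerate Gaussian on an affine subspace and HMC cannot sample it directly, so small `sigma` acts as a soft constraint). A minimal sketch under that assumption:

```stan
data {
  int<lower=1> D;              // dimension of v
  int<lower=1, upper=D> K;     // number of observed components
  matrix[K, D] R;              // known reduction matrix
  matrix[D, D] M;              // known square matrix
  vector[K] y;                 // observations, y = R * M * v
  real<lower=0> sigma;         // assumed observation noise scale (e.g. 1e-3)
}
transformed data {
  matrix[K, D] A = R * M;      // precompute the combined linear map
}
parameters {
  vector[D] v;                 // unknown parameter vector
}
model {
  v ~ std_normal();            // standard normal prior on each component of v
  y ~ normal(A * v, sigma);    // Gaussian likelihood completing the model
}
```

Since the prior is Gaussian and the map is linear, the exact posterior under Gaussian noise is itself multivariate normal and could also be computed in closed form; the Stan program above is useful when you later want to swap in a non-Gaussian prior or noise model.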

0 Answers