
I am currently working on fitting a morphable model to a given image (2D image to 3D model), but I am stuck on the fitting algorithm.
Most of the papers I have read rely on optimization techniques that are hard for me to implement, such as SNO (Blanz '99, Blanz '03), MFF, and so on.
Has anyone implemented one of these methods? Can you share the code or something similar?

Two more questions:

  1. How do I calculate the derivatives of the parameters, especially the rendering parameters?
  2. Which fitting algorithm is good and easy to implement?
genpfault
user2597823
  • OK, so just a couple of questions here: `1) What is your model? How is it constructed?` For example, Blanz et al. arrive at a model based on a number of shape + texture coefficients, derived from a database of scans. They use this, plus 22 rendering parameters, to fit to probe images. As ever, this is phrased as a cost function and solved by non-linear optimisation (as you mention: SNO). `2) What are you actually trying to achieve by fitting the model?` Face detection? Recognition? Tracking? Expressions? In other words: are there temporal, or other scene/variational, elements to contend with? – timlukins Jul 02 '14 at 10:33
  • Also: what language/platform are you using? Are you constrained to using a particular one? – timlukins Jul 02 '14 at 10:35

1 Answer


Update...

If you're really just after some code - perhaps these specific face fitting/tracking libraries might be closer to what you are after (although I know they tend more towards Active Appearance Models...)

I'm afraid that I don't have code for exactly what you're trying (it's been 7 years since I did any of this stuff and things have moved on somewhat. Good luck!)

...

Model issues aside - it sounds to me like you know what you are doing, and just need to express it in a form that you can use with any number of optimisation libraries out there...

I'm sure you probably know all this - but for this particular problem, short of asking the authors themselves for the code or implementing it yourself, you are left with existing libraries. The issue then is often tailoring your model to work with them (hence my earlier questions).
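To make "express it as a cost function" concrete, here is a minimal sketch using a toy linear shape model and `scipy.optimize.minimize`. All names and numbers here (`mean_shape`, `basis`, the pinhole projection, the number of coefficients) are illustrative stand-ins, not Blanz & Vetter's actual model or their SNO algorithm - a real fit would also include pose, lighting, and the other rendering parameters in the cost.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for a morphable model: shape = mean + basis @ alpha.
# The names and sizes here are illustrative, not from any specific paper.
rng = np.random.default_rng(0)
n_vertices = 50
mean_shape = rng.normal(size=(n_vertices, 3))
basis = rng.normal(size=(n_vertices * 3, 5))  # 5 shape coefficients

def reconstruct(alpha):
    return mean_shape + (basis @ alpha).reshape(n_vertices, 3)

def project(points3d, f=500.0):
    # Trivial pinhole projection; a real fit would also optimise pose,
    # illumination and the other rendering parameters.
    z = points3d[:, 2] + 10.0  # push the shape in front of the camera
    return f * points3d[:, :2] / z[:, None]

# Synthesise "observed" 2D landmarks from known coefficients.
true_alpha = np.array([0.5, -0.3, 0.2, 0.0, 0.1])
observed_2d = project(reconstruct(true_alpha))

def cost(alpha):
    # Sum of squared 2D landmark residuals, plus a small prior on the
    # coefficients (analogous in spirit to the Mahalanobis prior used
    # in morphable-model fitting).
    residual = project(reconstruct(alpha)) - observed_2d
    return np.sum(residual ** 2) + 1e-3 * np.sum(alpha ** 2)

result = minimize(cost, x0=np.zeros(5), method="L-BFGS-B")
```

Once the problem is in this shape, swapping in a different optimiser (or a different library from the list below) is just a matter of changing the last line.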

e.g.

http://www.gnu.org/software/gsl/manual/html_node/Multimin-Algorithms.html

https://software.sandia.gov/opt++/

In turn used by many higher-level libraries, such as...

http://docs.scipy.org/doc/scipy/reference/optimize.html

http://stat.ethz.ch/R-manual/R-devel/library/stats/html/optim.html

http://weka.sourceforge.net/doc.dev/weka/core/Optimization.html

etc.
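On your first question (derivatives of the rendering parameters): most of the libraries above will approximate the gradient numerically if you don't supply one, and you can do the same by hand with central differences when the renderer has no convenient analytic derivative. A minimal sketch (the function names here are my own, not from any of those libraries):

```python
import numpy as np

def numerical_gradient(f, params, eps=1e-6):
    # Central-difference approximation of df/dparams. This is the
    # standard fallback when analytic derivatives of the rendering
    # pipeline are impractical to derive.
    params = np.asarray(params, dtype=float)
    grad = np.empty_like(params)
    for i in range(params.size):
        step = np.zeros_like(params)
        step[i] = eps
        grad[i] = (f(params + step) - f(params - step)) / (2.0 * eps)
    return grad

# Quick sanity check against a function with a known gradient.
f = lambda p: np.sum(p ** 2)                 # analytic gradient is 2 * p
g = numerical_gradient(f, [1.0, -2.0, 0.5])
```

Note the cost: each gradient evaluation takes two renders per parameter, which is exactly why papers like Blanz & Vetter go to the trouble of stochastic/analytic schemes when the parameter count gets large.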

I think I can anticipate your core problem: many of these "off-the-shelf" methods will not deliver the best fit, since the number of parameters is pretty big and (as you hint at) they are hard to factor.

As is so often the case with any form of model-fitting in Computer Vision, key advances are linked to fundamentally better optimisation (algorithms) and representation (data structures/models).

So you could look at more experimental libraries, as opposed to the established ones above:

http://deeplearning.net/software/pylearn2/

http://ab-initio.mit.edu/wiki/index.php/NLopt

timlukins