I'm working on a small project which requires changing the clothes (shirt/pants etc.) of a person in any 2D image they choose to upload. So somehow the edges need to be detected and the relevant areas filled with new patterns. I do see a lot of other complications, but let's assume only simple patterns have to be filled.

  1. For a web application, is it possible to do it in HTML5? Any other alternatives?

  2. For a standalone application, what kind of technology would be preferred, C++/Java?

Update

Based on Bart's comment:

  1. Any pointers like Bart's would be really useful
  2. Assumption: Clear traceable 'standing' human figure in a 2D image
  3. Since it's an image, there is no real-time scenario
  • Even though I would love to give you a complete answer, the question you're asking at the moment is IMHO the wrong one. The topic you're trying to tackle is a very very (did I say very?) difficult one. You might want to look up the work of Fraunhofer HHI on their "virtual mirror" for example. HTML5, C++ or Java are the least of your worries. I would advise you to first look through previous work to get a clear idea of the needed components and how their (and other) working solutions do their job. I feel you should then be able to ask a more targeted question. Sorry, but I hope that helps. – Bart Nov 30 '11 at 11:49
  • Based on your updates: You could take an approach similar to that of Zugara (and I have seen others as well). Instead of tracking anything, you simply force the user to "fit" within a certain region. Then you overlay your 2D garment images over that. A demo of what I mean (although for a live example) can be found [in this youtube video](http://www.youtube.com/watch?v=RYNYGyB2YFw&feature=related). It would simplify things considerably. – Bart Nov 30 '11 at 15:11
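
A minimal sketch of the overlay idea Bart describes, assuming OpenCV in C++ and a garment image that already has an alpha channel; the file names, anchor coordinates, and region size are hypothetical and the garment is assumed to fit inside the photo:

```cpp
// Sketch: alpha-composite a garment PNG over a fixed region of the user photo.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat photo = cv::imread("user_photo.jpg");                     // assumed input
    cv::Mat garment = cv::imread("shirt.png", cv::IMREAD_UNCHANGED);  // 4-channel BGRA
    if (photo.empty() || garment.empty() || garment.channels() != 4) return 1;

    // Fixed anchor where the user is asked to "fit" (assumed coordinates).
    const int x0 = 200, y0 = 150;
    if (x0 + garment.cols > photo.cols || y0 + garment.rows > photo.rows) return 1;
    cv::Mat target = photo(cv::Rect(x0, y0, garment.cols, garment.rows));

    // Per-pixel alpha blend: out = alpha * garment + (1 - alpha) * photo.
    for (int y = 0; y < garment.rows; ++y) {
        for (int x = 0; x < garment.cols; ++x) {
            cv::Vec4b g = garment.at<cv::Vec4b>(y, x);
            cv::Vec3b& p = target.at<cv::Vec3b>(y, x);
            float a = g[3] / 255.0f;
            for (int c = 0; c < 3; ++c)
                p[c] = cv::saturate_cast<uchar>(a * g[c] + (1.0f - a) * p[c]);
        }
    }

    cv::imwrite("composited.jpg", photo);
    return 0;
}
```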

1 Answer


Assumption: Clear traceable 'standing' human figure in 2d image

A way to do this is to require the user to take two pictures. One picture has the user in it; the other must be taken from the same camera position and orientation, but with the user stepping out of the frame.

Since both pictures will have the same background, you can compare the two images pixel by pixel and flag those pixels whose difference exceeds some threshold. Of course, the threshold must be chosen so that camera noise isn't detected as a difference. Once you have the collection of differing pixels, you can filter them and compute an approximate silhouette for the user from the pixels on the edge.
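
A minimal sketch of this background-subtraction step, assuming OpenCV in C++; the file names and the threshold value are hypothetical and would need tuning for the camera:

```cpp
// Sketch: difference two aligned photos, threshold, clean up, and extract the
// largest contour as an approximate silhouette of the person.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::Mat background = cv::imread("background.jpg");  // shot without the person
    cv::Mat withPerson = cv::imread("person.jpg");      // same camera position/orientation
    if (background.empty() || withPerson.empty()) return 1;

    // Per-pixel absolute difference between the two frames.
    cv::Mat diff;
    cv::absdiff(withPerson, background, diff);

    // Collapse to one channel and threshold; 30 is a placeholder value that
    // must be tuned so camera noise is not flagged as a difference.
    cv::Mat gray, mask;
    cv::cvtColor(diff, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, mask, 30, 255, cv::THRESH_BINARY);

    // Morphological open/close to filter speckle noise and fill small holes.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN, kernel);
    cv::morphologyEx(mask, mask, cv::MORPH_CLOSE, kernel);

    // The largest external contour approximates the person's silhouette.
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    size_t largest = 0;
    for (size_t i = 1; i < contours.size(); ++i)
        if (cv::contourArea(contours[i]) > cv::contourArea(contours[largest]))
            largest = i;

    cv::Mat silhouette = cv::Mat::zeros(mask.size(), CV_8UC1);
    if (!contours.empty())
        cv::drawContours(silhouette, contours, static_cast<int>(largest),
                         cv::Scalar(255), cv::FILLED);

    cv::imwrite("silhouette.png", silhouette);
    return 0;
}
```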

The above method can be simplified if you have control over the background: with a bluescreen you avoid needing a second, background-only picture.
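
Under that assumption, a chroma-key variant might look like the sketch below (again OpenCV in C++; the blue hue/saturation/value bounds are assumptions that must be tuned for the actual screen and lighting):

```cpp
// Sketch: with a controlled blue background, mask the person by colour range
// instead of by differencing two photos.
#include <opencv2/opencv.hpp>

int main() {
    cv::Mat img = cv::imread("person_on_bluescreen.jpg");  // assumed input
    if (img.empty()) return 1;

    cv::Mat hsv;
    cv::cvtColor(img, hsv, cv::COLOR_BGR2HSV);

    // Blue hue is roughly 100-130 on OpenCV's 0-179 hue scale (placeholder bounds).
    cv::Mat backgroundMask, personMask;
    cv::inRange(hsv, cv::Scalar(100, 80, 80), cv::Scalar(130, 255, 255), backgroundMask);
    cv::bitwise_not(backgroundMask, personMask);

    // Clean up speckles so the silhouette edge is smooth.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(5, 5));
    cv::morphologyEx(personMask, personMask, cv::MORPH_OPEN, kernel);
    cv::morphologyEx(personMask, personMask, cv::MORPH_CLOSE, kernel);

    cv::imwrite("person_mask.png", personMask);
    return 0;
}
```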

– Miguel Grinberg