I want to track moving people in real time on an SoC, so I use simple frame differencing (Image(n) - Image(n-1)) to extract the foreground objects because of its low computational overhead. After extracting the foreground, matching is used to find the object. The approach works well in most cases. However, there are two conditions that produce discontinuous edges and cause the matching to fail (a minimal sketch of my differencing step follows the list below):
- when people move slowly,
- when people have a color (or, more accurately, intensity) similar to the background.
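
For reference, here is a minimal sketch of the differencing step I described, assuming OpenCV in Python; the camera index and the threshold value of 25 are placeholders rather than my actual settings:

```python
import cv2

cap = cv2.VideoCapture(0)              # camera index 0 is a placeholder
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # |Image(n) - Image(n-1)|, then binarize to get the foreground mask
    diff = cv2.absdiff(gray, prev_gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    prev_gray = gray
    cv2.imshow("foreground mask", mask)
    if cv2.waitKey(1) == 27:           # press Esc to stop
        break

cap.release()
cv2.destroyAllWindows()
```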
I have tried lowering the frame-differencing threshold, but that introduces unwanted edges elsewhere and thickens the edges too much. I also tried dilation and closing (sketched below); the edges became more continuous, but still not continuous enough for the matching to succeed.
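
Here is roughly what my morphological step looks like, again assuming OpenCV; the 5x5 elliptical kernel and the file name are illustrative placeholders, not my exact configuration:

```python
import cv2

# `mask` is the binary foreground mask from the differencing sketch above;
# loading it from a file here just keeps the snippet self-contained
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
dilated = cv2.dilate(mask, kernel, iterations=1)             # thickens the edge fragments
closed = cv2.morphologyEx(dilated, cv2.MORPH_CLOSE, kernel)  # bridges small gaps between them
```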
So I am wondering: is there a low-computational-overhead way to overcome these discontinuous edges so that the matching works reliably? Any suggestions or comments would be greatly appreciated.