
I'm trying to detect the status of an LED light using an Android phone's camera. The camera app uses the onPreviewFrame callback to grab each frame and works with the luma part of the YUV data: each frame arrives in a buffer in YUV format, the first width*height bytes are the luma plane and hold the grayscale value of the corresponding pixels, and those are the bytes I base my decision on.
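For reference, this is roughly how the per-frame luma mean could be computed. It is a minimal sketch, not code from the question, and it assumes the default NV21 preview format (Y plane first, width*height bytes); the class and method names are made up for illustration:

```java
// Hypothetical helper: computing the mean luma of one preview frame.
// Assumes the NV21 layout that Camera.onPreviewFrame delivers by default,
// i.e. the first width*height bytes are the Y (grayscale) plane.
public final class LumaUtils {

    /** Returns the average luminance (0-255) of the Y plane of an NV21 frame. */
    public static double meanLuma(byte[] frame, int width, int height) {
        final int pixels = width * height;
        long sum = 0;
        for (int i = 0; i < pixels; i++) {
            sum += frame[i] & 0xFF;   // bytes are signed in Java, mask to 0-255
        }
        return (double) sum / pixels;
    }
}
```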

Currently, for each frame I compute the grayscale mean and compare it against a threshold to decide whether the bit is a 0 or a 1; the threshold itself is built from the grayscale mean of the first 50 frames. This algorithm is too simple and fragile to hold up in different situations, and I'm looking to make it more robust.
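A minimal sketch of the calibrate-then-threshold scheme described above, assuming the threshold is simply the average of the first 50 per-frame means; the class name and the choice to return null during calibration are illustrative, not from the question:

```java
// Hypothetical sketch of the calibration + decision loop: average the mean
// luma of the first CALIBRATION_FRAMES frames to build the threshold, then
// classify each later frame as bit 1 (LED on) or 0 (LED off).
public final class LedBitDetector {

    private static final int CALIBRATION_FRAMES = 50;

    private double calibrationSum = 0;
    private int calibrationCount = 0;
    private double threshold = Double.NaN;

    /** Feed one frame's mean luma; returns null during calibration, else the decoded bit. */
    public Integer onFrameMean(double meanLuma) {
        if (calibrationCount < CALIBRATION_FRAMES) {
            calibrationSum += meanLuma;
            calibrationCount++;
            if (calibrationCount == CALIBRATION_FRAMES) {
                threshold = calibrationSum / CALIBRATION_FRAMES;
            }
            return null;
        }
        return meanLuma > threshold ? 1 : 0;
    }
}
```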

I have seen a few questions related to my problem (**), but none of them asks for an actual algorithm, which is what I'm looking for, as I am new to image processing.

My questions -

  1. What algorithms should I implement for better decision making?

  2. How (if needed) can I create a more precise threshold number?

  3. Are there any sources I can use (preferably in java)?

*The LED is fixed and the phone is handheld.

**OpenCV: Detect blinking lights in a video feed & openCV detect blinking lights

  • You're leaving out a lot of probably relevant info. Are both the LED and camera fixed in place? If so, why not just use the same approach as you currently use, except considering only a small rectangle around the LED instead of the entire frame. If you want to correct for changing ambient light levels, you could also measure the average intensity of some rectangle that does *not* contain the LED, and divide the average intensity of the LED-containing rectangle by that. – j_random_hacker Apr 25 '16 at 13:18
  • You're right, I forgot to mention it: the LED is fixed but the phone is handheld. The transmission is fairly short (5 secs), so there won't be much change in the ambient light; basically most of the changes are caused by the hand movement. – Voly Apr 25 '16 at 13:28
  • Then I expect you will need all kinds of object recognition/tracking stuff, I'm afraid. – j_random_hacker Apr 25 '16 at 13:30
  • Not necessarily. It doesn't really matter to me that the light changes its place, because I work on each frame separately; all there is to do is check for an exceptionally bright area anywhere in the frame, and from that I can deduce the LED is on, which is what I'm currently doing, but not so well. I'm looking for a way (an algorithm) to make a better decision. That being said, I guess tracking the LED to reduce the search area would help, but I'm not quite sure it's the best way to go. – Voly Apr 25 '16 at 13:40
  • If the camera will always be (roughly) the same *distance* from the LED, then it should always be around the same, known size. In that case you could measure *every* rectangle of the appropriate size in the image, and compare its average intensity to the average intensity of the rest of the image. Remember the largest difference in average intensities. If a bright LED is present somewhere in the frame, you should find (at least one) rectangle that produces a large difference; otherwise not. – j_random_hacker Apr 25 '16 at 13:49
  • The camera will be at roughly the same distance only within a single recording, since the user won't move much while using the app, but for different users the distance will change, which means I can't rely on a fixed distance. I might still be able to work with that idea by starting with a large rectangle and shrinking it until I find the LED (or not), but this could take a lot of processing time. BTW, thank you for commenting and helping. – Voly Apr 25 '16 at 14:03
  • You're welcome :) Hope you come up with something that works! – j_random_hacker Apr 25 '16 at 15:06
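A minimal sketch of the sliding-window idea suggested in the comments above: build a summed-area table over the Y plane, then report the largest gap between a window's mean luma and the mean of the rest of the frame. The window size, step, and all names here are illustrative assumptions, not something from the thread:

```java
// Hypothetical sketch: find the square window whose mean luma exceeds the
// mean of the rest of the frame by the largest margin, using an integral
// image so every window position can be evaluated in constant time.
public final class BrightSpotFinder {

    /** Largest (window mean - rest-of-frame mean) over all win x win windows. */
    public static double maxContrast(byte[] yPlane, int width, int height,
                                     int win, int step) {
        // Integral image: sat[r][c] = sum of luma over rows [0,r) and cols [0,c).
        long[][] sat = new long[height + 1][width + 1];
        for (int r = 0; r < height; r++) {
            long rowSum = 0;
            for (int c = 0; c < width; c++) {
                rowSum += yPlane[r * width + c] & 0xFF;
                sat[r + 1][c + 1] = sat[r][c + 1] + rowSum;
            }
        }
        long total = sat[height][width];
        long totalPixels = (long) width * height;
        double best = Double.NEGATIVE_INFINITY;
        for (int r = 0; r + win <= height; r += step) {
            for (int c = 0; c + win <= width; c += step) {
                long winSum = sat[r + win][c + win] - sat[r][c + win]
                            - sat[r + win][c] + sat[r][c];
                double winMean = (double) winSum / ((long) win * win);
                double restMean = (double) (total - winSum)
                                / (totalPixels - (long) win * win);
                best = Math.max(best, winMean - restMean);
            }
        }
        return best;
    }
}
```

A frame could then be classified as "LED on" when maxContrast exceeds a calibrated margin; because every window position is checked, this tolerates the LED drifting around the frame due to hand movement.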

0 Answers