
I'm working on a project in which I have to detect traffic lights (circles, obviously). I'm currently working with a sample image I took at a spot, but despite all my efforts I can't get the code to detect the right circle (the light).

Here is the code:

# import the necessary packages  
import numpy as np  
import cv2

image = cv2.imread('circleTestsmall.png')
output = image.copy()
# Apply Gaussian blur to smooth the image
blur = cv2.GaussianBlur(image,(9,9),0)
gray = cv2.cvtColor(blur, cv2.COLOR_BGR2GRAY)
# detect circles in the image
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, 1.2, 200)

# ensure at least some circles were found
if circles is not None:
    # convert the (x, y) coordinates and radius of the circles to integers
    circles = np.round(circles[0, :]).astype("int")

    # loop over the (x, y) coordinates and radius of the circles
    for (x, y, r) in circles:
        # draw the circle in the output image, then draw a rectangle
        # corresponding to the center of the circle
        cv2.circle(output, (x, y), r, (0, 255, 0), 4)
        cv2.rectangle(output, (x - 5, y - 5), (x + 5, y + 5), (0, 128, 255), -1)

# show the output image
cv2.imshow("output", output)
cv2.imshow('Blur', blur)
cv2.waitKey(0)

The image in which I want to detect the circle: Image. The circle I want to detect is highlighted.

This is the output image: Output Image.

I tried playing with the Gaussian blur radius and the minDist parameter of the Hough transform, but didn't have much success.
Can anybody point me in the right direction?

P.S. Some off-topic questions, but crucial to my project:
1. My computer takes about 6-7 seconds to show the final image. Is my code bad, or is my computer slow? My specs are: Intel i3 M350 2.6 GHz (first gen), 6 GB RAM, Intel HD Graphics 1000 1625 MB.
2. Will the Hough transform work directly on a binary thresholded image?
3. Will this code run fast enough on a Raspberry Pi 3 to be realtime? (I have to mount it on a moving autonomous robot.)

Thank you!

Mad Physicist
  • If it takes 6-7 seconds on your desktop for **ONE** picture, how do you expect it to work on the much lighter Raspberry in real time, probably taking 10 pictures a second? So you probably need to optimize. – Jan Mar 29 '16 at 13:20
  • It might be faster on raspberry since you don't need to draw the circles, but still – Jan Mar 29 '16 at 13:21
  • There is nothing wrong with your computer and your code is probably OK. Hough transform takes a long time. If you look at [what it does under the hood](https://en.wikipedia.org/wiki/Hough_transform), it will make sense why. It was never intended to be a realtime filter. And yes, you should apply it to the binary threshold image directly. – Mad Physicist Mar 29 '16 at 13:23
  • @JeD the real purpose of the code is to detect a traffic light in a binary thresholded image, and as soon as the light goes off, send a command to a connected arduino to run the motors. That's it. The robot will be stationary at the time of detection. Moreover can you please suggest some optimizations? –  Mar 29 '16 at 13:30
  • @MadPhysicist Very well, but how do I get it to detect the circle in this image? –  Mar 29 '16 at 13:32
  • @YaddyVirus I don't have much experience with image detection, but off the top of my head:--- 1.) Since the circle is always inside of a black rectangle (the traffic light), it might be easier to find the rectangle first, since it is black *and* rectangular.--- 2.) Downsampling the image should be possible, since you don't need a very high resolution.--- 3.) Try figuring out if you can decide where in the image the circle probably is *beforehand*, i.e. always at the top or right or something. --- No idea if any of this will work; as I said, very little experience – Jan Mar 29 '16 at 13:36
  • You don't need to detect a circle at all. You are looking for a static blob of either red or green pixels that will disappear/change position once the traffic light switches. Calculate difference images and search for a big change... Don't overcomplicate it – Piglet Mar 29 '16 at 13:41
  • @JeD what if I threshold the image. Just as it is supposed to be in the real situation? –  Mar 29 '16 at 13:47
  • @YaddyVirus. You should only use the threshold image for Hough. It works much better with a binary image. – Mad Physicist Mar 29 '16 at 13:48
  • @Piglet but what if there is something in the frame which has the same intensity of red as the traffic light? Since the detection has to be in direct sunlight, there is a good chance of this happening. –  Mar 29 '16 at 13:50
  • Also, indent your code properly. What you have here will result in an error. – Mad Physicist Mar 29 '16 at 13:50
  • @MadPhysicist that is confusing... I have an image which after adjusting the min max HSV values shows the object of the required color in white and the rest in black, that's a binary thresholded image right? –  Mar 29 '16 at 13:54
  • @YaddyVirus. A threshold image is a binary image (all zeros and ones). You can get it by applying any number of algorithms to your image. A simple one is `x > blah`. – Mad Physicist Mar 29 '16 at 13:59
  • @MadPhysicist Umm, I was using `cv2.inRange` and giving it values from trackbars for my thresholding. Never heard of `x > blah`. –  Mar 29 '16 at 14:02
  • @YaddyVirus. Take a look at this: http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html#object-tracking. You should be applying Hough to the mask image (the middle of the three shown) or the original image passed through an edge filter. Look up Sobel or similar filter. – Mad Physicist Mar 29 '16 at 14:06
  • @YaddyVirus the only thing that is similar to a traffic light and that will turn dark or bright over time is a traffic light or something that should not be close to a traffic light. So I guess its perfectly fine to check for a large blob that appears / disappears in a difference image. And as JeD said you will always have some dark frame around a traffic light. At least where I live. – Piglet Mar 29 '16 at 14:34
  • @Piglet okay, so instead of finding circles in a thresholded image, find red blobs of the highest intensity in the frame. Is it? –  Mar 29 '16 at 15:05
  • @YaddyVirus for example. there are countless ways. hough transform without clever parameters is not one of them :) keep it simple and stupid. especially if you don't have much processing power – Piglet Mar 29 '16 at 15:35
  • @Piglet I'm on it dude. Thanks for the help! –  Mar 29 '16 at 15:40

3 Answers


First of all, you should restrict your parameters a bit.

Please refer to: http://docs.opencv.org/2.4/modules/imgproc/doc/feature_detection.html#houghcircles

At least set reasonable values for the min and max radius. Try to find that one particular circle first. If you succeed, increase your radius tolerance.

Hough transform is a brute-force method: it will try every possible radius for every edge pixel in the image. That's why it is not very suitable for real-time applications, especially if you do not provide proper parameters and input. You have no radius limits at the moment, so it will calculate hundreds, if not thousands, of circle candidates for every pixel...

In your case the traffic light is also not very round, so the accumulated result won't be very good. Try finding highly saturated, bright, compact blobs of a reasonable size instead. That should be faster and more robust.

You can further reduce processing time if you restrict the image size. I guess you can assume that the traffic light will always be in the upper half of your image. So omit the lower half. Traffic lights will always be green, red or yellow. Remove everything that is not of that color... I think you get what I mean...

Piglet
  • Umm actually, ultimately this code is to be applied to a binary thresholded image, which filters out all the objects that are as red as the traffic light, so I guess the workload would be less there. Moreover, is there any way to calculate the radius of a circle in an image so that I could get a rough estimate of the min and max radius range? What about controlling the Hough transform parameters with a trackbar? Could you show me an example of that? I know how to make a trackbar, but I don't know how to control function parameters with it... –  Mar 29 '16 at 13:46
  • @YaddyVirus use some image processing tool like ImageJ or Gimp to measure the radius of some typical traffic lights. It's half the width in pixels. Then add a reasonable tolerance so you will always find your traffic light but skip calculations for too-small and too-large circle candidates. – Piglet Mar 29 '16 at 14:32

I think that you should first perform a color segmentation based on the stoplight colors. It will tremendously reduce the ROI. Then you can apply the Hough Transform on the ROI edges only (because you want the contour).

FiReTiTi

Another restriction: only accept circles whose interior color is homogeneous. This would throw out all the false hits in the example above.

Mark
  • http://www.pyimagesearch.com/2014/07/07/color-quantization-opencv-using-k-means-clustering/ This website in general is excellent for CV tutorials. – Mark Mar 29 '16 at 17:17
  • That's where I got this tutorial from... :P –  Mar 29 '16 at 17:50