
I am developing a project for detecting vehicles' headlights in night scenes, and I am working on a demo in MATLAB. My problem is that I need to find a region of interest (ROI) to keep the computing requirements low. I have researched many papers and they just use a fixed ROI like the one below: the upper part of the frame is ignored and only the bottom part is analysed later.

[image: example of a fixed ROI used in the papers]

However, if the camera is not stable, I think this approach is inappropriate. I want to find a more flexible one that adapts in each frame. My experiment images are shown here:

[images: two example night-scene frames from my experiments]

If anyone has any ideas, please give me some suggestions.

  • What if you look at the V component of the HSV space? I am guessing that would give you a starting point (a minimal sketch follows these comments). – kkuilla Apr 08 '14 at 08:41
  • You mean I should base it on the brightness? I have tried but I don't know how to distinguish between the two regions. – user3049831 Apr 10 '14 at 09:04
  • It might be useful if you updated your question with an image where you manually highlight the regions you want to segment in e.g. red. – kkuilla Apr 10 '14 at 09:32
  • Basically, I want to discard the upper, darker part and only process the lower, brighter part of the image. – user3049831 Apr 11 '14 at 09:13
  • You haven't shown which regions you are after. Add two extra images and show the regions you want to identify. It's not clear which regions you are after, and that means you will have less chance of getting a useful answer. – kkuilla Apr 11 '14 at 09:23
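
A minimal sketch of the V-channel idea from the first comment above, using OpenCV in Python; the file name and the threshold value are assumptions, not something taken from the question:

import cv2

# Hypothetical frame from the night-scene sequence.
im = cv2.imread('night_frame.jpg')

# Convert to HSV and keep only the V (brightness) channel.
hsv = cv2.cvtColor(im, cv2.COLOR_BGR2HSV)
v_channel = hsv[:, :, 2]

# A simple global threshold on brightness as a starting point;
# the value 200 is a guess and would need tuning per scene.
_, bright_mask = cv2.threshold(v_channel, 200, 255, cv2.THRESH_BINARY)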

2 Answers


I would turn the problem around: rather than saying that the headlights are below a certain line, i.e. the horizon, I would say that we are looking for headlights ABOVE a certain line.

Your images have very strong reflections on the tarmac, and we can use that to our advantage. We know that the maximum amount of light in the image is somewhere around the reflection and the headlights. We therefore look for the row with the maximum light and use that as our floor, then look for headlights above this floor.

The idea here is that we look at the profile of the intensities on a row-by-row basis and find the row with the maximum value.

This will only work with dark images (i.e. night) and where the reflection of the headlights onto the tarmac is large.

It will NOT work with images taken in daylight.

I have written this in Python and OpenCV but I'm sure you can translate it to a language of your choice.

import matplotlib.pylab as pl
import cv2

# Load the image
im = cv2.imread('headlights_at_night2.jpg')

# Convert to grey.
grey_image = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)

[image: grey_image]

Smooth the image heavily to mask out any local peaks or valleys. We are trying to smooth the headlights and the reflection so that there will be one nice peak. Ideally, the headlights and the reflection would merge into one area.

grey_image = cv2.blur(grey_image, (15,15))

[image: grey_blurred]

Sum the intensities row-by-row

intensity_profile = []
for r in range(0, grey_image.shape[0]):
    intensity_profile.append(pl.sum(grey_image[r,:]))

Smooth the profile and convert it to a numpy array for easy handling of the data

window = 10
weights = pl.repeat(1.0, window)/window
profile = pl.convolve(pl.asarray(intensity_profile), weights, 'same')

Find the maximum value of the profile. That represents the y coordinate of the headlights and the reflection area. The heat map on the left shows you the distribution. The graph on the right shows you the total intensity value per row.

We can clearly see that the sum of the intensities has a peak. The y coordinate is 371, indicated by a red dot in the heat map and a red dashed line in the graph.

max_value = profile.max()
max_value_location = pl.where(profile == max_value)[0]
horizon = max_value_location

The blue curve in the right-most figure represents the variable `profile`.

The row where we find the maximum value is our floor. We then know that the headlights are above that line. We also know that most of the upper part of the image will be that of the sky and therefore dark.

I display the result below. I know that the lines in both images are at almost the same coordinates, but I think that is just a coincidence.

[images: final1 and final2, the frames with the detected line drawn in]
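
As a concrete follow-up (not part of the original answer), here is a minimal sketch of how the detected floor row could be turned into an ROI, reusing `im` and `horizon` from the code above:

# horizon comes from pl.where, so it is an array; take the first index.
floor_row = int(horizon[0])

# Keep only the part of the frame above the floor; the headlights
# (and some dark sky) end up in this region, the reflection is below it.
roi = im[:floor_row, :]

# Optionally draw the floor line on a copy of the frame for inspection.
vis = im.copy()
cv2.line(vis, (0, floor_row), (im.shape[1] - 1, floor_row), (0, 0, 255), 2)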

kkuilla
  • Thank you for your thorough answer. It helps me a lot. I have one more question: how do you calculate the intensity of each row? Did you just add all the pixel values of a row? – user3049831 Apr 13 '14 at 11:58
  • Yes. You take each row and sum the intensities of that row. You will then get a vector with 1x640 elements (if you have 640 rows), one value for each row. – kkuilla Apr 13 '14 at 15:58
  • @Benoit_11 Thanks. I used this technique on microscopy images for finding the surface of a dish on which cells resided. It's nice to get acknowledgement from someone with similar background and interests (including ice hockey :-) ). – kkuilla May 26 '15 at 07:58
  • @kkuilla, could you please explain how you calculated the heat map? The `profile` and `intensity_profile` variables are vectors, but a heat map should come from a 2D matrix. How were `max_value` and `horizon` used to obtain the resultant image? – Vendetta Apr 01 '20 at 02:33
  • @Vendetta The heat map is just the `grey_image` displayed with a different colourmap. I think I used the jet colourmap. I can't find exactly how I did it but it is very similar to [this](https://stackoverflow.com/a/32427366). – kkuilla Apr 01 '20 at 13:52
  • @kkuilla, thank you, I got it. Could you also please suggest how I should detect headlights in the above image after setting the horizontal line as the ROI? – Vendetta Apr 02 '20 at 01:50
  • @Vendetta You could start by finding the row with the highest intensity. Once you know that, you know which area of the image the lights are in, which gives you a starting point. So you could loop through each row of the image, from the top down to the horizontal line, and take e.g. the average of each row. The highest average will tell you which row the headlights may be on. You would have to experiment with the max, mean, median etc. to see if any would work (a small sketch follows these comments). – kkuilla Apr 02 '20 at 21:04
  • @kkuilla, Thank you that is helpful. But I wonder do I need to go through each row of each the image? is there a way to use above technique where you calculated the max. intensity along row, so that I can set a threshold for the rows to be inspected for highest intensity? – Vendetta Apr 03 '20 at 07:24
  • @Vendetta Apologies but I'm not sure I understand. I don't understand ` each row of each the image?` – kkuilla Apr 03 '20 at 08:20
  • Finding the horizontal line is just a pre-processing step, I guess. Another approach would be to calculate the image gradient. The higher the intensity, the higher the gradient. Then you can use the second to get the location of those maximum intensities. – kkuilla Apr 03 '20 at 08:21
  • @kkuilla, that is a typo, I meant the image, not each image. Thank you for the useful suggestions. By the way, you said "Then you can use the second to get"; what is "second" here? – Vendetta Apr 03 '20 at 11:36
  • @Vendetta `Second` is referring to the second derivative of the image. The first derivative (first order gradient) will give you the peaks. The second (second order gradient) will give you the location of them. It is the same principle as the derivatives studied in school if you studied maths. – kkuilla Apr 06 '20 at 12:08
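
A small sketch of the row-statistics idea discussed in the comments above, reusing `grey_image` and `horizon` from the answer; the choice of the mean is only one of the options mentioned and would need experimenting with:

import numpy as np

# Search only the rows above the floor found earlier.
floor_row = int(horizon[0])
search_region = grey_image[:floor_row, :]

# Per-row statistic; the mean is an assumption, max or median may work better.
row_means = search_region.mean(axis=1)

# The row with the strongest average brightness is a candidate headlight row.
headlight_row = int(np.argmax(row_means))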

You may try downsampling the image.
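
A minimal sketch of this suggestion in Python/OpenCV, to stay consistent with the other answer; the file name and the decimation step of 4 are assumptions:

import cv2

# Hypothetical frame; replace with an actual HD frame from the sequence.
im = cv2.imread('night_frame.jpg')

# Plain decimation: keep every 4th pixel in both directions.
# Pick the largest step that still leaves the headlights detectable.
step = 4
small = im[::step, ::step]

# Run the ROI detection on `small` instead of the full HD frame.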

dhanushka
  • What do you mean? I don't understand. – user3049831 Apr 10 '14 at 09:05
  • From your question I understand that you already know how to detect what you want, and your main concern is speed. If the image frames are too large, you may scale them down and process the scaled-down version. However, you have to pick the scale such that it does not adversely affect the accuracy. – dhanushka Apr 10 '14 at 15:20
  • And downsampling is not the same as resizing. Though I cannot say for sure, it could be faster than interpolation (MATLAB resizing uses interpolation). – dhanushka Apr 10 '14 at 15:42
  • Yes, you are right. I am concentrating on speed, but my goal is to process HD images, so I need to keep the image at its original size. – user3049831 Apr 11 '14 at 09:04
  • You can use the downsampled image frames to detect the regions of interest (ROIs). Once you know the ROIs in the scaled-down version, you can easily project them onto the HD image, then extract and process those regions (a small sketch follows these comments). – dhanushka Apr 11 '14 at 10:23
  • Hope the following example helps if what I'm suggesting is not clear to you. Some time back I had to localize a check in an image of size ~3MP. I downsampled the image and ran my localization algorithm on that, extracted the ROI, projected it to the original ~3MP image and extracted the check. Then I did the required processing (like binarization) on the region I extracted from the ~3MP image. – dhanushka Apr 11 '14 at 10:35
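
A small sketch of the project-back step described in these comments, assuming a hypothetical ROI `(x, y, w, h)` named `roi_small` was found on the decimated frame and `step` is the decimation factor used above:

# Hypothetical ROI found on the decimated frame, in (x, y, w, h) form.
x, y, w, h = roi_small

# Scale the coordinates back up by the decimation factor.
x_hd, y_hd, w_hd, h_hd = x * step, y * step, w * step, h * step

# Extract the corresponding region from the full-resolution frame and
# run the expensive processing (e.g. binarization) only on it.
roi_hd = im[y_hd:y_hd + h_hd, x_hd:x_hd + w_hd]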