
I'm working on an Android project in which I have to calculate or estimate the luminance from the back camera in real time (that is, taking the frames from the back camera and getting a luminance value every time a frame changes). Unfortunately, I cannot access the camera's parameters in real time, so I cannot really know how the camera is adjusting them when facing the light (the ISO, the shutter time, or the aperture).

Basically, the camera changes its parameters depending on the intensity of the light that enters it, so calculating the mean of the grayscale image, for example, gives misleading values in some cases (for example, when facing a point light source, everything around it becomes dark because of the camera's auto-correction).

So I asked myself whether there is a method that uses the pixel values of the frames to determine the luminance of the frame captured by the camera: a filter, a calculation method, an algorithm, or something like that, which can map pixel values to luminance values without knowing the camera's parameters.

I mean detecting the light (getting a quantitative value) from the pixel values even while the camera is auto-correcting by changing its parameters (which are unreachable with the current Android API), because in my opinion the pixels are the only information we can use to learn something about the luminance.

This will really help me to realize my project! If something is unclear, let me know ;)

Thank you :)

3arbouch
  • I can't say for sure it's impossible, but I would be very surprised if you can. – Hammer Dec 19 '12 at 01:07
  • After reading the comments on the current answer, I'm still not sure what you want to do. Suppose you have a value L (luminance) for each pixel; according to the comments, you have trouble calculating the mean based on L because there might be too many dark points. To solve that you can consider only those L above some threshold, thus discarding the dark points. Now suppose you are not pointing the camera at a light source. The previous observations make no sense then, because the whole scene might be naturally dark and you don't want to discard it entirely. – mmgp Dec 19 '12 at 02:36
  • I totally agree with you; for that we have to add another parameter, for example calculating the standard deviation together with the mean, which could help us differentiate the cases. As you mentioned, when facing a light source I think the standard deviation would be at a medium value, while for a totally dark area it would be very low. So I asked myself whether this can help me tell the cases apart: if I have a medium standard deviation, I know that I'm facing a light source, so I use the threshold method you mentioned. – 3arbouch Dec 19 '12 at 11:59
  • And if I have a low standard deviation, it can indicate that I'm facing a dark area, so I don't use the threshold and take the real value of L. Tell me if this makes sense! Thank you :) – 3arbouch Dec 19 '12 at 12:05
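The mean/standard-deviation heuristic discussed in these comments could be sketched as below. This is only an illustration of the idea, not a tested solution: the class name, the two cut-off constants, and their values are my own assumptions and would need tuning against real frames.

```java
/** Hypothetical sketch of the mean + standard-deviation heuristic from the comments. */
class LumaEstimate {
    // Assumed cut-offs, not tuned values:
    static final double STDDEV_CUTOFF = 40.0; // above this, treat the scene as a point-light case
    static final int    DARK_CUTOFF   = 30;   // pixels below this count as auto-darkened

    /** Estimate scene luminance from per-pixel Y values (0..255). */
    static double estimate(int[] y) {
        double mean = 0;
        for (int v : y) mean += v;
        mean /= y.length;

        double var = 0;
        for (int v : y) var += (v - mean) * (v - mean);
        double std = Math.sqrt(var / y.length);

        // Low spread: uniformly dark (or flat) scene, keep the plain mean.
        if (std < STDDEV_CUTOFF) return mean;

        // High spread: likely a light source with auto-darkened surroundings,
        // so average only the pixels above the dark cut-off.
        long sum = 0;
        int n = 0;
        for (int v : y) {
            if (v >= DARK_CUTOFF) { sum += v; n++; }
        }
        return n > 0 ? (double) sum / n : mean;
    }

    public static void main(String[] args) {
        System.out.println(estimate(new int[]{10, 10, 10, 10})); // 10.0: flat dark scene, plain mean
        System.out.println(estimate(new int[]{0, 0, 255, 255})); // 255.0: high spread, dark pixels dropped
    }
}
```

Note mmgp's caveat still applies: the cut-offs separate the two cases only if a genuinely dark scene really does have a low standard deviation, which is an assumption worth verifying.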

1 Answer


This is fairly straightforward. You should use a SurfaceView and the camera preview; that way you get each frame as a byte array. The default preview format is YUV, where Y is luminance and U/V are chrominance, so you can use the Y plane directly to calculate the average luminance of the image. If you're working with BGR data instead, you can calculate luminance per pixel as: Y = 0.2126 R + 0.7152 G + 0.0722 B
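For RGB/BGR data, that weighting is just a per-pixel dot product. A minimal plain-Java sketch (the class and method names here are mine, not part of any Android or OpenCV API):

```java
/** Hypothetical helper: Rec. 709 relative luminance from 8-bit RGB components. */
class LumaUtils {
    static double rec709Luma(int r, int g, int b) {
        // Weights sum to 1.0, so the result stays in the 0..255 range.
        return 0.2126 * r + 0.7152 * g + 0.0722 * b;
    }

    public static void main(String[] args) {
        System.out.println(rec709Luma(255, 255, 255)); // pure white -> ~255.0
        System.out.println(rec709Luma(0, 0, 0));       // pure black -> 0.0
    }
}
```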

class Preview extends ViewGroup implements SurfaceHolder.Callback {

    SurfaceView mSurfaceView;
    SurfaceHolder mHolder;

    Preview(Context context) {
        super(context);

        mSurfaceView = new SurfaceView(context);
        addView(mSurfaceView);

        // Install a SurfaceHolder.Callback so we get notified when the
        // underlying surface is created and destroyed.
        mHolder = mSurfaceView.getHolder();
        mHolder.addCallback(this);
        mHolder.setType(SurfaceHolder.SURFACE_TYPE_PUSH_BUFFERS);
    }

    ...
}

Implementing Camera.PreviewCallback on your class and registering it with Camera.setPreviewCallback() means this method is called for every frame:

public void onPreviewFrame(byte[] frame, Camera cam) {
    // frame holds the raw preview data (YUV by default); process it here
}
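With the default NV21 preview format, the first width × height bytes of the frame are the Y plane, so the average luminance reduces to a mean over that prefix. A sketch in plain Java, callable from onPreviewFrame with the sizes from Camera.Parameters.getPreviewSize() (the helper class and method names are mine):

```java
/** Hypothetical helper: mean of the Y (luminance) plane of an NV21 preview frame. */
class YMean {
    static double meanLuma(byte[] frame, int width, int height) {
        // In NV21 the Y plane occupies the first width*height bytes,
        // followed by the interleaved VU chroma data.
        int n = width * height;
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += frame[i] & 0xFF; // Y bytes are unsigned
        }
        return (double) sum / n;
    }

    public static void main(String[] args) {
        // A 2x2 frame: Y plane {10, 20, 30, 40} followed by two chroma bytes.
        byte[] frame = {10, 20, 30, 40, 0, 0};
        System.out.println(meanLuma(frame, 2, 2)); // 25.0
    }
}
```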

OpenCV has some nice functions for doing some of these calculations.

alistair
  • I'm also using the OpenCV libraries to help me with the image processing! – 3arbouch Dec 18 '12 at 23:11
  • I've edited appropriately. You don't necessarily need OpenCV just to do this calculation. – alistair Dec 18 '12 at 23:16
  • I have just tried that: protected Bitmap processFrame(byte[] data) { mYuv.put(0, 0, data); Core.split(mYuv, listOfYUVChannels); Mat YChannel = listOfYUVChannels.get(0); Core.mean(YChannel); } — and I still get misleading values when I point the camera at a light source! I will upload screenshots if possible. – 3arbouch Dec 18 '12 at 23:35
  • Basically, when I put the camera in front of a light source, all the area surrounding it becomes very dark, so I get misleading values for the mean of the Y channel! This does not really indicate the luminance, because I'm actually facing a light source, which means I should get big values! Here is a screenshot of what I get: https://www.dropbox.com/s/ag5p1arf71sxm4p/ScreenShot.jpg – 3arbouch Dec 18 '12 at 23:44
  • You're taking a mean of the whole image, right? There is a lot of black in that image, which will cancel out the influence of the light source in terms of the mean. – alistair Dec 18 '12 at 23:53
  • Yes, I'm taking the mean of the whole image here. The dark area is due to the auto-correction of the camera! I don't know whether I can just take the mean value of only the bright part of the image? – 3arbouch Dec 19 '12 at 00:06
  • It depends on why you want to find the luminance. You should make sure the camera is not doing white balancing if you want a more accurate result. Can you mark my answer as correct? – alistair Dec 19 '12 at 00:09
  • Yes, the camera is doing white balancing, but I can't control it with the current Android API available on the tablet (Samsung Note 10.1). I cannot lock the camera's white balance for each frame at the current API level! In fact, I have to find another way of doing it, independent of the camera's auto-correction, because I do not have access to these parameters and cannot modify them! – 3arbouch Dec 19 '12 at 00:21
  • Finally, your answer is correct but not complete! It does not really solve the real problem asked in the question and specified in the comments. – 3arbouch Dec 19 '12 at 00:48