
I am reading an image from a camera that I need to convert with cv2.COLOR_RGB2BGR. Below is a temporary workaround for what I am trying to achieve:

import cv2
from skimage import transform, io

...
_, img = cam.read()
img = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)
cv2.imwrite("temp.png", img)
img = io.imread("temp.png", as_gray=True)
img = transform.resize(img, (320, 240), mode='symmetric', preserve_range=True)

I found one way to do this conversion from this post; however, it seems that the image data is not the same as when I read the same image from a path.

I've also found from this documentation that I can use img_as_float(cv2_img), but this conversion does not produce the same result as io.imread("temp.png", as_gray=True).


What is the proper way to do this conversion efficiently? Should I first convert the image back to RGB and then use img_as_float()?

  • Your question is missing a lot of necessary details, and it's quite confusing, to be honest. I assume `_, img = cam.read()` is the OpenCV VideoCapture call!? Then, why do you convert using `cv2.COLOR_RGB2BGR`? `img` already is in BGR color space when using OpenCV functions. Then, you save a color image, but load that image as grayscale!? What conversion do you actually want to achieve? Can you please describe the desired conversion in words? In general: OpenCV images are skimage images, since both libraries use NumPy arrays for image representation. Color space and data type handling vary, yes. – HansHirse Feb 25 '21 at 07:59
  • @HansHirse I referenced in the question that that is the incoming format of the cv2 image I am receiving. The code is for illustration but I do not actually have control over the first 2 lines. From what I understand, using io.imread(as_gray=True) would produce a slightly different image than converting the image to grayscale using cv2. – Ietpt123 Feb 25 '21 at 08:55
  • @HansHirse all I need to do is convert this cv2 BGR image to a scikit-image image that would be identical to first saving the image to disk and then reloading it using io.imread(), because again, I read that direct conversion with the methods I found would yield a similar but slightly different image than writing and then reloading. I'm also asking if I'm wrong about this assumption as well. – Ietpt123 Feb 25 '21 at 08:55

1 Answer


I guess the basic problem you encounter is the different luma calculation used by OpenCV and scikit-image (a quick numeric check of both formulas follows the list):

  • OpenCV uses:
    Y = 0.299 * R + 0.587 * G + 0.114 * B
    
  • scikit-image uses:
    Y = 0.2125 * R + 0.7154 * G + 0.0721 * B
    

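As a quick sanity check, these are the values the two formulas give for a single, arbitrarily chosen example pixel:

r, g, b = 200, 100, 50   # arbitrary example pixel

# OpenCV's weights (Rec. 601)
y_opencv = 0.299 * r + 0.587 * g + 0.114 * b       # 124.2

# scikit-image's rgb2gray weights (Rec. 709)
y_skimage = 0.2125 * r + 0.7154 * g + 0.0721 * b   # ~117.6

print(y_opencv, y_skimage)

So the two conversions already differ by several gray levels for a single pixel.
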
Let's have some tests – using the following image for example:

Paddington

import cv2
import numpy as np
from skimage import io

# Assuming we have some kind of "OpenCV image", i.e. BGR color ordering
cv2_bgr = cv2.imread('paddington.png')

# Convert to grayscale
cv2_gray = cv2.cvtColor(cv2_bgr, cv2.COLOR_BGR2GRAY)

# Save BGR image
cv2.imwrite('cv2_bgr.png', cv2_bgr)

# Save grayscale image
cv2.imwrite('cv2_gray.png', cv2_gray)

# Convert to grayscale with custom luma
cv2_custom_luma = np.uint8(0.2125 * cv2_bgr[..., 2] + 0.7154 * cv2_bgr[..., 1] + 0.0721 * cv2_bgr[..., 0])

# Load BGR saved image using scikit-image with as_gray; becomes np.float64
sc_bgr_w = io.imread('cv2_bgr.png', as_gray=True)

# Load grayscale saved image using scikit-image without as_gray; remains np.uint8
sc_gray_wo = io.imread('cv2_gray.png')

# Load grayscale saved image using scikit-image with as_gray; remains np.uint8
sc_gray_w = io.imread('cv2_gray.png', as_gray=True)

# OpenCV grayscale = scikit-image grayscale loaded image without as_gray? Yes.
print('Pixel mismatches:', cv2.countNonZero(cv2.absdiff(cv2_gray, sc_gray_wo)))
# Pixel mismatches: 0

# OpenCV grayscale = scikit-image grayscale loaded image with as_gray? Yes.
print('Pixel mismatches:', cv2.countNonZero(cv2.absdiff(cv2_gray, sc_gray_w)))
# Pixel mismatches: 0

# OpenCV grayscale = scikit-image BGR loaded (and scaled) image with as_gray? No.
print('Pixel mismatches:', cv2.countNonZero(cv2.absdiff(cv2_gray, np.uint8(sc_bgr_w * 255))))
# Pixel mismatches: 131244

# OpenCV grayscale with custom luma = scikit-image BGR loaded (and scaled) image with as_gray? Almost.
print('Pixel mismatches:', cv2.countNonZero(cv2.absdiff(cv2_custom_luma, np.uint8(sc_bgr_w * 255))))
# Pixel mismatches: 1

You see:

  • When opening the grayscale image, scikit-image simply keeps the np.uint8 values, regardless of whether as_gray=True is used.
  • When opening the color image with as_gray=True, scikit-image applies rgb2gray, scales all values to 0.0 ... 1.0, and thus uses np.float64. Even after scaling back to 0 ... 255 and np.uint8, there are a lot of pixel mismatches between this image and the OpenCV grayscale image – due to the different luma calculations.
  • When calculating the luma manually, according to rgb2gray, from the OpenCV BGR image, the result is almost identical to the image loaded with as_gray=True. The single pixel mismatch might be due to floating point inaccuracies.
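
To answer the original question directly, here is a minimal sketch of the temp-file-free route, assuming img is the BGR np.uint8 frame from cam.read() (paddington.png only stands in for that frame here): convert BGR to RGB and apply scikit-image's rgb2gray, which uses the 0.2125/0.7154/0.0721 weights and returns np.float64 values in 0.0 ... 1.0, just like io.imread(..., as_gray=True) does for a color image. Since PNG is lossless, this should give the same values as the save-and-reload workaround.

import cv2
from skimage import color, transform

# Stand-in for `_, img = cam.read()`; any BGR np.uint8 image will do
img = cv2.imread('paddington.png')

# BGR -> RGB, then the same luma conversion io.imread(..., as_gray=True) applies
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img_gray = color.rgb2gray(img_rgb)   # np.float64, range 0.0 ... 1.0

# Resize exactly as in the original workaround
img_resized = transform.resize(img_gray, (320, 240), mode='symmetric', preserve_range=True)
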
----------------------------------------
System information
----------------------------------------
Platform:      Windows-10-10.0.16299-SP0
Python:        3.9.1
NumPy:         1.20.1
OpenCV:        4.5.1
scikit-image:  0.18.1
----------------------------------------
HansHirse