9

I have been looking into this conversion for a while. What are the ways of converting an RGB image to a YUV image and accessing the Y, U and V channels using Python on Linux (using OpenCV, skimage, etc.)?

Update: I used opencv

import cv2

image = cv2.imread('image.png')   # load the source image (filename here is just a placeholder)

img_yuv = cv2.cvtColor(image, cv2.COLOR_BGR2YUV)
y, u, v = cv2.split(img_yuv)

cv2.imshow('y', y)
cv2.imshow('u', u)
cv2.imshow('v', v)
cv2.waitKey(0)

and got this result, but they all seem gray. I couldn't get a result like the one shown on the Wikipedia page.

Am I doing something wrong?

[screenshot: the y, u and v windows all appear gray]

hma
  • What do you want to do after you get a pile of lovely YUV values? What OS are you using? – Mark Setchell May 15 '17 at 15:38
  • I am using Linux, trying to decrease the detail in the images to use them in a machine learning project. I found `yuv=cv2.cvtColor(image, cv2.COLOR_BGR2YUV)`, but when I try to access the Y channel like this `img_yuv[:,:,0]`, or either of the other 2 indices, they all look gray. So I thought maybe I was doing something wrong from the beginning and wanted to learn better – hma May 15 '17 at 15:45
  • 3
    That's because they are single channel images, which `imshow` treats as grayscale. If you want them displayed in false color, such as in the example you mention, you will need to apply a colormap (or LUT) mapping the U and V values to appropriate BGR values which can then be displayed. – Dan Mašek May 15 '17 at 16:28
  • So from my understanding, it is split correctly, but since they are single-channel images I am seeing them in gray. Right? I just want to confirm that, because I don't know about LUTs – hma May 15 '17 at 17:08
  • 3
    Correct. Look at the grass, which is green, and it is dark in U and V because both are low. Look at the sky, which is light (high) in U and dark (low) in V because it is blue. – Mark Setchell May 15 '17 at 18:05
  • Thank you Dan Mašek and Mark Setchell. I see things better now. If one of you adds the explanation as an answer, I can accept it. – hma May 15 '17 at 18:22
  • Go @DanMašek ;-) – Mark Setchell May 15 '17 at 18:50

2 Answers

17

NB: The YUV <-> RGB conversions in OpenCV versions prior to 3.2.0 are buggy! For one, in many cases the order of the U and V channels was swapped. As far as I can tell, 2.x is still broken as of the 2.4.13.2 release.
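
If you are not sure which behaviour your build has, a quick sanity check (my own diagnostic sketch, not part of the example below) is to convert a single pure-blue pixel: with the documented formulas, U should come out high (roughly 239) and V low (roughly 103); if you see the opposite, your version swaps the two channels.

import cv2
import numpy as np

print(cv2.__version__)

# one pure-blue pixel (OpenCV images are BGR)
pixel = np.zeros((1, 1, 3), dtype=np.uint8)
pixel[0, 0] = (255, 0, 0)

yuv = cv2.cvtColor(pixel, cv2.COLOR_BGR2YUV)
print(yuv[0, 0])  # expect roughly [ 29 239 103] -> Y, U, V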


The reason they appear grayscale is that in splitting the 3-channel YUV image you created three 1-channel images. Since the data structures that contain the pixels do not store any information about what the values represent, imshow treats any 1-channel image as grayscale for display. Similarly, it would treat any 3-channel image as BGR.

What you see in the Wikipedia example is a false color rendering of the chrominance channels. In order to achieve this, you need to either apply a pre-defined colormap or use a custom look-up table (LUT). This will map the U and V values to appropriate BGR values which can then be displayed.
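
If you only need a quick false-color view and don't care about matching the Wikipedia colors, OpenCV's built-in colormaps also work; here is a minimal sketch (COLORMAP_JET is an arbitrary choice, and 'shed.png' is just the example file name used further down):

import cv2

img = cv2.imread('shed.png')
img_yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)
y, u, v = cv2.split(img_yuv)

# apply a built-in colormap to the single-channel U and V images
u_false = cv2.applyColorMap(u, cv2.COLORMAP_JET)
v_false = cv2.applyColorMap(v, cv2.COLORMAP_JET)

cv2.imshow('U (false color)', u_false)
cv2.imshow('V (false color)', v_false)
cv2.waitKey(0)

The custom lookup tables below are what actually reproduce the Wikipedia look.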

As it turns out, the colormaps used for the Wikipedia example are rather simple.

Colormap for U channel

Simple progression between green and blue:

colormap_u = np.array([[[i,255-i,0] for i in range(256)]],dtype=np.uint8)


Colormap for V channel

Simple progression between green and red:

colormap_v = np.array([[[0,255-i,i] for i in range(256)]],dtype=np.uint8)


Visualizing YUV Like the Example

Now we can put it all together to recreate the example:

import cv2
import numpy as np


def make_lut_u():
    # 256-entry BGR lookup table: green at 0, blue at 255
    return np.array([[[i,255-i,0] for i in range(256)]],dtype=np.uint8)

def make_lut_v():
    # 256-entry BGR lookup table: green at 0, red at 255
    return np.array([[[0,255-i,i] for i in range(256)]],dtype=np.uint8)


img = cv2.imread('shed.png')

img_yuv = cv2.cvtColor(img, cv2.COLOR_BGR2YUV)
y, u, v = cv2.split(img_yuv)

lut_u, lut_v = make_lut_u(), make_lut_v()

# Convert back to BGR so we can apply the LUT and stack the images
y = cv2.cvtColor(y, cv2.COLOR_GRAY2BGR)
u = cv2.cvtColor(u, cv2.COLOR_GRAY2BGR)
v = cv2.cvtColor(v, cv2.COLOR_GRAY2BGR)

# Map U and V through the lookup tables to get the false-color versions
u_mapped = cv2.LUT(u, lut_u)
v_mapped = cv2.LUT(v, lut_v)

result = np.vstack([img, y, u_mapped, v_mapped])

cv2.imwrite('shed_combo.png', result)

Result:

Composite of original and Y, U, V channels

Dan Mašek
1

Using the LUT values as described might be exactly how the Wikipedia article image was made, but the description implies the mapping is arbitrary and perhaps used only because it's simple. It isn't arbitrary; the results essentially match how RGB <-> YUV conversions work. If you are using OpenCV, the BGR2YUV and YUV2BGR conversions give the result using the conversion formula found in the same Wikipedia YUV article. (My images generated using Java were slightly darker but otherwise the same.)
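
To check that claim for yourself, you can compare OpenCV's conversion with the SDTV/BT.601 formulas given on the Wikipedia page for a single pixel (a rough sketch in Python, since that is what the question uses; OpenCV offsets U and V by 128 for 8-bit images):

import cv2
import numpy as np

b, g, r = 60, 120, 200                       # an arbitrary test pixel (BGR)

# formulas from the Wikipedia YUV article, offset by 128 for 8-bit storage
y = 0.299 * r + 0.587 * g + 0.114 * b
u = 0.492 * (b - y) + 128
v = 0.877 * (r - y) + 128
print(round(y), round(u), round(v))

pixel = np.array([[[b, g, r]]], dtype=np.uint8)
print(cv2.cvtColor(pixel, cv2.COLOR_BGR2YUV)[0, 0])   # should be very close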

Addendum: I feel bad that I picked on Dan Mašek after he answered the question perfectly and astutely by showing us the lookup-table trick. The author of the Wikipedia YUV article didn't do a bad job depicting the green-blue and green-red gradients shown in the article, but as Dan Mašek pointed out it wasn't perfect. The color images for U and V do somewhat resemble what really happens, so I'd call them exaggerated-color rather than false-color. The Wikipedia article on YCbCr is similar, though not identical.

// most of the Java program; the same approach should work in other languages with OpenCV
// (needs org.opencv.core.*, org.opencv.imgcodecs.Imgcodecs, org.opencv.imgproc.Imgproc,
//  org.opencv.highgui.HighGui and java.util.*)
// everything is duplicated so the U and V channels can be shown at the same time
Mat src = Imgcodecs.imread("shed.jpg", Imgcodecs.IMREAD_COLOR);
Mat dstA = new Mat();
Mat dstB = new Mat();

List<Mat> channelsYUVa = new ArrayList<Mat>();
List<Mat> channelsYUVb = new ArrayList<Mat>();

Imgproc.cvtColor(src, dstA, Imgproc.COLOR_BGR2YUV); // convert bgr image to yuv
Imgproc.cvtColor(src, dstB, Imgproc.COLOR_BGR2YUV);

Core.split(dstA, channelsYUVa); // isolate the channels y u v
Core.split(dstB, channelsYUVb);

// zero the two channels we do not want, isolating the one we do:
// dstA keeps only V (Y and U zeroed), dstB keeps only U (Y and V zeroed)
channelsYUVa.set(0, Mat.zeros(channelsYUVa.get(0).rows(), channelsYUVa.get(0).cols(), channelsYUVa.get(0).type()));
channelsYUVa.set(1, Mat.zeros(channelsYUVa.get(0).rows(), channelsYUVa.get(0).cols(), channelsYUVa.get(0).type()));

channelsYUVb.set(0, Mat.zeros(channelsYUVb.get(0).rows(), channelsYUVb.get(0).cols(), channelsYUVb.get(0).type()));
channelsYUVb.set(2, Mat.zeros(channelsYUVb.get(0).rows(), channelsYUVb.get(0).cols(), channelsYUVb.get(0).type()));

Core.merge(channelsYUVa, dstA); // combine channels (two of which are zero)
Core.merge(channelsYUVb, dstB);

Imgproc.cvtColor(dstA, dstA, Imgproc.COLOR_YUV2BGR); // convert to bgr so it can be displayed
Imgproc.cvtColor(dstB, dstB, Imgproc.COLOR_YUV2BGR);

HighGui.imshow("V channel", dstA); // display the image
HighGui.imshow("U channel", dstB);

HighGui.waitKey(0);
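
For anyone working from the question's Python setup, a rough translation of the same idea (not part of the original answer: zero two of the three channels, merge, and convert back to BGR) might look like this:

import cv2
import numpy as np

src = cv2.imread('shed.jpg', cv2.IMREAD_COLOR)
yuv = cv2.cvtColor(src, cv2.COLOR_BGR2YUV)

y, u, v = cv2.split(yuv)
zeros = np.zeros(yuv.shape[:2], dtype=yuv.dtype)

# keep only V (Y and U zeroed), and only U (Y and V zeroed), then convert back
only_v = cv2.cvtColor(cv2.merge([zeros, zeros, v]), cv2.COLOR_YUV2BGR)
only_u = cv2.cvtColor(cv2.merge([zeros, u, zeros]), cv2.COLOR_YUV2BGR)

cv2.imshow('V channel', only_v)
cv2.imshow('U channel', only_u)
cv2.waitKey(0)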
Tommy131313