I am very new to OpenCV and learning as I go.
I am using OpenCV 4.3; however, the following note is from the 2.4 documentation:
"If you use cvtColor with 8-bit images, the conversion will have some information lost. For many applications, this will not be noticeable but it is recommended to use 32-bit images in applications that need the full range of colors or that convert an image before an operation and then convert back."
I am using a 24-bit JPEG image and applying some minor colour correction to the Lab channels before converting back to BGR (exactly the round trip the 2.4 note warns about).
I load the image with:
//Ask user for filename and load with IMREAD_COLOR
string filename, finalFilename;
cout << "Which would you like to load? " << "\n";
cin >> filename;
cout << "What would you like to call the final image? " << "\n";
cin >> finalFilename;
Mat img = imread(filename, IMREAD_COLOR);
if (img.empty()) { cout << "Could not load " << filename << "\n"; return -1; }
//Convert to CIEL*a*b* format and split for histogram
Mat imgLab;
cvtColor(img, imgLab, COLOR_BGR2Lab);
//Checking type and depth of the image to ensure CV_8U (note: this may have to be converted so as not to lose information)
cout << "IMREAD_COLOR Loaded this image with a depth value of " << img.depth() << " and a type value of " << img.type() << "\n";
cout << "cvtColor has changed this image to one with a type value of " << imgLab.type() << "\n\n";
Then I manipulate the channels later on after assigning them to temp variables:
for (int i = 0; i < img.rows; i++)
{
for (int j = 0; j < img.cols; j++)
{
modA.at<uint8_t>(i, j) = (float)tempA.at<uint8_t>(i, j) + (0.7f * ((float)mask.at<uint8_t>(i, j))/255 * (128-((float)aBlur.at<uint8_t>(i, j))));
modB.at<uint8_t>(i, j) = (float)tempB.at<uint8_t>(i, j) + (0.7f * ((float)mask.at<uint8_t>(i, j))/255 * (128-((float)bBlur.at<uint8_t>(i, j))));
}
}
mask is a single-channel 8-bit matrix holding values from 0-255. aBlur is tempA with a Gaussian blur applied (likewise bBlur for tempB).
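One thing to watch in the loop above: assigning a float straight into a uint8_t element (the source of warning C4244) truncates, and for values outside [0, 255] the float-to-integer conversion is not even well defined in C++. Clamping first, which is roughly what OpenCV's cv::saturate_cast&lt;uint8_t&gt; does, is safer. An OpenCV-free sketch (clampToU8 is my illustrative name, not a library function):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Round to nearest and clamp into [0, 255] before narrowing,
// similar in spirit to cv::saturate_cast<uint8_t>.
uint8_t clampToU8(float v) {
    long r = std::lround(v);
    if (r < 0)   return 0;
    if (r > 255) return 255;
    return static_cast<uint8_t>(r);
}
```

In the loop that would look like `modA.at<uint8_t>(i, j) = clampToU8(...)`, or with saturate_cast directly; either way the warning goes away and out-of-range results no longer wrap.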
For some reason, after the conversion the channels still seem to span 0-255 (though I could be wrong about this; I noticed they went above 127 and never below about 100, which seems strange).
I have done a few tests; the type before and after converting to Lab stays the same (CV_8UC3). There is a compiler warning on the (float) lines about possible information loss:
Warning C4244 '=': conversion from 'float' to '_Tp', possible loss of data (project OpenCvLearning)
My question is: am I losing information through this process? I noticed my output is not as pleasant as the paper I am attempting to reimplement.
Here are the original, my implementation, and their result:
[images: colours coming out more gray than they should]
UPDATE
So I have updated my code to work with float, which allows far more distinct values (2^32 bit patterns instead of 2^8). However, when polling the data, the values are still in the 8-bit range (0-255).
I am attempting to use normalize with NORM_MINMAX, taking the old min and max of the 8-bit data and scaling to 0-1 for the 32-bit version. However, I am concerned about scaling back to 8-bit without introducing 8-bit error (how can I normalize back to 0-255 when the 32-bit matrix no longer contains an exact 0 or 1?).
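One way to sidestep that concern, assuming I've read the situation right: don't use a data-dependent min/max at all. convertTo with fixed scale factors, e.g. `img.convertTo(img32, CV_32F, 1.0/255.0)` on the way in and `img32.convertTo(img8, CV_8U, 255.0)` on the way back, uses the same mapping regardless of what values the data happens to contain, so values that started as 8-bit survive the round trip exactly. (Note also that cvtColor on CV_32F input expects BGR in [0, 1] and then produces L in [0, 100] and a/b in roughly [-127, 127], which would explain float data that still "looks" 8-bit if the 1/255 scale was skipped.) An OpenCV-free sketch of why the fixed-scale round trip is exact:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Fixed-scale conversion: 8-bit -> [0, 1] float and back.
// Mirrors convertTo(dst, CV_32F, 1.0/255.0) / convertTo(dst, CV_8U, 255.0);
// no min/max of the data is involved, so the mapping never shifts.
float   to01(uint8_t v) { return v / 255.0f; }
uint8_t from01(float v) { return static_cast<uint8_t>(std::lround(v * 255.0f)); }

// Every original 8-bit value survives the round trip exactly: the float
// rounding error in v/255 is far smaller than half a quantization step.
bool roundTripIsExact() {
    for (int v = 0; v <= 255; ++v)
        if (from01(to01(static_cast<uint8_t>(v))) != v) return false;
    return true;
}
```

The only 8-bit error left is the single final rounding of whatever corrections you applied in float, which is unavoidable and usually invisible.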