Here is the Python code:
import cv2
import numpy as np

image = cv2.imread(img)                          # img is the image file path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # single-channel grayscale
image1 = np.zeros_like(image)                    # 3-channel output, same shape as input
image1[:, :, 0] = gray                           # copy the grayscale plane
image1[:, :, 1] = gray                           # into all three channels
image1[:, :, 2] = gray
Here is the C++ code:
Mat img1 = imread(fn[i]);
Mat greyMat1, greyMat2, greyMat3;
cvtColor(img1, greyMat1, COLOR_BGR2GRAY);    // same BGR -> gray conversion as in Python
cvtColor(img1, greyMat2, COLOR_BGR2GRAY);
cvtColor(img1, greyMat3, COLOR_BGR2GRAY);
Mat out;
Mat in[3] = { greyMat1, greyMat2, greyMat3 };
merge(in, 3, out);                           // stack the gray plane into 3 channels
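For reference, a more compact way to build the same 3-channel grayscale image in C++ is to convert once and then replicate the single gray plane. This is only a sketch (the function name toThreeChannelGray is mine), but COLOR_GRAY2BGR copies the gray plane into all three output channels, which is exactly what the NumPy assignments above do, so the result should be pixel-identical:

#include <opencv2/opencv.hpp>
using namespace cv;

// Sketch: convert to grayscale once, then expand back to 3 identical channels.
Mat toThreeChannelGray(const Mat& bgr)
{
    Mat gray, gray3;
    cvtColor(bgr, gray, COLOR_BGR2GRAY);    // single-channel grayscale
    cvtColor(gray, gray3, COLOR_GRAY2BGR);  // replicate into 3 identical channels
    return gray3;
}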
When I feed my model with the grayscale images converted in C++, I don't get the same confidence values that the NumPy (Python) pipeline gives me. They must be identical, because we are about to convert the model to a DLL, so all image preprocessing has to be done in C++ and must produce the same confidences as the Python code. How can I reproduce this Python preprocessing in C++ without changing the confidence values?
Confidence (from the OpenCV documentation):
net.setInput(blob);
Mat prob = net.forward();
Point classIdPoint;
double confidence;   // confidence
minMaxLoc(prob.reshape(1, 1), 0, &confidence, 0, &classIdPoint);
int classId = classIdPoint.x;
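If the 3-channel grayscale images themselves already match, the other place where the two pipelines can diverge is in how the blob is built. Below is a minimal sketch of the C++ side, assuming a classification model loaded with cv::dnn::readNet; the file names my_model.onnx and test.jpg and the blobFromImage parameters (scale factor, input size, mean, swapRB, crop) are placeholders of mine and must be set to exactly the same values passed to cv2.dnn.blobFromImage on the Python side:

#include <cstdio>
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
using namespace cv;

int main()
{
    // Hypothetical model file: use the same model as in Python.
    dnn::Net net = dnn::readNet("my_model.onnx");

    Mat img = imread("test.jpg");
    Mat gray, gray3;
    cvtColor(img, gray, COLOR_BGR2GRAY);
    cvtColor(gray, gray3, COLOR_GRAY2BGR);   // 3 identical channels, as in the Python code

    // The scale factor, size, mean, swapRB and crop below are assumptions:
    // they must match the arguments given to cv2.dnn.blobFromImage in Python.
    Mat blob = dnn::blobFromImage(gray3, 1.0 / 255.0, Size(224, 224),
                                  Scalar(), /*swapRB=*/false, /*crop=*/false);

    net.setInput(blob);
    Mat prob = net.forward();

    Point classIdPoint;
    double confidence;
    minMaxLoc(prob.reshape(1, 1), 0, &confidence, 0, &classIdPoint);
    int classId = classIdPoint.x;
    printf("classId=%d confidence=%.4f\n", classId, confidence);
    return 0;
}

With identical preprocessing parameters on both sides, the blob fed to net.forward() should be the same, and the confidence values reported by minMaxLoc should agree between Python and C++.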