
I'm facing an issue and would like some input from the community on how to improve the disparity map. I'm following this tutorial for calculating the disparity map between two images. The code I have is as follows:

import cv2
import numpy as np
import sys
from matplotlib import pyplot as plt

num_disparities = 64  # number of disparities to check
block = 9  # block size to match

def preprocess_frame(path):
    image = cv2.imread(path, 0)
    image = cv2.equalizeHist(image)
    image = cv2.GaussianBlur(image, (5, 5), 0)
    return image

def calculate_disparity_matrix(args):
    left_image = preprocess_frame(args[1])
    right_image = preprocess_frame(args[2])
    rows, cols = left_image.shape

    kernel = np.ones([block, block]) / block  # box filter used to sum differences over each block (the scale does not affect the argmin below)

    disparity_maps = np.zeros(
        [left_image.shape[0], left_image.shape[1], num_disparities])
    for d in range(0, num_disparities):
        # shift image
        translation_matrix = np.float32([[1, 0, d], [0, 1, 0]])
        shifted_image = cv2.warpAffine(
            right_image, translation_matrix,
            (right_image.shape[1], right_image.shape[0]))
        # absolute differences between the left image and the shifted right image
        SAD = abs(np.float32(left_image) - np.float32(shifted_image))
        # convolve with kernel and find SAD at each point
        filtered_image = cv2.filter2D(SAD, -1, kernel)
        disparity_maps[:, :, d] = filtered_image

    disparity = np.argmin(disparity_maps, axis=2)
    disparity = np.uint8(disparity * 255 / num_disparities)
    disparity = cv2.equalizeHist(disparity)
    plt.imshow(disparity, cmap='gray', vmin=0, vmax=255)
    plt.show()


def calculate_disparity_inbuilt(args):
    left_image = preprocess_frame(args[1])
    right_image = preprocess_frame(args[2])
    rows, cols = left_image.shape
    stereo = cv2.StereoBM_create(numDisparities=num_disparities,
                                 blockSize=block)
    # compute() returns a 16-bit fixed-point disparity map (values are disparity * 16)
    disparity = stereo.compute(left_image, right_image)
    plt.imshow(disparity, cmap='gray', vmin=0, vmax=255)
    plt.show()
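
For context, both functions are called from a small driver roughly like this (just a sketch of how I run the script; the left and right image paths are passed on the command line, which is why args[1] and args[2] are used above):

if __name__ == "__main__":
    # sys.argv = [script_name, left_image_path, right_image_path]
    calculate_disparity_matrix(sys.argv)
    calculate_disparity_inbuilt(sys.argv)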

The problem is that the output I get from OpenCV's built-in function looks hardly anything like the one from my implementation. I was expecting at least some similarity between the two. Is this expected, or am I doing something wrong here?

[Image: Implemented Algorithm]   [Image: OpenCV Algorithm]
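
For reference, stereo.compute() returns the disparities as 16-bit fixed-point values scaled by 16 (negative where no match is found), so putting the StereoBM result on the same 0-255 range as my own map before plotting would look roughly like this (a sketch reusing the names from calculate_disparity_inbuilt above):

raw = stereo.compute(left_image, right_image).astype(np.float32) / 16.0
raw[raw < 0] = 0  # negative values mark pixels with no valid match
display = np.uint8(raw * 255 / num_disparities)
plt.imshow(display, cmap='gray', vmin=0, vmax=255)
plt.show()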

  • Why would you expect them to look the same? These are completely different algorithms. The stereo block matching algorithm does more than just compute the min L1 distance across the disparities. I mean, they both look like disparity maps. What exactly did you expect and why? – alkasm Feb 04 '20 at 05:51

0 Answers