Introduction

I want to generate, using Python, around 500'000 HDR images from 500'000 stacks of images, where each stack consists of multiple images taken with different exposure times.

This is easy to do on CPU using the opencv package (see my code below). However, my CPU only has so many cores, and I would currently need about 6 days of computation to finish the HDR generation for all images. The task seems highly parallelisable, so I would like to perform the calculations on my (NVIDIA) graphics card. What is the best way to do this?

My code

Here is the code I use to generate HDR images (I have implemented different algorithms, but I am only interested in Debevec):

import cv2 as cv
from abc import ABC, abstractmethod

class GeneratorHDR(ABC):
    @abstractmethod
    def get_images_and_exposure_times(self, name):
        """
        This function returns the images and the exposure times for each image stack.
        :param name: Name of the image stack.
        :return: Stack of images and array of exposure times for these images.
        """
        pass

    def generate_single_hdr(self, name, type="debevec", gamma_correction=1, tonemapping=False):
        """
        This function generates an HDR image from the batch named name.
        :param name: Name of batch.
        :param type: Which algorithm to choose. Available options are Mertens, Robertson and
        Debevec.
        :param gamma_correction: Do a gamma correction with parameter gamma before returning the array. If all of the
        pixels have values in the interval [0, b], then each pixel x gets mapped to (x/b)^gamma * b.
        Remark: it does not matter whether you rescale first and then do the gamma correction or vice versa. Suppose
        we rescale by a factor c. Rescaling second gives x -> (x/b)^gamma * b -> (x/b)^gamma * b * c, while rescaling
        first gives x -> x * c -> (x*c/(b*c))^gamma * b * c = (x/b)^gamma * b * c, so the result is the same.
        :param tonemapping: Whether to apply tonemapping.
        :return: HDR image.
        """
        images, times = self.get_images_and_exposure_times(name)

        if type.lower() == "mertens":
            merge_mertens = cv.createMergeMertens()
            hdr = merge_mertens.process(images)
        elif type.lower() == "robertson":
            merge_robertson = cv.createMergeRobertson()
            hdr = merge_robertson.process(images, times=times.copy())
        elif type.lower() == "debevec":
            calibrate = cv.createCalibrateDebevec()
            response = calibrate.process(images, times)

            merge_debevec = cv.createMergeDebevec()
            hdr = merge_debevec.process(images, times.copy(), response)
        else:
            raise ValueError("Invalid HDR type")

        if tonemapping:
            tonemap = cv.createTonemap(gamma=2.2)
            hdr = tonemap.process(hdr.copy())

        if gamma_correction != 1:
            # The functionality would be the same without this if statement, but not the performance.
            hdr = (hdr/hdr.max()) ** gamma_correction * hdr.max()

        return hdr
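
A concrete subclass only needs to implement get_images_and_exposure_times. For context, here is a minimal sketch of such a subclass; the directory layout and file names are made up purely for illustration, and the important detail is that the OpenCV calibrate/merge functions expect the times as a float32 numpy array:

import os
import numpy as np
import cv2 as cv

class DiskGeneratorHDR(GeneratorHDR):
    # Hypothetical example: each stack is a directory of files whose names
    # encode the exposure time in seconds, e.g. "0.033.jpg".
    def get_images_and_exposure_times(self, name):
        files = sorted(os.listdir(name))
        images = [cv.imread(os.path.join(name, f)) for f in files]
        # OpenCV's calibrate/merge functions expect times as float32.
        times = np.array([float(os.path.splitext(f)[0]) for f in files],
                         dtype=np.float32)
        return images, times

hdr = DiskGeneratorHDR().generate_single_hdr("stack_00001", type="debevec")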

Using Python's multiprocessing package makes it easy to parallelise this task on CPU:

import multiprocessing

def save_hdr_image(batch):
    # Generate the HDR image for this batch and write it to disk.
    pass

if __name__ == "__main__":
    # all_batches_here is the iterable of all stack names.
    with multiprocessing.Pool(processes=5) as pool:
        for _ in pool.imap_unordered(save_hdr_image, all_batches_here):
            pass

What should I do to port the calculations to the GPU?
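
For reference, the Debevec merge itself is just a per-pixel weighted average in log-radiance space, so I would expect at least the merge step to map well onto a GPU (the calibration could stay on the CPU, since the response curve only needs to be estimated once per camera). Below is a rough, untested sketch of the kind of computation I have in mind, assuming CuPy is installed; the function name and the triangle weighting are my own choices, and the response curve would still come from cv.createCalibrateDebevec on the CPU:

import numpy as np
import cupy as cp  # assumption: CuPy matching the local CUDA toolkit

def merge_debevec_gpu(images, times, response):
    # images:   list of HxWx3 uint8 arrays, one per exposure
    # times:    numpy array of exposure times
    # response: (256, 1, 3) inverse CRF from cv.createCalibrateDebevec()
    # Triangle ("hat") weights over pixel values 0..255, as in Debevec & Malik.
    z = cp.arange(256, dtype=cp.float32)
    weights = cp.minimum(z, 255.0 - z) + 1.0  # +1 avoids all-zero weight sums

    # Work in log space; the small epsilon guards against log(0).
    log_response = cp.log(cp.asarray(response).reshape(256, 3) + 1e-8)
    log_times = np.log(np.asarray(times, dtype=np.float32))

    num = cp.zeros(images[0].shape, dtype=cp.float32)
    den = cp.zeros(images[0].shape, dtype=cp.float32)
    channels = cp.arange(3)
    for img, log_t in zip(images, log_times):
        zv = cp.asarray(img)            # move one exposure to the GPU
        w = weights[zv]                 # per-pixel weight
        g = log_response[zv, channels]  # per-pixel, per-channel log response
        num += w * (g - log_t)
        den += w
    # Weighted average of log radiance, then back to linear and to the host.
    return cp.asnumpy(cp.exp(num / den))

Even if something like this works for the merge, I would still prefer an existing, tested library over my own sketch.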

  • The time to wait for an answer here and to research/implement a good solution will most likely exceed the 6 days you need to run it on your CPU. Plus, asking for software/library recommendations, which this basically boils down to, is off-topic. – Piglet May 12 '20 at 13:38
  • @Piglet Is there a Stack Exchange community you think might be better suited for this question? – Maximilian Janisch May 12 '20 at 13:42
  • "What should I do to port the calculations to the GPU" -- nothing. You can't. There are hundreds or thousands of LOC in the OpenCV routines you call. You would need to investigate those individually and see what parts of them could potentially be accelerated using the GPU and then write that code. Rinse and repeart. Six days wait sounds like an easy choice. – talonmies May 12 '20 at 13:48
  • @talonmies OK, I guess. But in my opinion it is disappointing that there is no way to do HDR image generation using a GPU... – Maximilian Janisch May 12 '20 at 14:13
  • Obviously there are ways to do at least part of what you are doing with a GPU. But not by waving a magic wand and turning some extremely high-level Python built on top of OpenCV into a GPU program. It doesn't work like that. – talonmies May 12 '20 at 14:41
  • @talonmies Well, of course I was hoping that somebody else had already done it for me with a library (it doesn't seem so unusual to me to generate HDR images using a GPU). But if I understand correctly, what you are saying suggests that this is not the case? – Maximilian Janisch May 12 '20 at 21:19

0 Answers