I have trained a convolutional neural network (CNN) to detect whether an object of interest is present in a given image patch.
Now, given a large image, I am trying to locate all occurrences of the object in a sliding-window fashion by applying my CNN model to the patch surrounding each pixel in the image. However, this is very slow.
My test images are 512 x 512. For my Caffe net, the test batch size is 1024 and the patch size is 65 x 65 x 1.
I tried applying the net to a batch of patches (of size test_batch_size) instead of a single patch at a time, but it is still slow.
Below is my current solution. I would appreciate any suggestions other than down-sampling my test image to speed this up.
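As an aside on the patch-gathering step: the double loop over pixel centres can be vectorized with NumPy's `sliding_window_view` (NumPy >= 1.20). This is a minimal sketch with toy sizes standing in for the real 512 x 512 image and 65 x 65 patch:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Toy stand-ins for the real image and patch size (assumptions).
image = np.arange(100.0).reshape(10, 10)
patch_size = 5

# One view of every (patch_size x patch_size) window, no data copied.
# Shape: (rows - patch_size + 1, cols - patch_size + 1, patch_size, patch_size)
windows = sliding_window_view(image, (patch_size, patch_size))
print(windows.shape)   # (6, 6, 5, 5)

# Flatten to a batch of patches for the network.
patches = windows.reshape(-1, patch_size, patch_size)
print(patches.shape)   # (36, 5, 5)
```

Note that `windows` is a strided view into `image`, so extracting all patches costs no extra memory until `reshape` copies them into a contiguous batch.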
Current solution that is very slow:
import time

import numpy as np
import matplotlib.pyplot as plt
import skimage.io


def detectObjects(net, input_file, output_file):
    # read input image
    inputImage = plt.imread(input_file)

    # get test_batch_size and patch_size used for the cnn net
    test_batch_size = net.blobs['data'].data.shape[0]
    patch_size = net.blobs['data'].data.shape[2]

    # collect all patches
    w = patch_size // 2
    num_patches = (inputImage.shape[0] - patch_size + 1) * \
                  (inputImage.shape[1] - patch_size + 1)
    patches = np.zeros((patch_size, patch_size, num_patches))
    patch_indices = np.zeros((num_patches, 2), dtype='int64')
    count = 0
    # valid patch centres run from w to (shape - w - 1), inclusive
    for i in range(w, inputImage.shape[0] - w):
        for j in range(w, inputImage.shape[1] - w):
            # store patch center index
            patch_indices[count, :] = [i, j]
            # store patch
            patches[:, :, count] = \
                inputImage[(i - w):(i + w + 1), (j - w):(j + w + 1)]
            count += 1
    print("Extracted %s patches" % num_patches)

    # classify patches using the cnn and write the result to the output image
    outputImage = np.zeros_like(inputImage)
    outputImageFlat = np.ravel(outputImage)
    # pad so the number of patches is a multiple of the batch size
    pad_w = (-num_patches) % test_batch_size
    patches = np.pad(patches, ((0, 0), (0, 0), (0, pad_w)), 'constant')
    patch_indices = np.pad(patch_indices, ((0, pad_w), (0, 0)), 'constant')

    start_time = time.time()
    for i in range(0, num_patches, test_batch_size):
        # get current batch of patches
        cur_pind = patch_indices[i:i + test_batch_size, :]
        cur_patches = patches[:, :, i:i + test_batch_size]
        # reshape from (H, W, N) to caffe's (N, 1, H, W) layout
        cur_patches = np.expand_dims(cur_patches, 0)
        cur_patches = np.rollaxis(cur_patches, 3)
        # apply cnn on current batch of patches
        net.blobs['data'].data[...] = cur_patches
        output = net.forward()
        prob_obj = output['prob'][:, 1]
        if i + test_batch_size > num_patches:
            # drop the padded part of the last batch
            num_valid = num_patches - i
            prob_obj = prob_obj[0:num_valid]
            cur_pind = cur_pind[0:num_valid, :]
        # scatter the probabilities back to the patch centres
        cur_pind_lin = np.ravel_multi_index((cur_pind[:, 0],
                                             cur_pind[:, 1]),
                                            outputImage.shape)
        outputImageFlat[cur_pind_lin] = prob_obj
    end_time = time.time()
    print('Took %s seconds' % (end_time - start_time))

    # save output
    skimage.io.imsave(output_file, outputImage * 255.0)
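For clarity, the pad-to-a-batch-multiple and scatter-write steps in the function above can be exercised in isolation; the sizes below are toy values, not the real net's:

```python
import numpy as np

# How many dummy patches are needed to fill the last batch.
num_patches, batch = 10, 4
pad_w = (-num_patches) % batch
print(pad_w)  # 2

# Scatter per-patch probabilities back to their pixel centres
# via linear indices into the flattened output image.
out = np.zeros((4, 4))
centres = np.array([[1, 1], [2, 3]])
probs = np.array([0.9, 0.1])
flat = np.ravel_multi_index((centres[:, 0], centres[:, 1]), out.shape)
out.ravel()[flat] = probs
print(out[1, 1], out[2, 3])  # 0.9 0.1
```

The `(-n) % b` form also gives 0 (rather than a full extra batch) when the patch count is already a multiple of the batch size.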
I was hoping that with the lines
net.blobs['data'].data[...] = cur_patches
output = net.forward()
Caffe would classify all the patches in cur_patches in parallel on the GPU. I wonder why it is still slow.