I'm working on multiple-image stitching in Visual Studio 2012, C++. I've modified stitching_detailed.cpp to my requirements and it produces good-quality results. The problem is that it takes too long to execute: around 110 seconds for 10 images.
Here's where it takes most of the time:
1) Pairwise matching - Takes 55 seconds for 10 images! I'm using ORB to find feature points. Here's the code:
vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(false, 0.35);   // try_use_gpu = false, match_conf = 0.35
matcher(features, pairwise_matches);          // matches every image against every other image
matcher.collectGarbage();
I tried using this code, as I already know the sequence of images:
vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(false, 0.35);
// Only allow matching between consecutive images in the sequence
Mat matchMask(features.size(), features.size(), CV_8U, Scalar(0));
for (int i = 0; i < num_images - 1; ++i)
    matchMask.at<uchar>(i, i + 1) = 1;   // the mask is CV_8U, so index it as uchar
matcher(features, pairwise_matches, matchMask);
matcher.collectGarbage();
It definitely reduces the time (to 18 seconds), but it does not produce the required results. Only 6 images get stitched; the last 4 are left out because the feature points of image 6 and image 7 somehow don't match, so the chain breaks there.
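A possible workaround (just a sketch, I haven't verified it fixes the 6-7 break) is to widen the mask so each image is matched against its next few neighbours instead of only the immediate one; a single weak consecutive pair then no longer breaks the chain, while matching stays far cheaper than the full pairwise case. The overlap value here is an assumed tuning parameter:

vector<MatchesInfo> pairwise_matches;
BestOf2NearestMatcher matcher(false, 0.35);
// Allow matching of image i with images i+1 .. i+overlap (overlap = 2 is an assumption)
int overlap = 2;
Mat matchMask(features.size(), features.size(), CV_8U, Scalar(0));
for (int i = 0; i < num_images; ++i)
    for (int j = i + 1; j <= i + overlap && j < num_images; ++j)
        matchMask.at<uchar>(i, j) = 1;
matcher(features, pairwise_matches, matchMask);
matcher.collectGarbage();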
2) Compositing - Takes 38 seconds for 10 images! Here's the code:
for (int img_idx = 0; img_idx < num_images; ++img_idx)
{
    printf("Compositing image #%d\n", indices[img_idx] + 1);

    // Read image and resize it if necessary
    full_img = imread(img_names[img_idx]);

    Mat K;
    cameras[img_idx].K().convertTo(K, CV_32F);

    // Warp the current image
    warper->warp(full_img, K, cameras[img_idx].R, INTER_LINEAR, BORDER_REFLECT, img_warped);

    // Warp the current image mask
    mask.create(full_img.size(), CV_8U);
    mask.setTo(Scalar::all(255));
    warper->warp(mask, K, cameras[img_idx].R, INTER_NEAREST, BORDER_CONSTANT, mask_warped);

    // Compensate exposure
    compensator->apply(img_idx, corners[img_idx], img_warped, mask_warped);

    img_warped.convertTo(img_warped_s, CV_16S);
    img_warped.release();
    full_img.release();
    mask.release();

    dilate(masks_warped[img_idx], dilated_mask, Mat());
    resize(dilated_mask, seam_mask, mask_warped.size());
    mask_warped = seam_mask & mask_warped;

    // Blend the current image
    blender->feed(img_warped_s, mask_warped, corners[img_idx]);
}
Mat result, result_mask;
blender->blend(result, result_mask);
The original image resolution is 4160×3120. I'm not using compression in the compositing step because it reduces quality; I've used compressed images in the rest of the code.
As you can see, I've already modified the code and reduced the time, but I still want to cut it as much as possible.
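For reference, if quality permits, the original stitching_detailed.cpp controls the compositing resolution with a compose_megapix / compose_scale parameter and rescales the camera intrinsics to match. Here's a sketch of that logic; compose_megapix = 3.0 is just an assumed value (-1 would keep full resolution), and the variables (work_scale, warped_image_scale, warper_creator, full_img_sizes, corners, sizes) are the ones the sample already has:

// Sketch of the compose_megapix logic from stitching_detailed.cpp
double compose_megapix = 3.0;   // assumed value; -1 means full resolution
double compose_scale = std::min(1.0, std::sqrt(compose_megapix * 1e6 / full_img.size().area()));
double compose_work_aspect = compose_scale / work_scale;

// Rescale the warper and the camera intrinsics once, before the compositing loop
warped_image_scale *= static_cast<float>(compose_work_aspect);
warper = warper_creator->create(warped_image_scale);
for (int i = 0; i < num_images; ++i)
{
    cameras[i].focal *= compose_work_aspect;
    cameras[i].ppx *= compose_work_aspect;
    cameras[i].ppy *= compose_work_aspect;

    // Recompute the warped ROI (corner and size) at the new scale
    Size sz = full_img_sizes[i];
    sz.width = cvRound(sz.width * compose_scale);
    sz.height = cvRound(sz.height * compose_scale);
    Mat K;
    cameras[i].K().convertTo(K, CV_32F);
    Rect roi = warper->warpRoi(sz, K, cameras[i].R);
    corners[i] = roi.tl();
    sizes[i] = roi.size();
}

// Inside the compositing loop, downscale each input before warping it
if (std::abs(compose_scale - 1) > 1e-1)
    resize(full_img, img, Size(), compose_scale, compose_scale);
else
    img = full_img;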
3) Feature finding - with ORB. Takes 10 seconds for 10 images; finds at most around 1530 feature points per image.
55 + 38 + 10 = 103 seconds, plus 7 seconds for the rest of the code, makes 110 seconds in total.
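For feature finding, the usual stitching_detailed.cpp trick is to detect features on a downscaled "work" copy of each image. A sketch of that idea (work_megapix = 0.6 is an assumed value, and the snippet assumes the cv::detail types from the sample):

// Sketch: detect ORB features on a downscaled copy of each image.
// Smaller work_megapix is faster but yields fewer/poorer features.
double work_megapix = 0.6;
OrbFeaturesFinder finder;
vector<ImageFeatures> features(num_images);
for (int i = 0; i < num_images; ++i)
{
    Mat full_img = imread(img_names[i]);
    double work_scale = std::min(1.0, std::sqrt(work_megapix * 1e6 / full_img.size().area()));
    Mat img;
    resize(full_img, img, Size(), work_scale, work_scale);
    finder(img, features[i]);
    features[i].img_idx = i;
}
finder.collectGarbage();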
When I ran this code on Android, it used almost the entire RAM of the smartphone. How can I reduce both the running time and the memory consumption on an Android device? (The device I used has 2 GB of RAM.)
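One memory-related idea (an assumption on my part, not something I've profiled on the phone): the multi-band blender keeps Laplacian pyramids for the whole panorama, so switching to the feather blender, or to no blending at all, should lower peak RAM at the cost of more visible seams:

// Sketch: a lighter blender to cut peak memory on the device.
// FEATHER keeps a single accumulation image instead of pyramids; Blender::NO is cheaper still.
Ptr<Blender> blender = Blender::createDefault(Blender::FEATHER, false /*try_gpu*/);
blender->prepare(corners, sizes);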
I've already optimized the rest of the code. Any help is much appreciated!
EDIT 1: I used image compression in the compositing step, and its time went down from 38 seconds to 16 seconds. I also managed to reduce the time spent in the rest of the code.
So now it's down from 110 to 85 seconds. Please help me reduce the time for pairwise matching; I have no clue how to speed it up!
EDIT 2: I found the pairwise-matching code in matchers.cpp and wrote my own matching function in the main code to optimize the time. For the compositing step, I used compression up to the point where the final image doesn't lose clarity. For feature finding, I scale the images down and detect features at the reduced scale. Now I can stitch up to 50 images easily.
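In case it helps anyone else, here's roughly the kind of function I mean: match only consecutive pairs directly with a brute-force Hamming matcher and fill the same MatchesInfo entries the rest of the pipeline expects. This is a sketch, not my exact code; the ratio-test threshold is an assumption, and the confidence formula follows the heuristic used in matchers.cpp:

#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/stitching/detail/matchers.hpp>

// Hypothetical helper: match only consecutive image pairs instead of all N*N pairs.
// Assumes ORB (binary) descriptors, hence NORM_HAMMING.
static void matchConsecutive(const std::vector<cv::detail::ImageFeatures> &features,
                             std::vector<cv::detail::MatchesInfo> &pairwise_matches)
{
    int n = static_cast<int>(features.size());
    pairwise_matches.assign(n * n, cv::detail::MatchesInfo());

    cv::BFMatcher matcher(cv::NORM_HAMMING);
    for (int i = 0; i + 1 < n; ++i)
    {
        cv::detail::MatchesInfo &info = pairwise_matches[i * n + (i + 1)];
        info.src_img_idx = i;
        info.dst_img_idx = i + 1;

        // Ratio-test matching of ORB descriptors (0.75 is an assumed threshold)
        std::vector<std::vector<cv::DMatch> > knn;
        matcher.knnMatch(features[i].descriptors, features[i + 1].descriptors, knn, 2);
        for (size_t k = 0; k < knn.size(); ++k)
            if (knn[k].size() == 2 && knn[k][0].distance < 0.75f * knn[k][1].distance)
                info.matches.push_back(knn[k][0]);
        if (info.matches.size() < 6)
            continue;

        // Estimate a homography to get the inlier mask and a confidence value
        std::vector<cv::Point2f> src, dst;
        for (size_t k = 0; k < info.matches.size(); ++k)
        {
            src.push_back(features[i].keypoints[info.matches[k].queryIdx].pt);
            dst.push_back(features[i + 1].keypoints[info.matches[k].trainIdx].pt);
        }
        info.H = cv::findHomography(src, dst, cv::RANSAC, 3.0, info.inliers_mask);
        if (info.H.empty())
            continue;
        info.num_inliers = cv::countNonZero(info.inliers_mask);
        // Same heuristic as matchers.cpp: inliers vs. expected number of matches
        info.confidence = info.num_inliers / (8 + 0.3 * info.matches.size());

        // Fill the symmetric entry (i+1, i) with swapped indices and inverted H,
        // as the downstream estimators expect
        cv::detail::MatchesInfo &dual = pairwise_matches[(i + 1) * n + i];
        dual = info;
        dual.src_img_idx = i + 1;
        dual.dst_img_idx = i;
        dual.H = info.H.inv();
        for (size_t k = 0; k < dual.matches.size(); ++k)
            std::swap(dual.matches[k].queryIdx, dual.matches[k].trainIdx);
    }
}

It can then be called in place of the matcher, e.g. matchConsecutive(features, pairwise_matches), before leaveBiggestComponent() and the camera estimation.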