
I recently started developing an app in Android Studio and just finished writing the code. The accuracy I get is more than satisfactory, but the time the device takes is a lot. I followed some tutorials on how to monitor performance in Android Studio and saw that one small part of my code takes 6 seconds, which is half the time my app needs to display the entire result. I have seen a lot of posts such as Java OpenCV - extracting good matches from knnMatch and OpenCV filtering ORB matches on OpenCV/JavaCV, but haven't come across anyone asking about this problem. The OpenCV tutorial at http://docs.opencv.org/2.4/doc/tutorials/features2d/feature_homography/feature_homography.html is a good reference, but the RANSAC/homography step in the Java bindings takes different arguments for the keypoints compared to C++.
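
For reference, in the Java bindings the matched keypoints first have to be packed into MatOfPoint2f objects before calling Calib3d.findHomography, rather than passing keypoint vectors directly as in the C++ tutorial. A minimal sketch of just that call (the variable names here are placeholders, not from my app):

    // Sketch only: objPoints/scenePoints are Lists of org.opencv.core.Point
    // collected from the good matches (placeholder names).
    MatOfPoint2f objMat = new MatOfPoint2f();
    objMat.fromList(objPoints);
    MatOfPoint2f sceneMat = new MatOfPoint2f();
    sceneMat.fromList(scenePoints);
    // RANSAC with a 2.0 pixel reprojection threshold
    Mat homography = Calib3d.findHomography(objMat, sceneMat, Calib3d.RANSAC, 2.0);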

Here is my code:

    public Mat ORB_detection (Mat Scene_image, Mat Object_image){
    /*This function is used to find the reference card in the captured image with the help of
    * the reference card saved in the application
    * Inputs - Captured image (Scene_image), Reference Image (Object_image)*/
    FeatureDetector orb = FeatureDetector.create(FeatureDetector.DYNAMIC_ORB);
    /*1.a Keypoint Detection for Scene Image*/
    //convert input to grayscale
    channels = new ArrayList<Mat>(3);
    Core.split(Scene_image, channels);
    Scene_image = channels.get(0);
    //Sharpen the image
    Scene_image = unsharpMask(Scene_image);
    MatOfKeyPoint keypoint_scene = new MatOfKeyPoint();
    //Convert image to eight bit, unsigned char
    Scene_image.convertTo(Scene_image, CvType.CV_8UC1);
    orb.detect(Scene_image, keypoint_scene);
    channels.clear();

    /*1.b Keypoint Detection for Object image*/
    //convert input to grayscale
    Core.split(Object_image,channels);
    Object_image = channels.get(0);
    channels.clear();
    MatOfKeyPoint keypoint_object = new MatOfKeyPoint();
    Object_image.convertTo(Object_image, CvType.CV_8UC1);
    orb.detect(Object_image, keypoint_object);

    //2. Calculate the descriptors/feature vectors
    //Initialize orb descriptor extractor
    DescriptorExtractor orb_descriptor = DescriptorExtractor.create(DescriptorExtractor.ORB);
    Mat Obj_descriptor = new Mat();
    Mat Scene_descriptor = new Mat();
    orb_descriptor.compute(Object_image, keypoint_object, Obj_descriptor);
    orb_descriptor.compute(Scene_image, keypoint_scene, Scene_descriptor);

    //3. Matching the descriptors using Brute-Force
    DescriptorMatcher brt_frc = DescriptorMatcher.create(DescriptorMatcher.BRUTEFORCE_HAMMING);
    MatOfDMatch matches = new MatOfDMatch();
    brt_frc.match(Obj_descriptor, Scene_descriptor, matches);

    //4. Calculating the max and min distance between Keypoints
    float max_dist = 0,min_dist = 100,dist =0;
    DMatch[] for_calculating;
    for_calculating = matches.toArray();
    for( int i = 0; i < Obj_descriptor.rows(); i++ )
    {   dist = for_calculating[i].distance;
        if( dist < min_dist ) min_dist = dist;
        if( dist > max_dist ) max_dist = dist;
    }

    System.out.print("\nInterval min_dist: " + min_dist + ", max_dist:" + max_dist);
    //-- Use only "good" matches (i.e. whose distance is less than 2.5*min_dist)
    LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
    double ratio_dist=2.5;
    ratio_dist = ratio_dist*min_dist;
    int i, iter = matches.toArray().length;
    matches.release();

    for(i = 0;i < iter; i++){
        if (for_calculating[i].distance <=ratio_dist)
            good_matches.addLast(for_calculating[i]);
    }
    System.out.print("\n done Good Matches");

    /*Necessary type conversion for drawing matches
    MatOfDMatch goodMatches = new MatOfDMatch();
    goodMatches.fromList(good_matches);
    Mat matches_scn_obj = new Mat();
    Features2d.drawKeypoints(Object_image, keypoint_object, new Mat(Object_image.rows(), keypoint_object.cols(), keypoint_object.type()), new Scalar(0.0D, 0.0D, 255.0D), 4);
    Features2d.drawKeypoints(Scene_image, keypoint_scene, new Mat(Scene_image.rows(), Scene_image.cols(), Scene_image.type()), new Scalar(0.0D, 0.0D, 255.0D), 4);
    Features2d.drawMatches(Object_image, keypoint_object, Scene_image, keypoint_scene, goodMatches, matches_scn_obj);
    SaveImage(matches_scn_obj,"drawing_good_matches.jpg");
    */

    if(good_matches.size() <= 6){
        ph_value = "7";
        System.out.println("Wrong Detection");
        return Scene_image;
    }
    else{
        //5. RANSAC thresholding for finding the optimum homography
        Mat outputImg = new Mat();
        LinkedList<Point> objList = new LinkedList<Point>();
        LinkedList<Point> sceneList = new LinkedList<Point>();

        List<org.opencv.core.KeyPoint> keypoints_objectList = keypoint_object.toList();
        List<org.opencv.core.KeyPoint> keypoints_sceneList = keypoint_scene.toList();

        //getting the object and scene points from good matches
        for(i = 0; i<good_matches.size(); i++){
            objList.addLast(keypoints_objectList.get(good_matches.get(i).queryIdx).pt);
            sceneList.addLast(keypoints_sceneList.get(good_matches.get(i).trainIdx).pt);
        }
        good_matches.clear();
        MatOfPoint2f obj = new MatOfPoint2f();
        obj.fromList(objList);
        objList.clear();

        MatOfPoint2f scene = new MatOfPoint2f();
        scene.fromList(sceneList);
        sceneList.clear();

        float RANSAC_dist=(float)2.0;
        Mat hg = Calib3d.findHomography(obj, scene, Calib3d.RANSAC, RANSAC_dist);

        for(i = 0;i<hg.cols();i++) {
            String tmp = "";
            for ( int j = 0; j < hg.rows(); j++) {

                Point val = new Point(hg.get(j, i));
                tmp= tmp + val.x + " ";
            }
        }

        Mat scene_image_transformed_color = new Mat();
        Imgproc.warpPerspective(original_image, scene_image_transformed_color, hg, Object_image.size(), Imgproc.WARP_INVERSE_MAP);
        processing(scene_image_transformed_color, template_match);

        return outputImg;
    }
    }

and this is the part that takes 6 seconds at runtime:

    LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
    double ratio_dist=2.5;
    ratio_dist = ratio_dist*min_dist;
    int i, iter = matches.toArray().length;
    matches.release();

    for(i = 0;i < iter; i++){
        if (for_calculating[i].distance <=ratio_dist)
            good_matches.addLast(for_calculating[i]);
    }
    System.out.print("\n done Good Matches");

I was thinking maybe I could rewrite this part of the code in C++ using the NDK, but I just wanted to be sure that the language is the problem and not the code itself. Please don't be too strict, this is my first question! Any criticism is much appreciated!
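
For what it's worth, one small change I considered is dropping the second matches.toArray() call, since for_calculating already holds the array; a minimal sketch of the same filtering without it (I'm not sure this alone explains the 6 seconds):

    // Same filtering as above, reusing the array that was already extracted
    // for the min/max computation instead of calling matches.toArray() again.
    LinkedList<DMatch> good_matches = new LinkedList<DMatch>();
    double ratio_dist = 2.5 * min_dist;
    matches.release();
    for (DMatch m : for_calculating) {
        if (m.distance <= ratio_dist)
            good_matches.addLast(m);
    }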

  • brute force matching is `O(n^2)` so either you use a faster matching method (FLANN) or you can reduce the number of keypoints. But feature extraction itself might be quite expensive. I doubt that switching to C++ will give benefits for the opencv functions (probably they start some C binary?), but I never tried it... – Micka Aug 19 '16 at 15:17
  • how many keypoints do you extract on your images? – Micka Aug 19 '16 at 15:18
  • I would say it's more likely to be the image resolution in the detection part. The default number of features for ORB in OpenCV is 500, and I did brute force matching on a Nexus 4 in 2012 in 1ms, for live tracking. – Photon Aug 21 '16 at 11:39
  • @Micka I extract 500 keypoints, not that I can change it in Java though. But brute force doesn't seem to slow the process, as said. The filtering of the matches, which I show above, is what slows it down. There is suddenly a spike in memory consumption. I can add more details if you want. I will write everything in C++ just in case today and report back – Rick M. Aug 22 '16 at 10:57
  • sounds like matching isn't the problem but probably the detection step. Should be easy to find out which parts of the code are slow... – Micka Aug 22 '16 at 11:03
  • @Photon The image resolution is 1024*1632 and I'm using it on a Nexus 5. The descriptor extraction, matching takes about 1ms. – Rick M. Aug 22 '16 at 11:04
  • @Micka any specific rules to follow in Java to save memory except calling gc? Unlike C++, I can't clear memory without it and I have seen a lot of posts which say explicitly calling gc is bad practice – Rick M. Aug 22 '16 at 11:06
  • not sure about the java api but in c++ openCV you can reuse Mat objects/memory of equal size instead of creating new ones... – Micka Aug 22 '16 at 11:08
  • I see that when I debug it, it is pretty fast. When I run the app, it's slow. This is unexpected. – Rick M. Aug 22 '16 at 11:55
  • 1
    the issue is the size of the resolution. If i decrease the resolution then the code is fast but the detection is not the best. – Rick M. Aug 22 '16 at 14:05

1 Answer


So the problem was that logcat was giving me false timing results. The lag was actually due to a huge Gaussian blur later in the code. Instead of relying on System.out.print, I timed the individual blocks with System.currentTimeMillis, which showed me the real bottleneck.
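
Roughly, the timing check looked like this (a simplified sketch, not the exact code from my app):

    // Wrap each suspect block and print the elapsed wall-clock time.
    long start = System.currentTimeMillis();
    // ... block under test, e.g. the good-matches loop or the Gaussian blur ...
    long elapsed = System.currentTimeMillis() - start;
    System.out.println("block took " + elapsed + " ms");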

  • Could you explain in more detail? I'm interested. – Gewure Feb 16 '20 at 17:45
  • @Gewure In my case the Gaussian Blur was taking a long time to implement w.r.t. the other part of my code. – Rick M. Feb 17 '20 at 08:21
  • I still don't understand fully ;_) You had a Gaussian blur filter... how did that relate to other parts of your code? Did you call it multiple times, or what took so long? – Gewure Feb 17 '20 at 16:14
  • 1
    The blurring itself took long. The rest of the code was functioning as expected. The issue was that after implementing this, I didn't expect the gaussian blur to be slow in any way whatsoever, so I completely disregarded it. – Rick M. Feb 17 '20 at 18:14
  • 1
    @Gewure Glad that I could help. Did you also run into something similar? – Rick M. Feb 19 '20 at 12:31
  • No, not yet :) I am currently evaluating keypoint-matching functionality & libraries for a mobile project, so I'm reading problems people had and posted on SO to educate myself about it. I'm tending to use OpenCV for it currently. – Gewure Feb 19 '20 at 15:50
  • If I may ask, is your project publicly available? I'd love to take a look at how you did the app's architecture including OpenCV. – Gewure Feb 19 '20 at 15:55
  • I should have a back-up on my DropBox, I'd need to check. But you can refer to this paper, I am the second author. https://www.researchgate.net/publication/309917081_Smartphone-based_urine_strip_analysis – Rick M. Feb 21 '20 at 08:24