
I'm trying to detect yellow objects. I perform color segmentation in the HSV color space and threshold to the yellow range with Core.inRange(), which returns a binary mask: the detected region is shown in white, while all other colors are ignored and blacked out. I thought that extracting the edges first would both reduce the computation in findContours() and make the edge transitions more obvious. Hence, instead of doing:

    binary thresholded image -> findContours()

I did:

    binary thresholded image -> Canny() -> findContours()
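Spelled out as code, the two variants I am comparing look roughly like this (only a minimal sketch: `mask` stands for the binary image returned by Core.inRange(), and the Canny thresholds are the ones I use further down):

    // needs: org.opencv.core.{Mat, MatOfPoint}, org.opencv.imgproc.Imgproc, java.util.{List, ArrayList}

    // Variant A: contours straight from the binary mask
    List<MatOfPoint> contoursA = new ArrayList<>();
    Imgproc.findContours(mask.clone(), contoursA, new Mat(),
            Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

    // Variant B: Canny edge map first, then contours on the edges
    Mat edges = new Mat();
    Imgproc.Canny(mask, edges, 5, 120);
    List<MatOfPoint> contoursB = new ArrayList<>();
    Imgproc.findContours(edges, contoursB, new Mat(),
            Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);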

See below for the full code, plus pictures of the image frames output at each stage.

public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {

     InputFrame = inputFrame.rgba();

     Core.transpose(InputFrame,mat1);   // transpose InputFrame (src) into mat1 (dst)
     Imgproc.resize(mat1,mat2,InputFrame.size(),0,0,0);   // params: (src, dst, dsize, fx, fy, interpolation) - resize mat1 back to the screen dimensions of InputFrame
     Core.flip(mat2,InputFrame,-1);   // InputFrame now holds the transposed, resized, flipped version of the original inputFrame.rgba()

     int rowWidth = InputFrame.rows();
     int colWidth = InputFrame.cols();

     Imgproc.cvtColor(InputFrame,InputFrame,Imgproc.COLOR_RGBA2RGB);
     Imgproc.cvtColor(InputFrame,InputFrame,Imgproc.COLOR_RGB2HSV);


 //============= binary threshold image to Yellow mask ============
     Lower_Yellow = new Scalar(21,150,150);   // lower HSV bound for yellow: H selects the hue, S the color saturation, V the brightness
     Upper_Yellow = new Scalar(31,255,255);   // upper HSV bound: S and V max out at 255 for 8-bit images (a value of 360 would simply be clamped)

     Core.inRange(InputFrame,Lower_Yellow, Upper_Yellow, maskForYellow);


 //============== Apply Morphology to remove noise ===================
     final Size kernelSize = new Size(5, 5);   // kernel size must be odd and greater than 1
     final Point anchor = new Point(-1, -1);   // default (-1,-1) places the anchor at the center of the structuring element
     final int iterations = 1;   // number of times the closing is applied.  https://docs.opencv.org/3.4/d4/d76/tutorial_js_morphological_ops.html

     Mat kernel = Imgproc.getStructuringElement(Imgproc.MORPH_ELLIPSE, kernelSize);

     Imgproc.morphologyEx(maskForYellow, yellowMaskMorphed, Imgproc.MORPH_CLOSE, kernel, anchor, iterations);   // MORPH_CLOSE = dilate then erode: fills small black holes so the white (yellow) region becomes more solid


 //=========== Apply Canny to obtain edge detection ==============
     Mat mIntermediateMat = new Mat();
     Imgproc.GaussianBlur(yellowMaskMorphed,mIntermediateMat,new Size(9,9),0,0);   // a 9x9 kernel gave a cleaner result than 3x3, presumably because the wider neighbourhood smooths the mask boundary better
     Imgproc.Canny(mIntermediateMat, mIntermediateMat, 5, 120);   // thresholds may need tuning; see https://stackoverflow.com/questions/25125670/best-value-for-threshold-in-canny


 //============ apply findContour()==================
     List<MatOfPoint> contours = new ArrayList<>();
     Mat hierarchy = new Mat();
     Imgproc.findContours(mIntermediateMat, contours, hierarchy, Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE, new Point(0, 0));   


 //===========  Use contourArea to find the largest-blob contour ===============
     double maxArea1 = 0;
     int maxAreaIndex1 = -1;
     ArrayList<Rect> rect_array = new ArrayList<Rect>();

     for (int i = 0; i < contours.size(); i++) {
         double contourArea1 = Imgproc.contourArea(contours.get(i));   // area of the i-th contour
         if (maxArea1 < contourArea1) {
             maxArea1 = contourArea1;
             maxAreaIndex1 = i;
         }
     }

     if (maxAreaIndex1 >= 0) {
         Rect r = Imgproc.boundingRect(contours.get(maxAreaIndex1));   // bounding box of the largest contour only
         rect_array.add(r);   // rect_array therefore ends up holding at most one Rect
     }


     Imgproc.cvtColor(InputFrame, InputFrame, Imgproc.COLOR_HSV2RGB);


 //============ plot largest blob contour ================
     if (rect_array.size() > 0) {   //if got more than 1 rect found in rect_array, draw them out!

         Iterator<Rect> it2 = rect_array.iterator();    //only got 1 though, this method much faster than drawContour, wont lag. =D
         while (it2.hasNext()) {
             Rect obj = it2.next();
             //if
             Imgproc.rectangle(InputFrame, obj.br(), obj.tl(),
                 new Scalar(0, 255, 0), 1);
         }

     }

Image 1: Original yellow object

Image 2: Object in HSV color space

Image 3: After Core.inRange() on the yellow range - the binary threshold mask

Image 4: Edges returned after applying Canny edge detection

1 Answer


I have tried both approaches and found that applying Canny() to the thresholded image made the detection faster and more stable, so I'm keeping that step in my code. My guess is that there are fewer points to process after Canny(), and that it also makes the edges more distinct, so findContours() has less work to do and finishes faster.
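If anyone wants to check the speed difference on their own frames, a rough comparison like the sketch below can be used. It is only a sketch: `mask` stands for the binary image returned by Core.inRange(), the Canny thresholds are the ones from the question, and the log tag is arbitrary.

    // needs: org.opencv.core.{Mat, MatOfPoint}, org.opencv.imgproc.Imgproc,
    //        java.util.{List, ArrayList}, android.util.Log

    // Variant A: findContours directly on the binary mask
    long t0 = System.nanoTime();
    List<MatOfPoint> direct = new ArrayList<>();
    Imgproc.findContours(mask.clone(), direct, new Mat(),
            Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
    long t1 = System.nanoTime();

    // Variant B: Canny first, then findContours on the edge map
    Mat edges = new Mat();
    Imgproc.Canny(mask, edges, 5, 120);
    List<MatOfPoint> viaCanny = new ArrayList<>();
    Imgproc.findContours(edges, viaCanny, new Mat(),
            Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);
    long t2 = System.nanoTime();

    Log.d("ContourTiming", "direct: " + (t1 - t0) / 1e6 + " ms, "
            + "Canny + contours: " + (t2 - t1) / 1e6 + " ms");

Averaging these timings over a number of frames should show whether the Canny step actually pays for itself on your device and mask sizes.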