Sunday, July 27, 2014

Example using matchShapes in OpenCV

#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <cmath>
#include <iostream>
#include <cstdio>
#include <vector>
using namespace std;
using namespace cv;

int main(int argc, char* argv[])
{
    if (argc != 3)
    {
        cout << "usage: ./ms main.png match.png" << endl;
        return 1;
    }

    Mat src = imread(argv[1]);
    if (src.empty())
    {
        cout << "figure " << argv[1] << " is not located!" << endl;
        return 1;
    }

    Mat match = imread(argv[2]);
    if (match.empty())
    {
        cout << "figure " << argv[2] << " is not located!" << endl;
        return 1;
    }

    Mat srcGray;
    Mat matchGray;
    cvtColor(src, srcGray, CV_BGR2GRAY);
    cvtColor(match, matchGray, CV_BGR2GRAY);

    // Binarize with an inverted threshold so dark shapes on a light
    // background become white blobs for findContours.
    Mat src_th, match_th;
    threshold(srcGray, src_th, 125, 255, THRESH_BINARY_INV);
    threshold(matchGray, match_th, 125, 255, THRESH_BINARY_INV);

    vector<vector<Point> > src_contours;
    vector<Vec4i> src_hierarchy;
    findContours(src_th, src_contours, src_hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);

    vector<vector<Point> > match_contours;
    vector<Vec4i> match_hierarchy;
    findContours(match_th, match_contours, match_hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);

    // Compare each contour of the main image against the template contour.
    double matchR;
    for (size_t i = 0; i < src_contours.size(); ++i)
    {
        matchR = matchShapes(src_contours[i], match_contours[1], CV_CONTOURS_MATCH_I1, 0);
        cout << "match result of contour " << i << " is: " << matchR << endl;
    }
    return 0;
}

However, the matchShapes function does not work very well here.

We use the second figure as the template and match it against each contour found in the first figure. Here is the result:

match result of contour 0 is: 0.11592
match result of contour 1 is: 0.151644
match result of contour 2 is: 0.282304
match result of contour 3 is: 0.390383
match result of contour 4 is: 0.540419
match result of contour 5 is: 0.757443

2 comments:

  1. Why do you think there are 5 contours for three image segments? That seems like the first barrier to me. The main image has three sub-images, and there should be three contours. This is likely because two of the images are thick and therefore create an inside and an outside contour. If you add a Canny edge-detection step after the thresholding, you will reduce the sub-images to single-pixel-wide representations. This should leave you with only three contours, which may help in the matching process.

  2. Also, the matchShapes metric is based on Hu moments, which are invariant to rotation and scaling, but NOT to stretching. The first sub-image is likely the match of 0.11592 and is the best match (a lower value is a better match). That sub-image looks most proportional to your main image. The other two are stretched or shrunk in the X-dimension, so their Hu moments differ and they produce higher difference values.
