On the differences between OpenCV 2 and 3 in image feature recognition


Reposted from: https://www.hongweipeng.com/index.php/archives/709/

1. OpenCV 3 does not ship SIFT and the related feature algorithms out of the box; they were moved into the separate opencv_contrib modules and have to be installed additionally.
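For readers on OpenCV 3, here is a minimal sketch of pulling SIFT back in through the contrib modules (this assumes the opencv-contrib-python wheel fits your setup; the synthetic image is only there to exercise the API):

# OpenCV 3: SIFT lives in the opencv_contrib modules, so install the contrib
# build first, e.g.:
#   pip install opencv-contrib-python
import cv2
import numpy as np

# a synthetic grayscale image, just so there is something to detect on
img_gray = np.zeros((256, 256), dtype=np.uint8)
cv2.circle(img_gray, (128, 128), 60, 255, -1)

sift = cv2.xfeatures2d.SIFT_create()   # OpenCV 3.x (contrib)
# OpenCV 2.4 equivalent: sift = cv2.SIFT()
kp, des = sift.detectAndCompute(img_gray, None)
print('keypoints found:', len(kp))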

2. The cv2.drawMatches function does not exist in OpenCV 2.4.12; it is only provided from 3.0 onwards, so calling it under 2.x fails at runtime. (A minimal 3.0+ call example follows the parameter list below.)

The function prototype is as follows:

cv2.drawMatches(img1, keypoints1, img2, keypoints2, matches1to2[, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]]]) → outImg
  • img1 – the first source image.

  • keypoints1 – keypoints from the first source image.

  • img2 – the second source image.

  • keypoints2 – keypoints from the second source image.

  • matches1to2 – matches from the first image to the second, i.e. keypoints1[i] has a corresponding point in keypoints2[matches[i]]; a list of DMatch.

  • outImg – the output image; its contents depend on flags.

  • matchColor – color of matches (keypoints and connecting lines); if matchColor==Scalar::all(-1), colors are generated randomly.

  • singlePointColor – color of single keypoints, i.e. keypoints without a match; if singlePointColor==Scalar::all(-1), colors are generated randomly.

  • matchesMask – mask determining which matches are drawn; if empty, all matches are drawn.

  • flags – flags defined by DrawMatchesFlags.
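If you are already on OpenCV 3.0 or later, the built-in function can be used directly. Below is a minimal sketch of how the parameters above fit together (it reuses the sample image paths from the test script later in this post, and uses ORB only because ORB ships with core OpenCV and needs no contrib install):

import cv2

img1 = cv2.imread('static/images/1a.jpg', cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread('static/images/1b.jpg', cv2.IMREAD_GRAYSCALE)

# ORB: detector + binary descriptor available in core OpenCV 3
orb = cv2.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance for ORB's binary descriptors
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# outImg is passed as None so OpenCV allocates it; colors are BGR tuples;
# flags=0 corresponds to DrawMatchesFlags::DEFAULT
out = cv2.drawMatches(img1, kp1, img2, kp2, matches[:30], None,
                      matchColor=(0, 255, 0),
                      singlePointColor=(255, 0, 0),
                      flags=0)
cv2.imwrite('matches.jpg', out)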

Implementation

Self-reliance it is: since OpenCV 2.4 does not provide the function, we write our own.

import numpy as np
import cv2


def drawMatches(img1, kp1, img2, kp2, matches):
    """
    My own implementation of cv2.drawMatches as OpenCV 2.4.9
    does not have this function available but it's supported in
    OpenCV 3.0.0

    This function takes in two images with their associated
    keypoints, as well as a list of DMatch data structure (matches)
    that contains which keypoints matched in which images.

    An image will be produced where a montage is shown with
    the first image followed by the second image beside it.

    Keypoints are delineated with circles, while lines are connected
    between matching keypoints.

    img1,img2 - Grayscale images
    kp1,kp2 - Detected list of keypoints through any of the OpenCV keypoint
              detection algorithms
    matches - A list of matches of corresponding keypoints through any
              OpenCV keypoint matching algorithm
    """

    # Create a new output image that concatenates the two images together
    # (a.k.a) a montage
    rows1 = img1.shape[0]
    cols1 = img1.shape[1]
    rows2 = img2.shape[0]
    cols2 = img2.shape[1]

    out = np.zeros((max([rows1, rows2]), cols1 + cols2, 3), dtype='uint8')

    # Place the first image to the left
    out[:rows1, :cols1] = np.dstack([img1])

    # Place the next image to the right of it
    out[:rows2, cols1:] = np.dstack([img2])

    # For each pair of points we have between both images
    # draw circles, then connect a line between them
    for mat in matches:

        # Get the matching keypoints for each of the images
        img1_idx = mat.queryIdx
        img2_idx = mat.trainIdx

        # x - columns
        # y - rows
        (x1, y1) = kp1[img1_idx].pt
        (x2, y2) = kp2[img2_idx].pt

        # Draw a small circle at both co-ordinates
        # radius 4
        # colour blue
        # thickness = 1
        cv2.circle(out, (int(x1), int(y1)), 4, (255, 0, 0), 1)
        cv2.circle(out, (int(x2) + cols1, int(y2)), 4, (255, 0, 0), 1)

        # Draw a line in between the two points
        # thickness = 1
        # colour blue
        cv2.line(out, (int(x1), int(y1)), (int(x2) + cols1, int(y2)), (255, 0, 0), 1)

    # Show the image
    # cv2.imshow('Matched Features', out)
    # cv2.waitKey(0)
    # cv2.destroyWindow('Matched Features')

    # Also return the image if you'd like a copy
    return out

Test

# -*- coding: utf-8 -*-
import cv2
import numpy as np

img1 = cv2.imread('static/images/1a.jpg', cv2.IMREAD_COLOR)
img2 = cv2.imread('static/images/1b.jpg', cv2.IMREAD_COLOR)
# img1 = cv2.resize(img1, (256, 256))
# img2 = cv2.resize(img2, (256, 256))

gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

# SIFT (OpenCV 2.4 API)
detector = cv2.SIFT()

# keypoint sets
keypoints1 = detector.detect(gray1, None)
keypoints2 = detector.detect(gray2, None)

outimg1 = cv2.drawKeypoints(gray1, keypoints1)
outimg2 = cv2.drawKeypoints(gray2, keypoints2)
cv2.imshow('img1', outimg1)
cv2.imshow('img2', outimg2)

# kp,des = sift.compute(gray,kp)
kp1, des1 = detector.compute(gray1, keypoints1)
kp2, des2 = detector.compute(gray2, keypoints2)

# create a brute-force matcher object
matcher = cv2.BFMatcher()
matches = matcher.match(des1, des2)
matches = sorted(matches, key=lambda x: x.distance)

# drawMatches is the helper implemented above
end_img = drawMatches(img1, kp1, img2, kp2, matches[:30])
cv2.imshow('end_img', end_img)

cv2.waitKey(0)
cv2.destroyAllWindows()
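The test script above targets the OpenCV 2.4 API. Under OpenCV 3 a few calls change; the sketch below shows only the replacement lines (it assumes the contrib build from point 1 and that the other variables in the script above stay as they are, so it is a patch rather than a standalone program):

# OpenCV 3 replacements for the 2.4-only calls in the test script above

# SIFT construction moves to the contrib namespace
detector = cv2.xfeatures2d.SIFT_create()

# drawKeypoints takes an explicit output-image argument in the 3.x binding
# (pass None and let OpenCV allocate it)
outimg1 = cv2.drawKeypoints(gray1, keypoints1, None)
outimg2 = cv2.drawKeypoints(gray2, keypoints2, None)

# the hand-rolled drawMatches can be swapped for the built-in one
end_img = cv2.drawMatches(img1, kp1, img2, kp2, matches[:30], None)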
