On the differences between OpenCV 2 and OpenCV 3 in image feature detection
Source: Internet · Editor: 程序博客网 · Date: 2024/05/24 00:56
Reposted from https://www.hongweipeng.com/index.php/archives/709/
1. OpenCV 3 does not bundle SIFT (or the other non-free detectors such as SURF); they were moved into the opencv_contrib package (`cv2.xfeatures2d`) and must be installed separately.
2. cv2.drawMatches
This function does not exist in OpenCV 2.4.12; it was only added in 3.0, so calling it on 2.x fails at runtime with an error.
The function prototype is:
cv2.drawMatches(img1, keypoints1, img2, keypoints2, matches1to2[, outImg[, matchColor[, singlePointColor[, matchesMask[, flags]]]]]) → outImg
img1 – first source image.
keypoints1 – keypoints of the first source image.
img2 – second source image.
keypoints2 – keypoints of the second source image.
matches1to2 – matches from the first image to the second; each matches[i] is a DMatch.
outImg – output image; its contents depend on flags.
matchColor – color of matches (keypoints and connecting lines); if matchColor == Scalar::all(-1), colors are generated randomly.
singlePointColor – color of single (unmatched) keypoints; if singlePointColor == Scalar::all(-1), colors are generated randomly.
matchesMask – mask determining which matches are drawn; if empty, all matches are drawn.
flags – drawing flags, defined by DrawMatchesFlags.
Implementation
Self-reliance: if the function isn't there, roll your own.
def drawMatches(img1, kp1, img2, kp2, matches):
    """
    My own implementation of cv2.drawMatches, as OpenCV 2.4.9 does not
    have this function available, but it is supported in OpenCV 3.0.0.

    This function takes in two images with their associated keypoints,
    as well as a list of DMatch data structures (matches) that contains
    which keypoints matched in which images.

    An image will be produced where a montage is shown with the first
    image followed by the second image beside it. Keypoints are
    delineated with circles, while lines are connected between matching
    keypoints.

    img1, img2 - Grayscale images
    kp1, kp2   - Detected list of keypoints through any of the OpenCV
                 keypoint detection algorithms
    matches    - A list of matches of corresponding keypoints through
                 any OpenCV keypoint matching algorithm
    """
    # Create a new output image that concatenates the two images
    # together (a.k.a. a montage)
    rows1 = img1.shape[0]
    cols1 = img1.shape[1]
    rows2 = img2.shape[0]
    cols2 = img2.shape[1]

    out = np.zeros((max([rows1, rows2]), cols1 + cols2, 3), dtype='uint8')

    # Place the first image to the left
    out[:rows1, :cols1] = np.dstack([img1])

    # Place the next image to the right of it
    out[:rows2, cols1:] = np.dstack([img2])

    # For each pair of points we have between both images,
    # draw circles, then connect a line between them
    for mat in matches:
        # Get the matching keypoints for each of the images
        img1_idx = mat.queryIdx
        img2_idx = mat.trainIdx

        # x - columns
        # y - rows
        (x1, y1) = kp1[img1_idx].pt
        (x2, y2) = kp2[img2_idx].pt

        # Draw a small circle at both coordinates
        # (radius 4, colour blue, thickness 1)
        cv2.circle(out, (int(x1), int(y1)), 4, (255, 0, 0), 1)
        cv2.circle(out, (int(x2) + cols1, int(y2)), 4, (255, 0, 0), 1)

        # Draw a line between the two points
        # (colour blue, thickness 1)
        cv2.line(out, (int(x1), int(y1)), (int(x2) + cols1, int(y2)),
                 (255, 0, 0), 1)

    # Show the image
    # cv2.imshow('Matched Features', out)
    # cv2.waitKey(0)
    # cv2.destroyWindow('Matched Features')

    # Also return the image if you'd like a copy
    return out
Test
# -*- coding: utf-8 -*-
import cv2
import numpy as np

img1 = cv2.imread('static/images/1a.jpg', cv2.IMREAD_COLOR)
img2 = cv2.imread('static/images/1b.jpg', cv2.IMREAD_COLOR)
# img1 = cv2.resize(img1, (256, 256))
# img2 = cv2.resize(img2, (256, 256))

gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

# SIFT (OpenCV 2.x API)
detector = cv2.SIFT()

# Keypoint sets
keypoints1 = detector.detect(gray1, None)
keypoints2 = detector.detect(gray2, None)

outimg1 = cv2.drawKeypoints(gray1, keypoints1)
outimg2 = cv2.drawKeypoints(gray2, keypoints2)
cv2.imshow('img1', outimg1)
cv2.imshow('img2', outimg2)

# kp, des = sift.compute(gray, kp)
kp1, des1 = detector.compute(gray1, keypoints1)
kp2, des2 = detector.compute(gray2, keypoints2)

# Create a brute-force matcher object
matcher = cv2.BFMatcher()
matches = matcher.match(des1, des2)
matches = sorted(matches, key=lambda x: x.distance)

end_img = drawMatches(img1, kp1, img2, kp2, matches[:30])
cv2.imshow('end_img', end_img)
cv2.waitKey(0)
cv2.destroyAllWindows()