Walking through a classic detection-and-tracking example
1. How should the vision.CascadeObjectDetector System object be understood?
detector = vision.CascadeObjectDetector(model) creates a cascade detector. The model argument is optional and names a pretrained classification model, such as 'FrontalFaceCART', 'UpperBody', or 'ProfileFace'. See the ClassificationModel property description for the full list of available models.
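As a minimal sketch of using a non-default model (the MinSize value and the test image name are illustrative assumptions, not part of the original post; visionteam.jpg ships with the Computer Vision Toolbox):

```matlab
% Create a detector for upper bodies instead of the default frontal face.
detector = vision.CascadeObjectDetector('UpperBody');
detector.MinSize = [60 60];          % assumed value: ignore detections below 60x60 px

I = imread('visionteam.jpg');        % any test image will do
bboxes = step(detector, I);          % one [x y w h] row per detection
I = insertShape(I, 'Rectangle', bboxes);
imshow(I);
```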
2. An application that combines detection with tracking:
%% Detect a Face
% Create a cascade detector object.
faceDetector = vision.CascadeObjectDetector();
% Read a video frame and run the face detector.
videoFileReader = vision.VideoFileReader('0.avi');
videoFrame = step(videoFileReader);
bbox = step(faceDetector, videoFrame);
% Convert the first box to a polygon.
% This is needed to be able to visualize the rotation of the object.
x = bbox(1, 1); y = bbox(1, 2); w = bbox(1, 3); h = bbox(1, 4);
bboxPolygon = [x, y, x+w, y, x+w, y+h, x, y+h];
% Draw the returned bounding box around the detected face.
videoFrame = insertShape(videoFrame, 'Polygon', bboxPolygon);
figure(9); imshow(videoFrame); title('Detected face');
%%
% To track the face over time, this example uses the Kanade-Lucas-Tomasi
% (KLT) algorithm. While it is possible to use the cascade object detector
% on every frame, it is computationally expensive. It may also fail to
% detect the face when the subject turns or tilts his head. This
% limitation comes from the type of trained classification model used for
% detection. The example detects the face only once, and then the KLT
% algorithm tracks the face across the video frames.
%% Identify Facial Features To Track
% The KLT algorithm tracks a set of feature points across the video frames.
% Once the detection locates the face, the next step in the example
% identifies feature points that can be reliably tracked. This example
% uses the standard "good features to track" proposed by Shi and Tomasi.
% Detect feature points in the face region.
points = detectMinEigenFeatures(rgb2gray(videoFrame), 'ROI', bbox);
% Display the detected points.
figure, imshow(videoFrame), hold on, title('Detected features');
plot(points);
%% Initialize a Tracker to Track the Points
% With the feature points identified, you can now use the
% |vision.PointTracker| System object to track them. For each point in the
% previous frame, the point tracker attempts to find the corresponding
% point in the current frame. Then the |estimateGeometricTransform|
% function is used to estimate the translation, rotation, and scale between
% the old points and the new points. This transformation is applied to the
% bounding box around the face.
% Create a point tracker and enable the bidirectional error constraint to
% make it more robust in the presence of noise and clutter.
pointTracker = vision.PointTracker('MaxBidirectionalError', 2);
% Initialize the tracker with the initial point locations and the initial
% video frame.
points = points.Location;
initialize(pointTracker, points, videoFrame);
%% Initialize a Video Player to Display the Results
% Create a video player object for displaying video frames.
videoPlayer = vision.VideoPlayer('Position',...
[100 100 [size(videoFrame, 2), size(videoFrame, 1)]+30]);
%% Track the Face
% Track the points from frame to frame, and use the
% |estimateGeometricTransform| function to estimate the motion of the face.
% Make a copy of the points to be used for computing the geometric
% transformation between the points in the previous and the current frames.
oldPoints = points;

while ~isDone(videoFileReader)
    % Get the next frame.
    videoFrame = step(videoFileReader);

    % Track the points. Note that some points may be lost.
    [points, isFound] = step(pointTracker, videoFrame);
    visiblePoints = points(isFound, :);
    oldInliers = oldPoints(isFound, :);

    if size(visiblePoints, 1) >= 2 % need at least 2 points

        % Estimate the geometric transformation between the old points
        % and the new points, and eliminate outliers.
        [xform, oldInliers, visiblePoints] = estimateGeometricTransform(...
            oldInliers, visiblePoints, 'similarity', 'MaxDistance', 4);

        % Apply the transformation to the bounding box.
        [bboxPolygon(1:2:end), bboxPolygon(2:2:end)] ...
            = transformPointsForward(xform, bboxPolygon(1:2:end), bboxPolygon(2:2:end));

        % Insert a bounding box around the object being tracked.
        videoFrame = insertShape(videoFrame, 'Polygon', bboxPolygon);
        % bboxPolygon holds the coordinates of the four corners of the box;
        % from these coordinates it is easy to extract an ROI.

        % Display tracked points.
        videoFrame = insertMarker(videoFrame, visiblePoints, '+', ...
            'Color', 'white');

        % Reset the points.
        oldPoints = visiblePoints;
        setPoints(pointTracker, oldPoints);
    end

    % Display the annotated video frame using the video player object.
    step(videoPlayer, videoFrame);
end
% Clean up
release(videoFileReader);
release(videoPlayer);
release(pointTracker);
displayEndOfDemoMessage(mfilename)
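As noted inside the loop, bboxPolygon stores the four corner coordinates of the (possibly rotated) tracking box as [x1 y1 x2 y2 x3 y3 x4 y4]. One simple way to cut an ROI from those corners is to take their axis-aligned bounding rectangle and clamp it to the frame; this is a sketch, not part of the original script:

```matlab
% Split the polygon into x and y corner coordinates.
xs = bboxPolygon(1:2:end);
ys = bboxPolygon(2:2:end);

% Bounding rectangle of the corners, clamped to the image extent.
x1 = max(1, floor(min(xs)));  y1 = max(1, floor(min(ys)));
x2 = min(size(videoFrame, 2), ceil(max(xs)));
y2 = min(size(videoFrame, 1), ceil(max(ys)));

% Crop the face region from the current frame.
roi = videoFrame(y1:y2, x1:x2, :);
```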