A run-through of a classic detection + tracking example



1. How should the vision.CascadeObjectDetector System object be understood?

detector = vision.CascadeObjectDetector(model) creates the detector using an optional
classification model, e.g. 'FrontalFaceCART', 'UpperBody', or 'ProfileFace'. See the
ClassificationModel property description for a full list of available models.
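
For example, a minimal sketch of constructing a detector with one of the listed models (the image file name here is just a placeholder):

% Create a detector using the 'UpperBody' classification model.
bodyDetector = vision.CascadeObjectDetector('UpperBody');
img    = imread('people.jpg');     % placeholder image file
bboxes = step(bodyDetector, img);  % M-by-4 matrix, one [x y w h] row per detection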

2. An example application that builds on tracking:



%% Detect a Face
% Create a cascade detector object.
faceDetector = vision.CascadeObjectDetector();


% Read a video frame and run the face detector.
videoFileReader = vision.VideoFileReader('0.avi');
videoFrame      = step(videoFileReader);
bbox            = step(faceDetector, videoFrame);
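% bbox is an M-by-4 matrix with one [x, y, width, height] row per detected
% face; the indexing below assumes at least one face was found.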


% Convert the first box to a polygon.
% This is needed to be able to visualize the rotation of the object.
x = bbox(1, 1); y = bbox(1, 2); w = bbox(1, 3); h = bbox(1, 4);
bboxPolygon = [x, y, x+w, y, x+w, y+h, x, y+h];


% Draw the returned bounding box around the detected face.
videoFrame = insertShape(videoFrame, 'Polygon', bboxPolygon);


figure(9); imshow(videoFrame); title('Detected face');


%%
% To track the face over time, this example uses the Kanade-Lucas-Tomasi
% (KLT) algorithm. While it is possible to use the cascade object detector
% on every frame, it is computationally expensive. It may also fail to
% detect the face when the subject turns or tilts their head. This
% limitation comes from the type of trained classification model used for
% detection. The example detects the face only once, and then the KLT
% algorithm tracks the face across the video frames. 
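%
% As a rough illustration of the cost difference (not part of the original
% example), one could time a single detector call against a single tracker
% update once the tracker below has been initialized:
%
%   tic; step(faceDetector, videoFrame); tDetect = toc;
%   tic; step(pointTracker, videoFrame); tTrack  = toc;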


%% Identify Facial Features To Track
% The KLT algorithm tracks a set of feature points across the video frames.
% Once the detection locates the face, the next step in the example
% identifies feature points that can be reliably tracked.  This example
% uses the standard "good features to track" proposed by Shi and Tomasi. 


% Detect feature points in the face region.
points = detectMinEigenFeatures(rgb2gray(videoFrame), 'ROI', bbox);
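% points is a cornerPoints object; points.Location holds the N-by-2 [x y]
% coordinates used to initialize the tracker below.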


% Display the detected points.
figure, imshow(videoFrame), hold on, title('Detected features');
plot(points);


%% Initialize a Tracker to Track the Points
% With the feature points identified, you can now use the
% |vision.PointTracker| System object to track them. For each point in the
% previous frame, the point tracker attempts to find the corresponding
% point in the current frame. Then the |estimateGeometricTransform|
% function is used to estimate the translation, rotation, and scale between
% the old points and the new points. This transformation is applied to the
% bounding box around the face.


% Create a point tracker and enable the bidirectional error constraint to
% make it more robust in the presence of noise and clutter.
pointTracker = vision.PointTracker('MaxBidirectionalError', 2);


% Initialize the tracker with the initial point locations and the initial
% video frame.
points = points.Location;
initialize(pointTracker, points, videoFrame);
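% From here on, each call to step(pointTracker, frame) returns the updated
% point locations together with a per-point validity flag (see the loop
% below).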


%% Initialize a Video Player to Display the Results
% Create a video player object for displaying video frames.
videoPlayer  = vision.VideoPlayer('Position',...
    [100 100 [size(videoFrame, 2), size(videoFrame, 1)]+30]);


%% Track the Face
% Track the points from frame to frame, and use the
% |estimateGeometricTransform| function to estimate the motion of the face.


% Make a copy of the points to be used for computing the geometric
% transformation between the points in the previous and the current frames
oldPoints = points;


while ~isDone(videoFileReader)
    % get the next frame
    videoFrame = step(videoFileReader);


    % Track the points. Note that some points may be lost.
    [points, isFound] = step(pointTracker, videoFrame);
    visiblePoints = points(isFound, :);
    oldInliers = oldPoints(isFound, :);
    
    if size(visiblePoints, 1) >= 2 % need at least 2 points
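        % (A similarity transform has four degrees of freedom, so two
        % point pairs are the minimum needed to estimate it.)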
        
        % Estimate the geometric transformation between the old points
        % and the new points and eliminate outliers
        [xform, oldInliers, visiblePoints] = estimateGeometricTransform(...
            oldInliers, visiblePoints, 'similarity', 'MaxDistance', 4);
        
        % Apply the transformation to the bounding box
        [bboxPolygon(1:2:end), bboxPolygon(2:2:end)] ...
            = transformPointsForward(xform, bboxPolygon(1:2:end), bboxPolygon(2:2:end));
        
        % Insert a bounding box around the object being tracked
        videoFrame = insertShape(videoFrame, 'Polygon', bboxPolygon);
        % bboxPolygon holds the coordinates of the four corners of the
        % box; these corners make it easy to crop an ROI (see the sketch
        % after the loop).
        % Display tracked points
        videoFrame = insertMarker(videoFrame, visiblePoints, '+', ...
            'Color', 'white');       
        
        % Reset the points
        oldPoints = visiblePoints;
        setPoints(pointTracker, oldPoints);        
    end
    
    % Display the annotated video frame using the video player object
    step(videoPlayer, videoFrame);
end
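
% As noted in the loop above, bboxPolygon holds the four corner coordinates
% of the tracked box. A minimal sketch of cropping a rectangular ROI from
% the last frame (imcrop requires the Image Processing Toolbox, and the
% frame here already has the box and markers drawn on it; this is not part
% of the original example):
xs = bboxPolygon(1:2:end);
ys = bboxPolygon(2:2:end);
roiRect = [min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys)];
faceROI = imcrop(videoFrame, roiRect);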


% Clean up
release(videoFileReader);
release(videoPlayer);
release(pointTracker);
displayEndOfDemoMessage(mfilename)
