PETS-ICVS Datasets
Warning: you are strongly advised to view the smart meeting specification file available here before downloading any data. This will allow you to determine which part of the data is most appropriate for you. The total size of the dataset is 5.9 GB.
The JPEG images for the PETS-ICVS dataset may be obtained from here.
You can also download all files under one directory using wget.
Please see http://www.gnu.org/software/wget/wget.html for more details.
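For example, assuming a hypothetical address for the dataset directory (substitute the actual FTP/HTTP location given on the download page), a recursive retrieval might look like:

    wget -r -np -nd ftp://ftp.example.org/pub/pets-icvs/jpeg/

where -r requests recursive retrieval, -np stops wget from ascending to the parent directory, and -nd stores all files in the current directory rather than recreating the remote directory tree.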
Note: there appear to be some problems accessing the FTP site using Netscape. If you are having problems, please try using Internet Explorer instead, or access the site via direct FTP as shown above.
Important instructions are given at the bottom of this page on processing the datasets - please read these carefully.
Annotation of Datasets (Ground Truth)
The following annotations are available for the datasets:
1. Eye positions of people in Scenarios A, B and D. The format is described in the specification file linked above, i.e.
image0001.jpg 3 left_eye_center_x left_eye_center_y right_eye_center_x right_eye_center_y
(where left and right are as seen by the camera, rather than the person's own left/right eyes).
Image coordinates: the origin is in the top left.
Every 10th frame is annotated.
The annotation is available here; a minimal parsing sketch is given after this list.
2. Facial expression and gaze estimation for Scenarios A and D, Cameras 1-2.
The annotation is available here.
3. Gesture/action annotations for Scenarios B and D, Cameras 1-2.
The annotation is available here.
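The following is a minimal Python sketch for reading the eye-position annotation in item 1. It assumes (this is not confirmed on this page; check the specification file) that the integer after the filename gives the number of annotated faces, each described by four values: left-eye x, left-eye y, right-eye x, right-eye y, with the image origin at the top left.

    # Minimal sketch: parse one line of the eye-position annotation file.
    # Assumption: <filename> <n_faces> then 4 values per face
    # (left_eye_x left_eye_y right_eye_x right_eye_y), origin at top left.
    def parse_eye_annotation_line(line):
        tokens = line.split()
        filename = tokens[0]
        n_faces = int(tokens[1])
        values = [float(v) for v in tokens[2:]]
        faces = []
        for i in range(n_faces):
            lx, ly, rx, ry = values[4 * i:4 * i + 4]
            faces.append({"left_eye": (lx, ly), "right_eye": (rx, ry)})
        return filename, faces

The function name and the returned dictionary keys are illustrative only.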
You are strongly encouraged to evaluate your results against the appropriate data above and report them in your paper submission.
PETS-ICVS consists of datasets for a smart meeting.
Two views of the smart meeting room without participants. The environment consists of three cameras: one mounted on each of two opposing walls, and an omnidirectional camera positioned at the centre of the room.
View from Camera 1.
View from Camera 2.
View from Camera 3.
The measurements for the smart meeting room may be found in the following calibration file (PowerPoint).
The Task
The overall task is to automatically annotate the smart meeting.
The dataset consists of four scenarios A, B, C and D.
Each scenario consists of a number of separate sub-tasks. For each frame, the requirement is to perform:
- face localisation (centre location of eyes)
- recognition of facial expression
- recognition of face/hand gesture
- estimation of face/head direction (gaze)
- recognition of actions
A full specification of the dataset is available here, including details of Scenarios A-D and the list of actions/gestures and facial expressions.
The results in your paper can be based on any of the data supplied in the dataset.
The images may be converted to any other format as appropriate, e.g. subsampled or converted to monochrome. All results reported in the paper should clearly indicate which part of the test data is used, ideally with reference to frame numbers where appropriate, e.g. Scenario B, ...
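As a minimal sketch of such a conversion (assuming the Pillow library; any image library would do equally well):

    # Subsample a dataset frame by a factor of two and convert it to monochrome.
    from PIL import Image

    img = Image.open("image0001.jpg")                      # a frame from the dataset
    gray = img.convert("L")                                # 8-bit monochrome
    half = gray.resize((img.width // 2, img.height // 2))  # 2x subsampling
    half.save("image0001_gray_half.jpg")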
There is no requirement to report results on all the test data; however, you are encouraged to test your algorithms on as much of the test data as possible.
The results must be submitted along with the paper and must be generated in XML format.
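The required XML schema is defined in the specification file linked above. Purely as an illustration of generating per-frame results programmatically (the element and attribute names below are invented and are not the official schema), one might write:

    # Illustrative only: these element/attribute names are NOT the official
    # PETS-ICVS schema; consult the specification file for the required format.
    import xml.etree.ElementTree as ET

    root = ET.Element("results", scenario="A", camera="1")
    frame = ET.SubElement(root, "frame", number="0001")
    ET.SubElement(frame, "eyes", left="312,245", right="355,243")
    ET.ElementTree(root).write("results.xml", encoding="utf-8", xml_declaration=True)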
The paper that you submit may be based on previously published tracking methods/algorithms (including papers submitted to the main ICVS conference). The important point is that your paper MUST report results using the datasets above.
You are strongly encouraged to evaluate your results against the ground truth given above and report them in your paper submission.
Acknowledgements
The sequences have been provided by the consortium of Project FGnet (IST-2000-26434) http://www.fg-net.org
with additional support provided by the Swiss National Centre of Competence in Research (NCCR) on Interactive MultiModal Information Management (IM)2. The NCCR is managed by the Swiss National Science Foundation on behalf of the Federal Authorities.
If you have any queries please email pets-icvs@visualsurveillance.org.
from: http://www-prima.inrialpes.fr/FGnet/data/08-Pets2003/pets-icvs-db.html