Articulated Pose Estimation by a Graphical Model with Image Dependent Pairwise Relations


Abstract

We present a method for estimating articulated human pose from a single static image based on a graphical model with novel pairwise relations that make adaptive use of local image measurements. More precisely, we specify a graphical model for human pose which exploits the fact that local image measurements can be used both to detect parts (or joints) and also to predict the spatial relationships between them (Image Dependent Pairwise Relations). These spatial relationships are represented by a mixture model. We use Deep Convolutional Neural Networks (DCNNs) to learn conditional probabilities for the presence of parts and their spatial relationships within image patches. Hence our model combines the representational flexibility of graphical models with the efficiency and statistical power of DCNNs. Our method significantly outperforms the state-of-the-art methods on the LSP and FLIC datasets and also performs very well on the Buffy dataset without any training.
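As a rough schematic of this model (the notation here is ours, not quoted from the paper), the pose score sums a DCNN-derived unary term for each part and an image-dependent pairwise term for each edge of the part graph:

F(l, t | I) = \sum_{i \in V} U(l_i | I) + \sum_{(i,j) \in E} R(l_i, l_j, t_{ij} | I)

Here l_i is the image location of part i, t_{ij} indexes the relation type (mixture component) for the pair (i, j), U scores part presence, and R combines a type-specific spatial term with the DCNN's confidence in that relation type. Because the part graph is a tree, the best (l, t) can be found exactly by dynamic programming.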

NIPS 14 Poster
NIPS 14 Paper PDF
Estimation Results & Evaluation Code
Full Code
Trained Model

@InProceedings{Chen_NIPS14,
  title     = {Articulated Pose Estimation by a Graphical Model with Image Dependent Pairwise Relations},
  author    = {Xianjie Chen and Alan Yuille},
  booktitle = {Advances in Neural Information Processing Systems (NIPS)},
  year      = {2014},
}


Key Ideas

1. Intuition: we can reliably predict the relative positions of a part's neighbors (as well as the presence of the part itself) by observing only the local image patch around it.
2. A Deep Convolutional Neural Network (DCNN) is well suited to extracting information about pairwise part relations, as well as part presence, from local image patches; these outputs supply the unary and pairwise terms of the graphical model (see the sketch below).
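
To make the second idea concrete, below is a minimal sketch (not the authors' released code) of exact MAP inference on such a tree-structured model by max-sum dynamic programming, assuming the DCNN has already produced the unary and pairwise score maps. All names, shapes, and the toy data are hypothetical.

```python
import numpy as np

def max_sum_tree(children, unary, pairwise, root=0):
    """Exact MAP inference on a tree-structured part model by max-sum DP.

    children : dict mapping each part to the list of its child parts.
    unary    : dict mapping part i to a length-K vector of part-presence scores.
    pairwise : function (parent, child) -> K x K matrix of image-dependent
               pairwise scores (e.g. already maximized over relation types).
    """
    parent = {c: p for p, cs in children.items() for c in cs}
    back = {}  # back[c][k]: best location of child c if its parent sits at k

    def collect(i):
        # Score of part i at each candidate location, plus the best
        # contribution of every subtree hanging below it.
        score = np.asarray(unary[i], dtype=float).copy()
        for c in children.get(i, []):
            score += collect(c)
        if i == root:
            return score
        # Fold the subtree score into the pairwise term and pass it upward.
        total = pairwise(parent[i], i) + score[None, :]   # K_parent x K_child
        back[i] = total.argmax(axis=1)
        return total.max(axis=1)

    root_score = collect(root)
    best = {root: int(root_score.argmax())}

    def backtrack(i):
        for c in children.get(i, []):
            best[c] = int(back[c][best[i]])
            backtrack(c)

    backtrack(root)
    return best, float(root_score.max())

# Toy usage: a 3-part chain (0 - 1 - 2) with K = 4 candidate locations per part.
rng = np.random.default_rng(0)
K, children = 4, {0: [1], 1: [2]}
unary = {i: rng.random(K) for i in range(3)}
pairwise = lambda p, c: rng.random((K, K))
print(max_sum_tree(children, unary, pairwise))
```

This only illustrates why inference stays cheap once the local image patches have been scored; the actual model, features, and learning procedure are described in the paper and implemented in the Full Code release linked above.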

Estimation Examples

[Figure: Pose Estimation Examples]

Performance

Comparison of strict PCP results on the Leeds Sport Pose (LSP) Dataset using Observer-Centric (OC) annotations.

| Method | Torso | Head | Upper Arms | Lower Arms | Upper Legs | Lower Legs | Mean |
|---|---|---|---|---|---|---|---|
| Ours | 92.7 | 87.8 | 69.2 | 55.4 | 82.9 | 77.0 | 75.0 |
| Pishchulin et al., ICCV'13 | 88.7 | 85.6 | 61.5 | 44.9 | 78.8 | 73.4 | 69.2 |
| Ouyang et al., CVPR'14 | 85.8 | 83.1 | 63.3 | 46.6 | 76.5 | 72.2 | 68.6 |
| Ramakrishna et al., ECCV'14 | 88.1 | 80.9 | 62.3 | 39.1 | 78.9 | 73.4 | 67.6 |
| Eichner & Ferrari, ACCV'12 | 86.2 | 80.1 | 56.5 | 37.4 | 74.3 | 69.3 | 64.3 |
| Pishchulin et al., CVPR'13 | 87.5 | 78.1 | 54.2 | 33.9 | 75.7 | 68.0 | 62.9 |
| Yang & Ramanan, CVPR'11 | 84.1 | 77.1 | 52.5 | 35.9 | 69.5 | 65.6 | 60.8 |
| Kiefel & Gehler, ECCV'14 | 84.4 | 78.4 | 53.3 | 27.4 | 74.4 | 67.1 | 60.7 |

Numbers are from the corresponding papers or errata.
Comparison of strict PCP results on the Leeds Sport Pose (LSP) Dataset using Person-Centric (PC) annotations. Note that both our method and Tompson et al., NIPS'14 (marked with *) include the Extended Leeds Sport Pose (ex_LSP) Dataset as training data.

| Method | Torso | Head | Upper Arms | Lower Arms | Upper Legs | Lower Legs | Mean |
|---|---|---|---|---|---|---|---|
| Ours* | 96.0 | 85.6 | 69.7 | 58.1 | 77.2 | 72.2 | 73.6 |
| Tompson et al., NIPS'14* | 90.3 | 83.7 | 63.0 | 51.2 | 70.4 | 61.1 | 66.6 |
| Pishchulin et al., ICCV'13 | 88.7 | 85.1 | 46.0 | 35.2 | 63.6 | 58.4 | 58.0 |
| Wang & Li, CVPR'13 | 87.5 | 79.1 | 43.1 | 32.1 | 56.0 | 55.8 | 54.1 |

Numbers are from the performance evaluation by Pishchulin et al.
Comparison of strict PCP results on the Frames Labeled In Cinema (FLIC) Dataset using Observer-Centric (OC) annotations.

| Method | Upper Arms | Lower Arms | Mean |
|---|---|---|---|
| Ours | 97.0 | 86.8 | 91.9 |
| Tompson et al., NIPS'14 | 93.7 | 80.9 | 87.3 |
| MODEC, CVPR'13 | 84.4 | 52.1 | 68.3 |

Numbers are from our evaluation using the prediction results released by the authors.
Comparison of PDJ curves of elbows and wrists on the Frames Labeled In Cinema (FLIC) Dataset using Observer-Centric (OC) annotations. The curves are for Tompson et al., NIPS'14, DeepPose, CVPR'14 and MODEC, CVPR'13.

[Figure: FLIC PDJ curves]

Figure Data: flic_elbows.fig | flic_wrists.fig


Cross-dataset PCP results on the Buffy Stickmen Dataset using Observer-Centric (OC) annotations.

| Method | Upper Arms | Lower Arms | Mean |
|---|---|---|---|
| Ours* | 96.8 | 89.0 | 92.9 |
| Ours* (strict) | 94.5 | 84.1 | 89.3 |
| Yang, PAMI'13 | 97.8 | 68.6 | 83.2 |
| Yang, PAMI'13 (strict) | 94.3 | 57.5 | 75.9 |
| Sapp, ECCV'10 | 95.3 | 63.0 | 79.2 |
| FLPM, ECCV'12 | 93.2 | 60.6 | 76.9 |
| Eichner, IJCV'12 | 93.2 | 60.3 | 76.8 |

Numbers are from the corresponding papers.
Cross-dataset PDJ curves of elbows and wrists on the Buffy Stickmen Dataset using Observer-Centric (OC) annotations. Note that both our method and DeepPose are trained on the FLIC dataset. Compared with the curves on the FLIC dataset, the margin between our method and DeepPose significantly increases, which implies that our model generalizes better.

[Figure: Buffy PDJ curves]

Figure Data: cross_dataset_buffy_elbows.fig | cross_dataset_buffy_wrists.fig


Related Pages

Nice Performance Evaluation by Pishchulin et al.

Buffy Stickmen Dataset (Buffy)

Leeds Sports Pose Dataset (LSP)

Extended Leeds Sports Pose Dataset (ex_LSP)

Frames Labeled In Cinema Dataset (FLIC)



from: http://www.stat.ucla.edu/~xianjie.chen/projects/pose_estimation/pose_estimation.html