Face++ Face Recognition
Source: Internet · Editor: 程序博客网 (Programmer Blog Network) · Time: 2024/05/06 23:21
Overview: Face++ is the face recognition cloud service platform of Beijing Megvii Technology Co., Ltd. Through a cloud API, an offline SDK, and its own end-user products, the platform brings face recognition technology to a wide range of web and mobile applications. Face++ offers developers a simple, easy-to-use API for building applications such as cloud-based identity verification, user interest mining, motion-sensing interaction, and social entertainment sharing.
Face++ provides three technical services: face detection, face analysis, and face recognition.
1) Face detection: quickly and accurately locates the key facial regions in an image, including the eyebrows, eyes, nose and mouth.
2) Face analysis: infers attributes such as gender (with a claimed 96% accuracy), age and race from an image or a live video stream.
3) Face recognition: quickly determines whether two photos show the same person, or whether the person in a video is a specific individual.
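Conceptually, the recognition service in 3) reduces to comparing face feature vectors and thresholding a similarity score. The sketch below is a generic illustration of that idea, not Face++'s actual algorithm; the toy embeddings, the cosine-similarity metric and the 0.8 threshold are all invented for demonstration:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two feature vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(emb1, emb2, threshold=0.8):
    # Hypothetical decision rule: treat two faces as the same person
    # when their embeddings are similar enough
    return cosine_similarity(emb1, emb2) >= threshold

# Toy 4-dimensional "embeddings" (real systems use hundreds of dimensions)
alice_photo1 = [0.9, 0.1, 0.3, 0.5]
alice_photo2 = [0.88, 0.12, 0.28, 0.52]
bob_photo = [0.1, 0.9, 0.5, 0.1]

print(same_person(alice_photo1, alice_photo2))  # similar vectors -> True
print(same_person(alice_photo1, bob_photo))     # dissimilar vectors -> False
```

A real service would produce the embeddings with a trained neural network and calibrate the threshold against a verification dataset.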
Below is a simple face recognition demo.
Face display area:
_pickerImage = [[UIImagePickerController alloc] init];  // initialize the image picker
UIView *aview = [[UIView alloc] initWithFrame:CGRectMake(0, 120,
                                                         self.view.bounds.size.width,
                                                         self.view.bounds.size.height - 120)];
[self.view addSubview:aview];
Face contour analysis:
- (UIImage *)fixOrientation:(UIImage *)aImage {
    // No-op if the orientation is already correct
    if (aImage.imageOrientation == UIImageOrientationUp)
        return aImage;

    // We need to calculate the proper transformation to make the image upright.
    // We do it in 2 steps: rotate if Left/Right/Down, and then flip if Mirrored.
    CGAffineTransform transform = CGAffineTransformIdentity;

    switch (aImage.imageOrientation) {
        case UIImageOrientationDown:
        case UIImageOrientationDownMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, aImage.size.height);
            transform = CGAffineTransformRotate(transform, M_PI);
            break;
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, 0);
            transform = CGAffineTransformRotate(transform, M_PI_2);
            break;
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            transform = CGAffineTransformTranslate(transform, 0, aImage.size.height);
            transform = CGAffineTransformRotate(transform, -M_PI_2);
            break;
        default:
            break;
    }

    switch (aImage.imageOrientation) {
        case UIImageOrientationUpMirrored:
        case UIImageOrientationDownMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.width, 0);
            transform = CGAffineTransformScale(transform, -1, 1);
            break;
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRightMirrored:
            transform = CGAffineTransformTranslate(transform, aImage.size.height, 0);
            transform = CGAffineTransformScale(transform, -1, 1);
            break;
        default:
            break;
    }

    // Now we draw the underlying CGImage into a new context,
    // applying the transform calculated above.
    CGContextRef ctx = CGBitmapContextCreate(NULL, aImage.size.width, aImage.size.height,
                                             CGImageGetBitsPerComponent(aImage.CGImage), 0,
                                             CGImageGetColorSpace(aImage.CGImage),
                                             CGImageGetBitmapInfo(aImage.CGImage));
    CGContextConcatCTM(ctx, transform);
    switch (aImage.imageOrientation) {
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
            // The context is rotated 90 degrees, so swap width and height when drawing
            CGContextDrawImage(ctx, CGRectMake(0, 0, aImage.size.height, aImage.size.width), aImage.CGImage);
            break;
        default:
            CGContextDrawImage(ctx, CGRectMake(0, 0, aImage.size.width, aImage.size.height), aImage.CGImage);
            break;
    }

    // And now we just create a new UIImage from the drawing context
    CGImageRef cgimg = CGBitmapContextCreateImage(ctx);
    UIImage *img = [UIImage imageWithCGImage:cgimg];
    CGContextRelease(ctx);
    CGImageRelease(cgimg);
    return img;
}

Getting the face contour:
- (void)detectWithImage:(UIImage *)image {
//    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    /*
     * mode: the detection mode may be normal (the default) or oneface.
     *       In oneface mode the detector only finds the largest face in the image.
     *
     * attribute: either none or a comma-separated attribute list; defaults to
     *            gender, age, race, smiling. Currently supported attributes:
     *            gender, age, race, smiling, glass, pose.
     *
     * The pose analysis result contains pitch_angle, roll_angle and yaw_angle,
     * corresponding to nodding, in-plane rotation and head shaking, in degrees.
     */
    FaceppResult *result = [[FaceppAPI detection] detectWithURL:nil
                                                    orImageData:UIImageJPEGRepresentation(image, 1)
                                                           mode:FaceppDetectionModeOneFace
                                                      attribute:FaceppDetectionAttributeAll];
    if (result.success) {
        // img_width/img_height are in pixels; face positions are returned as
        // percentages, so one percent of the image corresponds to these step sizes:
        double image_width = [[result content][@"img_width"] doubleValue] * 0.01f;
        double image_height = [[result content][@"img_height"] doubleValue] * 0.01f;

        UIGraphicsBeginImageContext(image.size);
        [image drawAtPoint:CGPointZero];
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSetRGBFillColor(context, 0, 0, 1.0, 1.0);
        CGContextSetLineWidth(context, image_width * 0.7f);

        // Draw a rectangle around each detected face
        int face_count = [[result content][@"face"] count];
        for (int i = 0; i < face_count; i++) {
            double width = [[result content][@"face"][i][@"position"][@"width"] doubleValue];
            double height = [[result content][@"face"][i][@"position"][@"height"] doubleValue];
            CGRect rect = CGRectMake(([[result content][@"face"][i][@"position"][@"center"][@"x"] doubleValue] - width / 2) * image_width,
                                     ([[result content][@"face"][i][@"position"][@"center"][@"y"] doubleValue] - height / 2) * image_height,
                                     width * image_width,
                                     height * image_height);
            CGContextStrokeRect(context, rect);
        }
        UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        float scale = 4.0f;
        scale = MIN(scale, 280.0f / newImage.size.width);
        scale = MIN(scale, 257.0f / newImage.size.height);
        /*
         * Scale the image proportionally to fit the view
         */
//        [_imageView setFrame:CGRectMake(_imageView.frame.origin.x,
//                                        _imageView.frame.origin.y,
//                                        newImage.size.width * scale,
//                                        newImage.size.height * scale)];

        // NOTE: this method is invoked via performSelectorInBackground:, so in
        // production code the UIKit calls in this branch should be dispatched
        // back to the main thread.
        [_imageView setImage:newImage];

        // Facial attribute analysis
        NSDictionary *dic = [NSDictionary dictionaryWithDictionary:[result content]];
        [self face:dic];
    } else {
        // Some errors occurred
        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:[NSString stringWithFormat:@"error message: %@", [result error].message]
                                                        message:@""
                                                       delegate:nil
                                              cancelButtonTitle:@"OK!"
                                              otherButtonTitles:nil];
        [alert performSelectorOnMainThread:@selector(show) withObject:nil waitUntilDone:YES];
    }
}
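The rectangle math in detectWithImage: converts Face++'s percentage-based face coordinates into pixel coordinates. The same conversion as a standalone Python sketch (the function name and sample values are illustrative, not part of any SDK):

```python
def face_rect_px(img_width, img_height, center_x, center_y, width, height):
    """Convert Face++ percentage coordinates to a pixel-space rectangle.

    center_x/center_y/width/height are percentages of the image size (0-100),
    as returned under face[i]["position"]; img_width/img_height are in pixels.
    """
    # Pixels per one percent of the image (the Obj-C code's image_width/image_height)
    px_per_pct_x = img_width * 0.01
    px_per_pct_y = img_height * 0.01
    # Shift from the face center to its top-left corner, then scale to pixels
    x = (center_x - width / 2) * px_per_pct_x
    y = (center_y - height / 2) * px_per_pct_y
    return (x, y, width * px_per_pct_x, height * px_per_pct_y)

# A face centered in a 1000x500 image, covering 20% of the width and 40% of the height
print(face_rect_px(1000, 500, 50, 50, 20, 40))  # (400.0, 150.0, 200.0, 200.0)
```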
Invoking the camera:
- (IBAction)camera:(id)sender {
    if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
        _pickerImage.delegate = self;
        _pickerImage.sourceType = UIImagePickerControllerSourceTypeCamera;
        [self presentViewController:_pickerImage animated:YES completion:nil];
    } else {
        UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"没有相机"  // "No camera available"
                                                        message:@""
                                                       delegate:nil
                                              cancelButtonTitle:@"OK!"
                                              otherButtonTitles:nil];
        [alert performSelectorOnMainThread:@selector(show) withObject:nil waitUntilDone:YES];
    }
}
Invoking the photo library:
- (IBAction)photot:(id)sender {
    if ([UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypePhotoLibrary]) {
        _pickerImage.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
        _pickerImage.mediaTypes = [UIImagePickerController availableMediaTypesForSourceType:_pickerImage.sourceType];
    }
    _pickerImage.delegate = self;
    _pickerImage.allowsEditing = NO;
    [self presentViewController:_pickerImage animated:YES completion:nil];
}
Method triggered after tapping a photo in the album, or tapping Use after taking a picture:
- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    [HUD1 show:YES];
    // Reset the result labels (age, gender, skin color, smile level)
    [self.age setText:@"年龄:"];
    [self.sex setText:@"性别:"];
    [self.fuse setText:@"肤色:"];
    [self.weixiao setText:@"微笑程度:"];
    self.glass.text = nil;

    UIImage *sourceImage = info[UIImagePickerControllerOriginalImage];
    UIImage *imageToDisplay = [self fixOrientation:sourceImage];
#warning new
//    [self performSelectorInBackground:@selector(detectWithImage:) withObject:[imageToDisplay retain]];
    [self performSelectorInBackground:@selector(detectWithImage:) withObject:imageToDisplay];
    [_imageView setImage:sourceImage];
    [picker dismissViewControllerAnimated:YES completion:nil];
}
Method called when Cancel is tapped:
- (void)imagePickerControllerDidCancel:(UIImagePickerController *)picker {
    [self.age setText:@"年龄:"];
    [self.sex setText:@"性别:"];
    [self.fuse setText:@"肤色:"];
    [self.weixiao setText:@"微笑程度:"];
    self.glass.text = nil;
    [picker dismissViewControllerAnimated:YES completion:nil];
}
- (void)face:(NSDictionary *)dic {
    NSArray *arr = [dic objectForKey:@"face"];
    NSDictionary *dic1 = [arr objectAtIndex:0];
    NSDictionary *dic2 = [dic1 objectForKey:@"attribute"];
    // Age
    NSDictionary *dic3 = [dic2 objectForKey:@"age"];
    NSString *ageValue = [dic3 objectForKey:@"value"];
    NSString *ageRange = [dic3 objectForKey:@"range"];
    // Gender
    NSDictionary *dic4 = [dic2 objectForKey:@"gender"];
    NSString *sex = [dic4 objectForKey:@"value"];
    // Race (skin color)
    NSDictionary *dic5 = [dic2 objectForKey:@"race"];
    NSString *renzhong = [dic5 objectForKey:@"value"];
    // Smile
    NSDictionary *dic6 = [dic2 objectForKey:@"smiling"];
    NSString *weixiao = [dic6 objectForKey:@"value"];
    // Glasses
    NSDictionary *dic7 = [dic2 objectForKey:@"glass"];
    NSString *glass = [dic7 objectForKey:@"value"];

    [self.age setText:[NSString stringWithFormat:@"年龄:%@(+-%@)", ageValue, ageRange]];  // age
    if ([sex isEqualToString:@"Male"]) {
        [self.sex setText:@"性别:男"];  // gender: male
    } else {
        [self.sex setText:@"性别:女"];  // gender: female
    }
    if ([renzhong isEqualToString:@"Asian"]) {
        self.fuse.text = @"肤色:黄种人";  // race: Asian
    } else if ([renzhong isEqualToString:@"White"]) {
        self.fuse.text = @"肤色:白种人";  // race: White
    } else {
        self.fuse.text = @"肤色:黑种人";  // race: Black
    }
    [self.weixiao setText:[NSString stringWithFormat:@"微笑程度:%@", weixiao]];  // smile level
    if ([glass isEqualToString:@"Normal"]) {
        if ([sex isEqualToString:@"Male"]) {
            self.glass.text = @"文艺眼镜男";  // "stylish guy with glasses"
        } else {
            self.glass.text = @"文艺眼镜女";  // "stylish girl with glasses"
        }
    }
}
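The attribute lookup in face: walks content["face"][0]["attribute"] key by key. The same traversal in Python, against a made-up sample response shaped like the fields the method reads (all values invented for illustration):

```python
def parse_attributes(content):
    # Read the first detected face's attributes, mirroring the face: method
    attr = content["face"][0]["attribute"]
    return {
        "age": f'{attr["age"]["value"]} (+-{attr["age"]["range"]})',
        "gender": "male" if attr["gender"]["value"] == "Male" else "female",
        "race": attr["race"]["value"],
        "smiling": attr["smiling"]["value"],
    }

# Sample response fragment with the same nesting the Obj-C code expects
sample = {"face": [{"attribute": {
    "age": {"value": 25, "range": 5},
    "gender": {"value": "Male"},
    "race": {"value": "Asian"},
    "smiling": {"value": 87.3},
}}]}
print(parse_attributes(sample))
```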
Attribute analysis of the image currently displayed (button action):

- (IBAction)shuxing:(id)sender {
    FaceppResult *result = [[FaceppAPI detection] detectWithURL:nil
                                                    orImageData:UIImageJPEGRepresentation(_imageView.image, 1)
                                                           mode:FaceppDetectionModeNormal
                                                      attribute:FaceppDetectionAttributeAll
                                                            tag:nil
                                                          async:NO];
    if ([result success]) {
        NSDictionary *dic = [NSDictionary dictionaryWithDictionary:[result content]];
        NSArray *arr = [dic objectForKey:@"face"];
        NSDictionary *dic1 = [arr objectAtIndex:0];
        NSDictionary *dic2 = [dic1 objectForKey:@"attribute"];
        // Age
        NSDictionary *dic3 = [dic2 objectForKey:@"age"];
        NSString *ageValue = [dic3 objectForKey:@"value"];
        NSString *ageRange = [dic3 objectForKey:@"range"];
        // Gender
        NSDictionary *dic4 = [dic2 objectForKey:@"gender"];
        NSString *sex = [dic4 objectForKey:@"value"];
        // Race (skin color)
        NSDictionary *dic5 = [dic2 objectForKey:@"race"];
        NSString *renzhong = [dic5 objectForKey:@"value"];
        // Smile
        NSDictionary *dic6 = [dic2 objectForKey:@"smiling"];
        NSString *weixiao = [dic6 objectForKey:@"value"];

        [self.age setText:[NSString stringWithFormat:@"年龄:%@(+-%@)", ageValue, ageRange]];  // age
        if ([sex isEqualToString:@"Male"]) {
            [self.sex setText:@"性别:男"];  // gender: male
        } else {
            [self.sex setText:@"性别:女"];  // gender: female
        }
        if ([renzhong isEqualToString:@"Asian"]) {
            self.fuse.text = @"肤色:黄种人";  // race: Asian
        } else if ([renzhong isEqualToString:@"White"]) {
            self.fuse.text = @"肤色:白种人";  // race: White
        } else {
            self.fuse.text = @"肤色:黑种人";  // race: Black
        }
        [self.weixiao setText:[NSString stringWithFormat:@"微笑程度:%@", weixiao]];  // smile level
    }
}