Machine Learning: Processing Image Data and Recognizing the Sentiment Behind Images
Researchers at the University of Rochester have trained computers to digest image data and recognize the sentiment hidden behind pictures. They treat this as an image classification problem: by crawling massive amounts of data, the computer uses an algorithm to assign candidate emotion labels to images, then relies on crowd intelligence to identify the best of those labels; in this way, the computer gradually learns how to determine the sentiment a photo conveys.
Log on to Twitter, Facebook or other social media and you will find that much of the content shared with you comes in the form of images, not just words. Those images can convey a lot more than a sentence might, and will often provoke emotions in the viewer.
Jiebo Luo, professor of computer science at the University of Rochester, in collaboration with researchers at Adobe Research, has come up with a more accurate way than was previously possible to train computers to digest data that comes in the form of images.
In a paper presented last week at the Association for the Advancement of Artificial Intelligence (AAAI) conference in Austin, Texas, they describe what they refer to as a progressively trained deep convolutional neural network (CNN).
The trained computer can then be used to determine what sentiments these images are likely to elicit. Luo says that this information could be useful for things as diverse as measuring economic indicators or predicting elections.
Sentiment analysis of text by computers is itself a challenging task. And in social media, sentiment analysis is more complicated because many people express themselves using images and videos, which are more difficult for a computer to understand.
For example, during a political campaign voters will often share their views through pictures. Two different pictures might show the same candidate, but they might be making very different political statements. A human could recognize one as being a positive portrait of the candidate (e.g. the candidate smiling and raising his arms) and the other one being negative (e.g. a picture of the candidate looking defeated). But no human could look at every picture shared on social media – it is truly “big data.” To be able to make informed guesses about a candidate’s popularity, computers need to be trained to digest this data, which is what Luo and his collaborators’ approach can do more accurately than was possible until now.
The researchers treat the task of extracting sentiments from images as an image classification problem. This means that somehow each picture needs to be analyzed and labels applied to it.
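The classification framing can be made concrete with a minimal sketch: a model maps an image's features to a probability distribution over sentiment classes and picks the most likely label. The two-class label set, the raw scores, and the function names here are illustrative placeholders, not the architecture from the paper.

```python
import math

# Hypothetical sentiment label space; the paper's label set may differ.
CLASSES = ["positive", "negative"]

def softmax(scores):
    """Convert raw per-class scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(scores):
    """Return the most probable sentiment label and its probability."""
    probs = softmax(scores)
    best = max(range(len(CLASSES)), key=lambda i: probs[i])
    return CLASSES[best], probs[best]
```

In the actual system these scores would come from the final layer of the CNN; the point of the sketch is only that "extracting sentiment" reduces to assigning each image a probability per label.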
To begin the training process, Luo and his collaborators used a huge number of Flickr images that had been loosely labeled by a machine algorithm with specific sentiments, drawn from an existing database known as SentiBank (developed by Professor Shih-Fu Chang's group at Columbia University). This gives the computer a starting point for understanding what some images can convey. But the machine-generated labels also include a likelihood of each label being true — that is, how confident the algorithm is that the label is correct. The key step of the training process comes next: they discard any images whose assigned sentiment labels might not be true, and use only the "better"-labeled images for further training, in a progressively improving manner within the framework of the convolutional neural network. They found that this extra step significantly improved the accuracy of the sentiment labels assigned to each picture.
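The filtering step described above can be sketched as follows. This is a simplified illustration under stated assumptions — the data layout (image id, label, confidence) and the specific thresholds are hypothetical, and the actual paper interleaves filtering with CNN retraining rather than just filtering a list.

```python
def filter_confident(samples, threshold):
    """Keep only (image_id, label, confidence) triples whose
    machine-generated label meets the confidence threshold."""
    return [s for s in samples if s[2] >= threshold]

def progressive_rounds(samples, thresholds=(0.5, 0.7, 0.9)):
    """Each round tightens the confidence cutoff, mimicking progressive
    training on an increasingly clean subset of the weakly labeled data.
    Returns the surviving subset after each round."""
    subsets = []
    for t in thresholds:
        samples = filter_confident(samples, t)
        subsets.append(list(samples))
    return subsets
```

Between rounds, the real system would retrain the network on the surviving subset, so later rounds learn from progressively cleaner labels.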
They also adapted this sentiment analysis engine to images extracted from Twitter. In this case they employed "crowd intelligence," with multiple people helping to categorize the images via the Amazon Mechanical Turk platform. They used only a small number of images for fine-tuning the computer, and yet, by applying this domain-adaptation process, they showed they could improve on state-of-the-art methods for sentiment analysis of Twitter images. One surprising finding is that the accuracy of image sentiment classification exceeded that of text sentiment classification on the same Twitter messages.
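One common way to turn multiple Mechanical Turk judgments into a single training label is simple majority voting; the sketch below shows that aggregation step. The article does not specify how the authors combined worker votes, so treat this as one plausible approach rather than their method.

```python
from collections import Counter

def crowd_label(worker_votes):
    """Aggregate several workers' sentiment votes for one image by
    majority vote. With a tie, the first-seen label wins (a limitation
    of this simple scheme; real pipelines may require a clear majority)."""
    vote, _ = Counter(worker_votes).most_common(1)[0]
    return vote
```

The resulting crowd-verified labels form the small, clean fine-tuning set used to adapt the Flickr-trained network to the Twitter domain.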
Luo’s co-authors on the paper, “Robust Image Sentiment Analysis using Progressively Trained and Domain Transferred Deep Networks,” are Quanzeng You, Hailin Jin, and Jianchao Yang. The paper was presented at the 29th AAAI Conference on Artificial Intelligence in Austin, Texas, from Jan. 25-30, 2015. The paper can be downloaded here: http://www.cs.rochester.edu/u/qyou/papers/sentiment_analysis_final.pdf.