Removing stop words —— Python Data Science Cookbook
In text processing, we are interested in the words or phrases that help us differentiate a given text from the other texts in the corpus. Let's call these words or phrases key phrases. Every text mining application needs a way to identify key phrases. An information retrieval application needs key phrases for easy retrieval and ranking of search results. A text classification system needs key phrases as the features that are fed to a classifier. This is where stop words come into the picture. "Sometimes, some extremely common words which would appear to be of little value in helping select documents matching a user need are excluded from the vocabulary entirely. These words are called stop words." (Introduction to Information Retrieval, Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze)
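A quick way to see why such common words carry little discriminative value is to count token frequencies: in almost any English passage, the most frequent tokens are stop words. Here is a minimal sketch using only the standard library (the sentence is an arbitrary illustration, not from the recipe):

```python
from collections import Counter

# An arbitrary example sentence; note how often "the" appears
sentence = ("the cat sat on the mat and the dog sat by the door "
            "because the cat and the dog were tired")

counts = Counter(sentence.split())
print(counts.most_common(3))  # the stop word "the" dominates the counts
```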
Tip
Remember that stop word removal is contextual and depends on the application. If you are working on a sentiment analysis application over mobile or chat room text, emoticons are highly useful; you don't remove them, as they form a very good feature set for the downstream machine learning application. Typically, the frequency of stop words in a document is very high. However, other words in your corpus may also have a very high frequency; depending on your context, you can add them to your stop word list.

The Python NLTK library provides us with a default stop word corpus that we can leverage, as follows (if the corpora are not yet installed, run nltk.download('stopwords') once; the example script below also needs nltk.download('punkt') for word_tokenize):
>>> from nltk.corpus import stopwords
>>> stopwords.words('english')
[u'i', u'me', u'my', u'myself', u'we', u'our', u'ours', u'ourselves',
 u'you', u'your', u'yours', u'yourself', u'yourselves', u'he', u'him',
 u'his', u'himself', u'she', u'her', u'hers', u'herself', u'it', u'its',
 u'itself', u'they', u'them', u'their', u'theirs', u'themselves', u'what',
 u'which', u'who', u'whom', u'this', u'that', u'these', u'those', u'am',
 u'is', u'are', u'was', u'were', u'be', u'been', u'being', u'have', u'has',
 u'had', u'having', u'do', u'does', u'did', u'doing', u'a', u'an', u'the',
 u'and', u'but', u'if', u'or', u'because', u'as', u'until', u'while', u'of',
 u'at', u'by', u'for', u'with', u'about', u'against', u'between', u'into',
 u'through', u'during', u'before', u'after', u'above', u'below', u'to',
 u'from', u'up', u'down', u'in', u'out', u'on', u'off', u'over', u'under',
 u'again', u'further', u'then', u'once', u'here', u'there', u'when',
 u'where', u'why', u'how', u'all', u'any', u'both', u'each', u'few',
 u'more', u'most', u'other', u'some', u'such', u'no', u'nor', u'not',
 u'only', u'own', u'same', u'so', u'than', u'too', u'very', u's', u't',
 u'can', u'will', u'just', u'don', u'should', u'now']
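One practical note: stopwords.words('english') returns a plain list, so each `w not in stop_words` test is a linear scan. Converting the list to a set makes membership tests effectively constant time, which matters on large corpora. A sketch with a small hand-written stand-in list, so it runs without the NLTK corpus:

```python
# A few entries standing in for the full NLTK English stop word list
stop_list = ['i', 'me', 'my', 'the', 'a', 'an', 'and', 'is', 'are']

# set() turns each membership test into an O(1) hash lookup
stop_set = set(stop_list)

tokens = ['the', 'cat', 'is', 'on', 'a', 'mat']
filtered = [t for t in tokens if t not in stop_set]
print(filtered)  # ['cat', 'on', 'mat']
```

In the recipe's code, the same change is just `stop_words = set(stopwords.words('english'))`.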
Example:

#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
@author: snaildove
"""
# Load libraries
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
import string

text = ("Text mining, also referred to as text data mining, roughly equivalent "
        "to text analytics, refers to the process of deriving high-quality "
        "information from text. High-quality information is typically derived "
        "through the devising of patterns and trends through means such as "
        "statistical pattern learning. Text mining usually involves the process "
        "of structuring the input text (usually parsing, along with the addition "
        "of some derived linguistic features and the removal of others, and "
        "subsequent insertion into a database), deriving patterns within the "
        "structured data, and finally evaluation and interpretation of the "
        "output. 'High quality' in text mining usually refers to some "
        "combination of relevance, novelty, and interestingness. Typical text "
        "mining tasks include text categorization, text clustering, "
        "concept/entity extraction, production of granular taxonomies, "
        "sentiment analysis, document summarization, and entity relation "
        "modeling (i.e., learning relations between named entities). Text "
        "analysis involves information retrieval, lexical analysis to study "
        "word frequency distributions, pattern recognition, tagging/annotation, "
        "information extraction, data mining techniques including link and "
        "association analysis, visualization, and predictive analytics. The "
        "overarching goal is, essentially, to turn text into data for analysis, "
        "via application of natural language processing (NLP) and analytical "
        "methods. A typical application is to scan a set of documents written "
        "in a natural language and either model the document set for predictive "
        "classification purposes or populate a database or search index with "
        "the information extracted.")

# Let's now demonstrate the stop word removal process.
# 1. Tokenize the input text into words using the word_tokenize function;
#    `words` is then a list of all the tokens in the input.
words = word_tokenize(text)

# 2. Get the list of stop words from the NLTK English stop word corpus.
stop_words = stopwords.words('english')
print "Number of words = %d"%(len(words))

# 3. Filter out the stop words.
words = [w for w in words if w not in stop_words]
print "Number of words,without stop words = %d"%(len(words))

# 4. Run another list comprehension to remove punctuation from the words.
words = [w for w in words if w not in string.punctuation]
print "Number of words,without stop words and punctuations = %d"%(len(words))

Output:

Number of words = 257
Number of words,without stop words = 193
Number of words,without stop words and punctuations = 155
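One detail worth noting: the NLTK stop word list is all lowercase, so capitalized tokens such as "The" at the start of a sentence survive the filter above. Comparing against the lowercased token fixes this. The sketch below uses a tiny stand-in stop set so it runs without NLTK:

```python
# Tiny stand-in for stopwords.words('english'), which is all lowercase
stop_set = {'the', 'of', 'a', 'to', 'is'}

tokens = ['The', 'process', 'of', 'deriving', 'information', 'is', 'key']

# Lowercase each token before the membership test so 'The' is caught too
filtered = [t for t in tokens if t.lower() not in stop_set]
print(filtered)  # ['process', 'deriving', 'information', 'key']
```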
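As the tip suggests, corpus-specific high-frequency words can be folded into the stop list. With sets this is a one-line union; in the sketch below both the base list and the domain words ('text', 'mining') are illustrative stand-ins, not the real NLTK corpus:

```python
base_stop_words = {'the', 'a', 'of', 'and'}     # stand-in for the NLTK list
domain_stop_words = {'text', 'mining'}          # hypothetical high-frequency words in this corpus

stop_set = base_stop_words | domain_stop_words  # set union extends the stop list

tokens = ['text', 'mining', 'derives', 'patterns', 'from', 'the', 'data']
kept = [t for t in tokens if t not in stop_set]
print(kept)  # ['derives', 'patterns', 'from', 'data']
```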