Text Analysis - Preprocessing: the Python Text Analysis Tool NLTK


  • The most commonly used Python library in the NLP field
  • Open-source project
  • Ships with classification, tokenization, and other built-in functionality
  • Strong community support
pip install nltk

# Install the corpora
import nltk
nltk.download()
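
Calling nltk.download() with no arguments opens the interactive downloader; alternatively, the specific resources used in the rest of this article (named in the code comments below) can be fetched directly — a minimal sketch:

import nltk

# Download only the resources used in this article
for resource in ['brown', 'punkt', 'wordnet', 'stopwords', 'averaged_perceptron_tagger']:
    nltk.download(resource)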

Typical text preprocessing pipeline

[Figure: typical text preprocessing pipeline]

Corpus

nltk.corpus

import nltk
from nltk.corpus import brown  # requires downloading the brown corpus
# The Brown University corpus

# List the categories included in the corpus
print(brown.categories())

# Inspect the brown corpus
print('There are {} sentences in total'.format(len(brown.sents())))
print('There are {} words in total'.format(len(brown.words())))
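
The corpus can also be sliced by category; a small follow-up sketch, using the 'news' value returned by brown.categories():

from nltk.corpus import brown

# Count sentences and words in a single category
news_sents = brown.sents(categories='news')
news_words = brown.words(categories='news')
print('news category: {} sentences, {} words'.format(len(news_sents), len(news_words)))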

Tokenization (tokenize)

Splitting a sentence into words that are meaningful in terms of linguistic semantics.

English tokenization: words are naturally delimited by spaces.

Chinese tokenization tool: jieba (结巴分词).

sentence = "Python is a widely used high-level programming language for general-purpose programming."tokens = nltk.word_tokenize(sentence) # 需要下载punkt分词模型print(tokens)"""['Python', 'is', 'a', 'widely', 'used', 'high-level', 'programming', 'language', 'for', 'general-purpose', 'programming', '.']"""# 安装 pip install jiebaimport jiebaseg_list = jieba.cut("欢迎来到小象学院", cut_all=True)print("全模式: " + "/ ".join(seg_list))  # 全模式seg_list = jieba.cut("欢迎来到小象学院", cut_all=False)print("精确模式: " + "/ ".join(seg_list))  # 精确模式"""全模式: 欢迎/ 迎来/ 来到/ 小象/ 学院精确模式: 欢迎/ 来到/ 小/ 象/ 学院"""

Word form normalization

Stemming

look, looked, looking

  • Stemming: strip affixes such as -ing and -ed, keeping only the word stem

Leaving the different surface forms of a word unnormalized affects the accuracy of learning from the corpus.

# PorterStemmer
from nltk.stem.porter import PorterStemmer

porter_stemmer = PorterStemmer()
print(porter_stemmer.stem('looked'))
print(porter_stemmer.stem('looking'))  # look

# SnowballStemmer
from nltk.stem import SnowballStemmer

snowball_stemmer = SnowballStemmer('english')
print(snowball_stemmer.stem('looked'))
print(snowball_stemmer.stem('looking'))

# LancasterStemmer
from nltk.stem.lancaster import LancasterStemmer

lancaster_stemmer = LancasterStemmer()
print(lancaster_stemmer.stem('looked'))
print(lancaster_stemmer.stem('looking'))

Lemmatization

  • Merge the various inflected forms of a word into a single canonical form
from nltk.stem import WordNetLemmatizer  # requires downloading the wordnet corpus

wordnet_lematizer = WordNetLemmatizer()
print(wordnet_lematizer.lemmatize('cats'))
print(wordnet_lematizer.lemmatize('boxes'))
print(wordnet_lematizer.lemmatize('are'))
print(wordnet_lematizer.lemmatize('went'))
"""
cat
box
are
went
"""

# Specifying the part of speech makes lemmatization more accurate
# lemmatize defaults to treating words as nouns
print(wordnet_lematizer.lemmatize('are', pos='v'))
print(wordnet_lematizer.lemmatize('went', pos='v'))
"""
be
go
"""

Part-of-speech tagging (Part-Of-Speech)

import nltk

words = nltk.word_tokenize('Python is a widely used programming language.')
print(nltk.pos_tag(words))  # requires downloading averaged_perceptron_tagger
"""
[('Python', 'NNP'), ('is', 'VBZ'), ('a', 'DT'), ('widely', 'RB'), ('used', 'VBN'), ('programming', 'NN'), ('language', 'NN'), ('.', '.')]
"""
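
These tags can feed back into lemmatization: the pos='v' argument shown earlier can be derived from the tagger output. A sketch of that bridge — the penn_to_wordnet helper below is illustrative, not an NLTK function:

import nltk
from nltk.stem import WordNetLemmatizer

def penn_to_wordnet(tag):
    # Map Penn Treebank tag prefixes to the WordNet POS codes accepted by lemmatize()
    if tag.startswith('V'):
        return 'v'   # verb
    if tag.startswith('J'):
        return 'a'   # adjective
    if tag.startswith('R'):
        return 'r'   # adverb
    return 'n'       # default: noun

lemmatizer = WordNetLemmatizer()
tagged = nltk.pos_tag(nltk.word_tokenize('Python is a widely used programming language.'))
print([lemmatizer.lemmatize(word, pos=penn_to_wordnet(tag)) for word, tag in tagged])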

Stop-word removal

To save storage space and improve search efficiency, certain characters or words are automatically filtered out in NLP.

Chinese stop-word lists:
  • The Chinese stop-word library (中文停用词库)
  • Harbin Institute of Technology (HIT) stop-word list
  • Sichuan University Machine Intelligence Laboratory stop-word list
  • Baidu stop-word list

Removing stop words with NLTK
stopwords.words()

from nltk.corpus import stopwords  # requires downloading stopwords

filtered_words = [word for word in words if word not in stopwords.words('english')]
print('Original words:', words)
print('After stop-word removal:', filtered_words)
"""
Original words: ['Python', 'is', 'a', 'widely', 'used', 'programming', 'language', '.']
After stop-word removal: ['Python', 'widely', 'used', 'programming', 'language', '.']
"""
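
For Chinese text, the stop-word lists mentioned above are usually distributed as plain-text files with one word per line; a minimal sketch of loading such a file and filtering jieba output (the file name stopwords_zh.txt is a placeholder):

import jieba

# Load a local Chinese stop-word file, one word per line (the path is hypothetical)
with open('stopwords_zh.txt', encoding='utf-8') as f:
    zh_stopwords = set(line.strip() for line in f)

tokens = jieba.cut("欢迎来到小象学院", cut_all=False)
print([t for t in tokens if t not in zh_stopwords])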

Code for a typical text preprocessing pipeline

import nltk
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords

# Raw text
raw_text = 'Life is like a box of chocolates. You never know what you\'re gonna get.'

# Tokenization
raw_words = nltk.word_tokenize(raw_text)

# Word form normalization
wordnet_lematizer = WordNetLemmatizer()
words = [wordnet_lematizer.lemmatize(raw_word) for raw_word in raw_words]

# Stop-word removal
filtered_words = [word for word in words if word not in stopwords.words('english')]

print('Raw text:', raw_text)
print('Preprocessed result:', filtered_words)
"""
Raw text: Life is like a box of chocolates. You never know what you're gonna get.
Preprocessed result: ['Life', 'like', 'box', 'chocolate', '.', 'You', 'never', 'know', "'re", 'gon', 'na', 'get', '.']
"""
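
Note that 'You' survives the filter because NLTK's English stop-word list is all lowercase; a small variation that lowercases tokens before the comparison:

import nltk
from nltk.corpus import stopwords

raw_text = "Life is like a box of chocolates. You never know what you're gonna get."
words = nltk.word_tokenize(raw_text)

# Compare lowercased tokens against the stop-word list so capitalized stop words are removed too
english_stopwords = set(stopwords.words('english'))
print([word for word in words if word.lower() not in english_stopwords])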
