Using NLTK


Sentence Tokenize (splitting text into sentences)

1. Using sent_tokenize directly

from sklearn.datasets import fetch_20newsgroups
from nltk.tokenize import sent_tokenize

# Load the 20 Newsgroups training set and take the first document as sample text
news = fetch_20newsgroups(subset='train')
X, y = news.data, news.target
text = X[0]

# Split the document into a list of sentences
sent_tokenize_list = sent_tokenize(text)
print(sent_tokenize_list)
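Note that sent_tokenize relies on NLTK's pretrained Punkt model, so the corresponding data package has to be downloaded once before first use. A minimal sketch (on recent NLTK versions the package may instead be named 'punkt_tab'):

import nltk

# One-time download of the Punkt sentence tokenizer data
nltk.download('punkt')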
2. Using nltk.tokenize.punkt, which contains many pretrained tokenizer models

from sklearn.datasets import fetch_20newsgroups
from bs4 import BeautifulSoup
import nltk

# Load the 20 Newsgroups training set and take the first document
news = fetch_20newsgroups(subset='train')
X, y = news.data, news.target
news = X[0]
print(news)

# Strip any HTML markup from the raw post
news_text = BeautifulSoup(news, 'html.parser').get_text()
print(news_text)

# Load the pretrained English Punkt model and split the text into sentences
tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
raw_sentences = tokenizer.tokenize(news_text)
print(raw_sentences)
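The same mechanism works for other languages, since the Punkt package ships pretrained models for several of them. A minimal sketch using the German model (assuming the Punkt data has already been downloaded):

import nltk

# Punkt provides pretrained models for languages besides English
german_tokenizer = nltk.data.load('tokenizers/punkt/german.pickle')
sentences = german_tokenizer.tokenize('Guten Tag. Wie geht es Ihnen?')
print(sentences)  # expected: ['Guten Tag.', 'Wie geht es Ihnen?']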

Word Tokenize (splitting sentences into words)

1. Using word_tokenize

from nltk.tokenize import word_tokenize

text = 'The cat is walking in the bedroom.'
# Split the sentence into word and punctuation tokens
word_tokenize_list = word_tokenize(text)
print(word_tokenize_list)
# ['The', 'cat', 'is', 'walking', 'in', 'the', 'bedroom', '.']
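Unlike a plain str.split, word_tokenize also separates punctuation and English contractions into their own tokens; a small illustration:

from nltk.tokenize import word_tokenize

# Contractions are split into their component tokens
print(word_tokenize("Don't hesitate to ask questions!"))
# expected: ['Do', "n't", 'hesitate', 'to', 'ask', 'questions', '!']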


Part-Of-Speech Tagging and POS Tagger (tagging words)

import nltk
from nltk.tokenize import word_tokenize

text = 'The cat is walking in the bedroom.'
word_tokenize_list = word_tokenize(text)
print(word_tokenize_list)

# Tag each token with its Penn Treebank part-of-speech tag
# (requires: nltk.download('averaged_perceptron_tagger'))
pos_tags = nltk.pos_tag(word_tokenize_list)
print(pos_tags)
# [('The', 'DT'), ('cat', 'NN'), ('is', 'VBZ'), ('walking', 'VBG'),
#  ('in', 'IN'), ('the', 'DT'), ('bedroom', 'NN'), ('.', '.')]
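The tags follow the Penn Treebank tag set, and NLTK can print a description of any tag. A minimal sketch (assuming the 'tagsets' data package has been downloaded with nltk.download('tagsets')):

import nltk

# Look up what a Penn Treebank tag such as VBG means
nltk.help.upenn_tagset('VBG')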

Stemming (extracting word stems)

import nltk

sent1 = 'The cat is walking in the bedroom.'
sent2 = 'A dog was running across the kitchen.'

tokens_1 = nltk.word_tokenize(sent1)
print(tokens_1)

# Reduce each token to its stem with the Porter stemmer
stemmer = nltk.stem.PorterStemmer()
stem_1 = [stemmer.stem(t) for t in tokens_1]
print(stem_1)
# ['the', 'cat', 'is', 'walk', 'in', 'the', 'bedroom', '.']

# The second sentence, for comparison: 'running' reduces to 'run'
tokens_2 = nltk.word_tokenize(sent2)
stem_2 = [stemmer.stem(t) for t in tokens_2]
print(stem_2)
# ['a', 'dog', 'wa', 'run', 'across', 'the', 'kitchen', '.']
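Besides the Porter stemmer, nltk.stem also provides the Snowball stemmer, which supports several languages; a minimal sketch:

from nltk.stem import SnowballStemmer

# Snowball ("Porter2") stemmer for English
snowball = SnowballStemmer('english')
print(snowball.stem('running'))     # expected: 'run'
print(snowball.stem('generously'))  # expected: 'generous'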


