Using NLTK's collocation features to extract Chinese phrases and domain terms, improving word-segmentation accuracy

# Discover bigram and trigram collocations with nltk + jieba.
import jieba
import nltk
from nltk.collocations import BigramCollocationFinder, TrigramCollocationFinder
train_corpus = "测试数据库,用户支付表,支付金额,支付用户,测试数据库,用户支付表,支付金额,支付用户"
bigram_measures = nltk.collocations.BigramAssocMeasures()
trigram_measures = nltk.collocations.TrigramAssocMeasures()

finder = BigramCollocationFinder.from_words(jieba.cut(train_corpus))
finder.apply_word_filter(lambda w: w in {',', '.', ',', '。'})  # drop punctuation tokens
print(finder.nbest(bigram_measures.pmi, 10))

finder = TrigramCollocationFinder.from_words(jieba.cut(train_corpus))
finder.apply_word_filter(lambda w: w in {',', '.', ',', '。'})  # drop punctuation tokens
print(finder.nbest(trigram_measures.pmi, 10))


# Discover bigram collocations with gensim + jieba.
import jieba
import gensim


mddesc = ['测试数据库', '用户支付表', '支付金额', '支付用户']
train_corpus = []
for desc in mddesc:
    # Append each segmented description twice so bigram counts clear
    # the threshold in this tiny corpus.
    train_corpus.append(list(jieba.cut(desc)))
    train_corpus.append(list(jieba.cut(desc)))


# Set the parameters (min_count, threshold) carefully when using a small corpus.
phrases = gensim.models.phrases.Phrases(train_corpus, min_count=1, threshold=0.1)
bigram = gensim.models.phrases.Phraser(phrases)
sentence = "从用户支付表中选择支付金额大于5的用户。"
tokens = list(jieba.cut(sentence))
repl = [s.replace("_", "") for s in bigram[tokens]]  # rejoin the "w1_w2" phrase markers
print(repl)


References:
https://radimrehurek.com/gensim/models/phrases.html
http://www.nltk.org/howto/collocations.html
http://blog.sina.com.cn/s/blog_630c58cb0100vkix.html
http://nullege.com/codes/search/nltk.metrics.TrigramAssocMeasures

Conclusion: NLTK makes it easy to discover bigram and trigram collocations, which is useful for finding common phrases, domain terms, and new words. gensim can do this too, but it is somewhat cumbersome and not gensim's strength.