[DL] An N-gram Language Model in PyTorch
First a minimal `nn.Embedding` lookup demo, then a trigram language model trained on Shakespeare's Sonnet 2:

```python
import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

torch.manual_seed(1)

# A minimal embedding lookup: a 2-word vocabulary, 5-dimensional vectors.
# (In PyTorch >= 0.4, autograd.Variable is a no-op wrapper; plain tensors also work.)
word2index = {'a': 0, 'b': 1}
embeds = nn.Embedding(2, 5)
a_embed = embeds(autograd.Variable(torch.LongTensor([word2index['a']])))
print(a_embed)

CONTEXT_SIZE = 2
EMBEDDING_DIM = 10
# We will use Shakespeare Sonnet 2
test_sentence = """When forty winters shall besiege thy brow,
And dig deep trenches in thy beauty's field,
Thy youth's proud livery so gazed on now,
Will be a totter'd weed of small worth held:
Then being asked, where all thy beauty lies,
Where all the treasure of thy lusty days;
To say, within thine own deep sunken eyes,
Were an all-eating shame, and thriftless praise.
How much more praise deserv'd thy beauty's use,
If thou couldst answer 'This fair child of mine
Shall sum my count, and make my old excuse,'
Proving his beauty by succession thine!
This were to be new made when thou art old,
And see thy blood warm when thou feel'st it cold.""".split()
# We should tokenize the input, but we will ignore that for now.
# Build a list of tuples. Each tuple is ([word_i-2, word_i-1], target word).
trigrams = [([test_sentence[i], test_sentence[i + 1]], test_sentence[i + 2])
            for i in range(len(test_sentence) - 2)]
# Print the first 3, just so you can see what they look like.
print(trigrams[:3])

vocab = set(test_sentence)
word_to_ix = {word: i for i, word in enumerate(vocab)}


class NGramLanguageModeler(nn.Module):
    def __init__(self, vocab_size, embedding_dim, context_size):
        super(NGramLanguageModeler, self).__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear1 = nn.Linear(context_size * embedding_dim, 128)
        self.linear2 = nn.Linear(128, vocab_size)

    def forward(self, inputs):
        # print('inputs:', inputs)  # uncomment to inspect the context indices
        embeds = self.embeddings(inputs).view((1, -1))
        out = F.relu(self.linear1(embeds))
        out = self.linear2(out)
        log_probs = F.log_softmax(out, dim=1)  # dim is required in PyTorch >= 0.4
        return log_probs


losses = []
loss_function = nn.NLLLoss()
model = NGramLanguageModeler(len(vocab), EMBEDDING_DIM, CONTEXT_SIZE)
optimizer = optim.SGD(model.parameters(), lr=0.001)

for epoch in range(10):
    total_loss = torch.Tensor([0])
    for context, target in trigrams:
        # Step 1. Prepare the inputs to be passed to the model (i.e., turn the
        # words into integer indices and wrap them in variables).
        context_idxs = [word_to_ix[w] for w in context]
        context_var = autograd.Variable(torch.LongTensor(context_idxs))

        # Step 2. Recall that torch *accumulates* gradients. Before passing in
        # a new instance, you need to zero out the gradients from the old one.
        model.zero_grad()

        # Step 3. Run the forward pass, getting log probabilities over next words.
        log_probs = model(context_var)

        # Step 4. Compute your loss function. (Again, Torch wants the target
        # word wrapped in a variable.)
        loss = loss_function(log_probs, autograd.Variable(
            torch.LongTensor([word_to_ix[target]])))

        # Step 5. Do the backward pass and update the parameters.
        loss.backward()
        optimizer.step()

        total_loss += loss.data
    losses.append(total_loss)
print(losses)  # The loss decreases every epoch over the training data!

# Predict the next word for the context made of vocabulary indices 1 and 2.
tensor = torch.LongTensor(2)
tensor[0] = 1
tensor[1] = 2
print(torch.max(model(torch.autograd.Variable(tensor)), 1))
```
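The final `print` only shows the raw `(max log-probability, index)` pair for an arbitrary pair of indices. As a usage example, here is a minimal sketch that maps a real two-word context back to the predicted word. It assumes PyTorch >= 0.4 for `.item()`; `ix_to_word` and `predict_next` are helpers introduced here, not part of the original post:

```python
# Hypothetical helper: invert word_to_ix so a predicted index maps back to a word.
ix_to_word = {ix: word for word, ix in word_to_ix.items()}

def predict_next(context_words):
    """Return the most probable next word for a two-word context."""
    idxs = torch.LongTensor([word_to_ix[w] for w in context_words])
    log_probs = model(autograd.Variable(idxs))
    _, best_ix = torch.max(log_probs, 1)
    return ix_to_word[best_ix.item()]

print(predict_next(['When', 'forty']))  # the training target for this context is 'winters'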
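The training loop only reports the summed NLL per epoch. A common sanity check for a language model, not in the original post, is per-word perplexity, i.e. `exp` of the mean negative log-likelihood; a minimal sketch assuming the `losses` and `trigrams` built above:

```python
import math

# Each entry of `losses` is the NLL summed over all trigrams in one epoch,
# so divide by the number of trigrams before exponentiating.
for epoch, total in enumerate(losses):
    mean_nll = float(total) / len(trigrams)
    print('epoch %d: perplexity %.2f' % (epoch, math.exp(mean_nll)))
```

Perplexity should fall toward 1 as the model memorizes this single sonnet; on held-out text it would plateau much higher.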