A strange error in TensorFlow


This is really just the simplest use of bidirectional_dynamic_rnn, applied to a named entity recognition problem.
The most basic framework I built (TensorFlow 0.11):

class Bilstm_Model:
    def __init__(self, biconfig, embedId, embedding):
        self.biconfig = biconfig
        self.embedId = embedId
        self.embedding = np.array(embedding)
        # bilstm input and output
        self.inputs = tf.placeholder(tf.int32, [None, self.biconfig.num_steps, self.biconfig.input_size])
        self.length = tf.reduce_sum(tf.sign(self.inputs), reduction_indices=1)
        self.length = tf.cast(self.length, tf.int32)
        # self.inputs = tf.nn.embedding_lookup(self.embedding, self.inputs)
        self.targets = tf.placeholder(tf.int32, [None, self.biconfig.num_steps, self.biconfig.num_class])
        # Forward direction cell, backward direction cell
        lstm_fw_cell = rnn_cell.BasicLSTMCell(self.biconfig.lstm_ht, state_is_tuple=True)
        lstm_bw_cell = rnn_cell.BasicLSTMCell(self.biconfig.lstm_ht, state_is_tuple=True)
        ini_fw_state = lstm_fw_cell.zero_state(self.biconfig.batch_size, dtype=tf.float32)
        ini_bw_state = lstm_bw_cell.zero_state(self.biconfig.batch_size, dtype=tf.float32)
        # ini_fw_state = tf.placeholder("float", [None, 2 * self.biconfig.lstm_ht])
        # ini_bw_state = tf.placeholder("float", [None, 2 * self.biconfig.lstm_ht])
        # self.ini_fw_state = tf.placeholder(tf.float32, [None, 2 * self.biconfig.lstm_ht])
        # self.ini_bw_state = tf.placeholder(tf.float32, [None, 2 * self.biconfig.lstm_ht])
        # Returns a tuple (outputs, output_states) where:
        # outputs: a tuple (output_fw, output_bw) containing the forward and backward outputs
        outputs, _ = rnn.bidirectional_dynamic_rnn(lstm_fw_cell, lstm_bw_cell, self.inputs,
                                                   initial_state_fw=ini_fw_state,
                                                   initial_state_bw=ini_bw_state,
                                                   sequence_length=self.length, dtype=tf.float32)
        bilstm_outputs = tf.concat(2, [outputs[0], outputs[1]])
        self.logit = self.forward(bilstm_outputs)
        self.sent_loss = self.loss(self.logit, self.targets, self.length)
        # minimize the loss tensor, not the loss method
        self.optimizer = tf.train.GradientDescentOptimizer(self.biconfig.lr).minimize(self.sent_loss)

(Figure 1: screenshot of the first error)
(Figure 2: partial screenshot of the long traceback)
Building the BiLSTM framework with the code above produces a great many errors; two screenshots are shown. Figure 1 shows the problem at line 44 of the code. The error in Figure 2 is actually very long, and I captured only part of it: a large pile of errors from inside the TensorFlow source. Those internal frames can be ignored; the real cause is that parameters were defined incorrectly, or incorrect arguments were passed in, when building the framework. After modifying the code the errors disappear. The new code is as follows:

class Bilstm_Model:
    def __init__(self, biconfig, embedId, embedding):
        self.biconfig = biconfig
        self.embedId = embedId
        self.embedding = np.array(embedding)
        # bilstm input and output
        self.inputs = tf.placeholder(tf.float32, [None, self.biconfig.num_steps, self.biconfig.input_size])
        self.length = tf.placeholder(tf.int32, [None])
        # self.inputs = tf.nn.embedding_lookup(self.embedding, self.inputs)
        self.targets = tf.placeholder(tf.int32, [None, self.biconfig.num_steps, self.biconfig.num_class])
        # Forward direction cell, backward direction cell
        lstm_fw_cell = rnn_cell.BasicLSTMCell(self.biconfig.lstm_ht, state_is_tuple=True)
        lstm_bw_cell = rnn_cell.BasicLSTMCell(self.biconfig.lstm_ht, state_is_tuple=True)
        ini_fw_state = lstm_fw_cell.zero_state(self.biconfig.batch_size, dtype=tf.float32)
        ini_bw_state = lstm_bw_cell.zero_state(self.biconfig.batch_size, dtype=tf.float32)
        # ini_fw_state = tf.placeholder("float", [None, 2 * self.biconfig.lstm_ht])
        # ini_bw_state = tf.placeholder("float", [None, 2 * self.biconfig.lstm_ht])
        # self.ini_fw_state = tf.placeholder(tf.float32, [None, 2 * self.biconfig.lstm_ht])
        # self.ini_bw_state = tf.placeholder(tf.float32, [None, 2 * self.biconfig.lstm_ht])
        # Returns a tuple (outputs, output_states) where:
        # outputs: a tuple (output_fw, output_bw) containing the forward and backward outputs
        outputs, _ = rnn.bidirectional_dynamic_rnn(lstm_fw_cell, lstm_bw_cell, self.inputs,
                                                   initial_state_fw=ini_fw_state,
                                                   initial_state_bw=ini_bw_state,
                                                   sequence_length=self.length, dtype=tf.float32)
        bilstm_outputs = tf.concat(2, [outputs[0], outputs[1]])
        self.logit = self.forward(bilstm_outputs)
        self.sent_loss = self.loss(self.logit, self.targets, self.length)
        # minimize the loss tensor, not the loss method
        self.optimizer = tf.train.GradientDescentOptimizer(self.biconfig.lr).minimize(self.sent_loss)
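The tf.concat(2, ...) step joins the forward and backward outputs along the feature axis, so each time step ends up with a 2 * lstm_ht dimensional representation. A minimal NumPy sketch (the sizes below are illustrative, not values from the post):

```python
import numpy as np

batch_size, num_steps, lstm_ht = 2, 5, 8  # illustrative sizes

# Stand-ins for the forward and backward outputs of the bidirectional RNN,
# each of shape (batch_size, num_steps, lstm_ht).
output_fw = np.zeros((batch_size, num_steps, lstm_ht), dtype=np.float32)
output_bw = np.zeros((batch_size, num_steps, lstm_ht), dtype=np.float32)

# Equivalent of tf.concat(2, [output_fw, output_bw]): join along axis 2,
# the feature axis, doubling the per-timestep hidden size.
bilstm_outputs = np.concatenate([output_fw, output_bw], axis=2)
```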

Two changes were made:
(1) In self.inputs, tf.int32 was changed to tf.float32. The inputs are the result of embedding_lookup (dense float vectors, not integer ids), so the placeholder must be tf.float32.
(2) The definition of self.length was changed. The sequence_length argument of bidirectional_dynamic_rnn needs the actual length of each sentence, and the original in-graph computation could not recover the actual sentence lengths, so self.length is now a placeholder fed from outside the graph.
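Once self.length is a placeholder, the actual lengths have to be computed outside the graph and passed in via feed_dict. A minimal sketch, assuming padded batches of integer word ids where 0 is the padding id (the ids below are made up for illustration):

```python
import numpy as np

# Hypothetical padded batch of word ids; 0 is assumed to be the padding id.
batch = np.array([
    [4, 7, 2, 0, 0],   # actual length 3
    [9, 1, 5, 3, 0],   # actual length 4
])

# Actual sentence lengths: count the non-padding positions in each row.
lengths = np.sum(np.sign(batch), axis=1).astype(np.int32)

# These values would then be fed to the length placeholder, e.g.:
# sess.run(model.optimizer, feed_dict={..., model.length: lengths})
```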

Reference for the first fix:
http://stackoverflow.com/questions/38695086/tensorflow-basic-rnn-seq2seq-typeerror-expected-int32-got-0-1-of-type-float
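The dtype mismatch behind that question can be seen without TensorFlow at all: an embedding lookup takes integer ids but returns float vectors, which is why self.inputs must be tf.float32. A minimal NumPy sketch (table size and ids are made up for illustration):

```python
import numpy as np

# Hypothetical pretrained embedding table: one float32 vector per word id.
embedding = np.random.rand(100, 50).astype(np.float32)

word_ids = np.array([4, 7, 2])   # integer ids index the table...
vectors = embedding[word_ids]    # ...but the lookup result is float32
```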
