ValueError: Variable E_conv0/w/Adam/ does not exist, or was not created with tf.get_variable().


This error can appear when running tf.train.AdamOptimizer(), for example with the following code:

self.EG_optimizer = tf.train.AdamOptimizer(
    learning_rate=EG_learning_rate,
    beta1=beta1
).minimize(
    loss=self.loss_EG,
    global_step=self.EG_global_step,
    var_list=self.E_variables + self.G_variables
)


which raises the error:

ValueError: Variable E_conv0/w/Adam/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope?


Solution:

The cause is that minimize() creates Adam's slot variables (e.g. E_conv0/w/Adam) via tf.get_variable. If tf.get_variable_scope().reuse_variables() was previously called at the top-level (root) scope, that scope is permanently switched into reuse mode, so tf.get_variable can no longer create new variables there and the creation of the slot variables fails. The fix is to confine the reuse to the function's own named scope by adding, as the first line of the function:

with tf.variable_scope("encoder") as scope:



For example, if the original encoder() function is:

def encoder(self, image, reuse_variables=False):
    if reuse_variables:
        tf.get_variable_scope().reuse_variables()
    num_layers = int(np.log2(self.size_image)) - int(self.size_kernel / 2)
    current = image
    # conv layers with stride 2
    for i in range(num_layers):
        name = 'E_conv' + str(i)
        current = conv2d(
            input_map=current,
            num_output_channels=self.num_encoder_channels * (2 ** i),
            size_kernel=self.size_kernel,
            name=name
        )
        current = tf.nn.relu(current)
    # fully connected layer
    name = 'E_fc'
    current = fc(
        input_vector=tf.reshape(current, [self.size_batch, -1]),
        num_output_length=self.num_z_channels,
        name=name
    )
    # output
    return tf.nn.tanh(current)

then after the modification it becomes:

def encoder(self, image, reuse_variables=False):
    with tf.variable_scope("encoder") as scope:
        if reuse_variables:
            tf.get_variable_scope().reuse_variables()
        num_layers = int(np.log2(self.size_image)) - int(self.size_kernel / 2)
        current = image
        # conv layers with stride 2
        for i in range(num_layers):
            name = 'E_conv' + str(i)
            current = conv2d(
                input_map=current,
                num_output_channels=self.num_encoder_channels * (2 ** i),
                size_kernel=self.size_kernel,
                name=name
            )
            current = tf.nn.relu(current)
        # fully connected layer
        name = 'E_fc'
        current = fc(
            input_vector=tf.reshape(current, [self.size_batch, -1]),
            num_output_length=self.num_z_channels,
            name=name
        )
        # output
        return tf.nn.tanh(current)
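To see why the named scope helps without needing TensorFlow installed, here is a minimal plain-Python toy (not TensorFlow; the Scope class and registry are illustrative stand-ins) of tf.get_variable's reuse semantics: a scope in reuse mode refuses to create new variables, and naming the encoder scope keeps the root scope writable for Adam's slot variables.

```python
# Toy stand-in for TF1 variable scopes; names/behavior simplified.
registry = {}

class Scope:
    """Crude model of a TF variable scope (illustrative, not the real API)."""
    def __init__(self, name="", reuse=False):
        self.name = name
        self.reuse = reuse

    def get_variable(self, var_name):
        full = f"{self.name}/{var_name}" if self.name else var_name
        if self.reuse:
            # Reuse mode: only existing variables may be fetched.
            if full not in registry:
                raise ValueError(
                    f"Variable {full} does not exist, or was not created "
                    "with tf.get_variable().")
            return registry[full]
        # Create mode: the name must be new.
        if full in registry:
            raise ValueError(f"Variable {full} already exists.")
        registry[full] = object()
        return registry[full]

# Without the named scope: reuse was switched on at the ROOT scope,
# so Adam's brand-new slot variable cannot be created there.
root_in_reuse = Scope(reuse=True)
try:
    root_in_reuse.get_variable("E_conv0/w/Adam")
except ValueError as e:
    print(e)  # same failure mode as the error in the post

# With the named scope: reuse is confined to "encoder".
Scope("encoder").get_variable("E_conv0/w")              # first pass creates it
Scope("encoder", reuse=True).get_variable("E_conv0/w")  # second pass reuses it
Scope().get_variable("E_conv0/w/Adam")                  # root stays writable
print("slot variable created at root scope")
```

The point of the fix is the same as in the toy: reuse_variables() should poison only the encoder's own scope, not the root scope where the optimizer later creates its accumulators.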




