TensorFlow learning notes: optimizing the loss

Source: Internet · Editor: 程序博客网 · Posted: 2024/05/21

1. Obtain an expression for the loss.

2. Construct an optimizer. tf.train provides several:

Optimizer (base class)
GradientDescentOptimizer (most commonly used)
AdagradOptimizer
AdagradDAOptimizer
MomentumOptimizer
AdamOptimizer
FtrlOptimizer
RMSPropOptimizer


When constructing the optimizer, set the learning_rate. You can also pass a global_step variable, which the optimizer increments by 1 automatically. This variable records how many optimization steps have run, and it can in turn be used to build a learning_rate that changes over time.
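For example, a learning rate that decays as global_step grows can be built with tf.train.exponential_decay. The sketch below is illustrative only (the starting rate 0.1 and the decay settings are made-up values), and it goes through the tf.compat.v1 namespace so the TF 1.x-style graph code also runs under TF 2.x:

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

# The counter the optimizer would increment on every minimize() step.
global_step = tf1.Variable(0, name='global_step', trainable=False)

# 0.1 * 0.96 ** (global_step // 100): a stepwise-decaying learning rate.
learning_rate = tf1.train.exponential_decay(
    0.1, global_step, decay_steps=100, decay_rate=0.96, staircase=True)

with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    lr_at_0 = sess.run(learning_rate)      # 0.1
    sess.run(tf1.assign(global_step, 100))
    lr_at_100 = sess.run(learning_rate)    # 0.1 * 0.96 = 0.096
```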

3. Finally, call minimize(loss).
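Put together, the three steps look roughly like this. The toy loss (w - 3)^2 and the learning rate 0.1 are made up for illustration; tf.compat.v1 is used so the TF 1.x-style code also runs under TF 2.x:

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

# Step 1: an expression for the loss -- here (w - 3)^2, minimized at w = 3.
w = tf1.Variable(0.0, name='w')
loss = tf.square(w - 3.0)

# Step 2: construct an optimizer with a learning_rate, plus a global_step
# that the optimizer will increment automatically.
global_step = tf1.Variable(0, name='global_step', trainable=False)
optimizer = tf1.train.GradientDescentOptimizer(learning_rate=0.1)

# Step 3: minimize the loss.
train_op = optimizer.minimize(loss, global_step=global_step)

with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op)             # one gradient step per call
    final_w, steps = sess.run([w, global_step])
    # final_w is now close to 3.0, and steps == 100
```

Each sess.run(train_op) takes one gradient step and bumps global_step by 1.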

Additionally, tf.summary.scalar('loss', loss) exposes the loss for visualization. summary.histogram and summary.image serve the same purpose for other data types. These ops emit summary protobufs, which a summary writer writes into an event file. Finally, launch TensorBoard from a Linux terminal.


Copy the URL it prints into a browser to watch the loss curve.
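A minimal sketch of that summary pipeline, writing into a temporary log directory (the toy loss and the directory choice are assumptions for illustration; tf.compat.v1 keeps the TF 1.x-style code runnable under TF 2.x):

```python
import os
import tempfile
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

# A toy loss just to have something to log.
w = tf1.Variable(1.0, name='w')
loss = tf.square(w - 3.0)

tf1.summary.scalar('loss', loss)   # emits a scalar summary protobuf
merged = tf1.summary.merge_all()   # one op that evaluates every summary

logdir = tempfile.mkdtemp()        # stand-in for a real log directory
with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    writer = tf1.summary.FileWriter(logdir, sess.graph)
    for step in range(5):
        writer.add_summary(sess.run(merged), global_step=step)
    writer.close()

event_files = os.listdir(logdir)   # now holds an events.out.tfevents* file
```

From a terminal, `tensorboard --logdir=<logdir>` then serves the plots at the URL it prints.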


The training() function from mnist.py in the TensorFlow examples:

def training(loss, learning_rate):
  """Sets up the training Ops.

  Creates a summarizer to track the loss over time in TensorBoard.

  Creates an optimizer and applies the gradients to all trainable variables.

  The Op returned by this function is what must be passed to the
  `sess.run()` call to cause the model to train.

  Args:
    loss: Loss tensor, from loss().
    learning_rate: The learning rate to use for gradient descent.

  Returns:
    train_op: The Op for training.
  """
  # Add a scalar summary for the snapshot loss.
  tf.summary.scalar('loss', loss)
  # Create the gradient descent optimizer with the given learning rate.
  optimizer = tf.train.GradientDescentOptimizer(learning_rate)
  # Create a variable to track the global step.
  global_step = tf.Variable(0, name='global_step', trainable=False)
  # Use the optimizer to apply the gradients that minimize the loss
  # (and also increment the global step counter) as a single training step.
  train_op = optimizer.minimize(loss, global_step=global_step)
  return train_op
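The Op returned by training() is then driven from a session loop, roughly as below. The toy loss is a stand-in (the real mnist.py builds it from logits and labels), and the function body is rewritten against tf.compat.v1 so it also runs under TF 2.x:

```python
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

def training(loss, learning_rate):
    # Same logic as the mnist.py function above, written against the
    # tf.compat.v1 namespace so it also runs under TF 2.x.
    tf1.summary.scalar('loss', loss)
    optimizer = tf1.train.GradientDescentOptimizer(learning_rate)
    global_step = tf1.Variable(0, name='global_step', trainable=False)
    return optimizer.minimize(loss, global_step=global_step)

# Stand-in loss; the real mnist.py builds it from the network's outputs.
w = tf1.Variable(0.0, name='w')
loss = tf.square(w - 1.0)
train_op = training(loss, learning_rate=0.5)

with tf1.Session() as sess:
    sess.run(tf1.global_variables_initializer())
    for _ in range(20):
        # Passing the returned Op to sess.run() is what trains the model.
        _, loss_value = sess.run([train_op, loss])
    final_w = sess.run(w)              # converges to 1.0
```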

