tf.train
Preface
tf.train is part of TensorFlow's Training module and is generally used for gradient computation. Since I have been using it, this post explores the basic usage of tf.train.
Official documentation
1.tf.train.GradientDescentOptimizer
Optimizer that implements the gradient descent algorithm.
tf.train.GradientDescentOptimizer.__init__(learning_rate, use_locking=False, name='GradientDescent')
Construct a new gradient descent optimizer.
Args:
- learning_rate: A Tensor or a floating point value. The learning rate to use.
- use_locking: If True, use locks for update operations.
- name: Optional name prefix for the operations created when applying gradients. Defaults to "GradientDescent".
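As a quick illustration of the constructor described above, a minimal sketch (the learning rate of 0.5 is an arbitrary choice for this example; use_locking and name are shown with their default values):

import tensorflow as tf

# Construct a gradient descent optimizer; only learning_rate is required.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5,
                                              use_locking=False,
                                              name='GradientDescent')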
2.tf.train.Optimizer.minimize
tf.train.Optimizer.minimize(loss, global_step=None, var_list=None, gate_gradients=1, name=None)
Add operations to minimize ‘loss’ by updating ‘var_list’.
This method simply combines calls to compute_gradients() and apply_gradients(). If you want to process the gradients before applying them, call compute_gradients() and apply_gradients() explicitly instead of using this function.
Args:
- loss: A Tensor containing the value to minimize.
- global_step: Optional Variable to increment by one after the variables have been updated.
- var_list: Optional list of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES.
- gate_gradients: How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH.
- name: Optional name for the returned operation.
Returns:
An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step.
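To make the relationship between minimize() and the compute_gradients()/apply_gradients() pair concrete, here is a minimal sketch; the variable, loss, and clipping step are invented purely for illustration:

import tensorflow as tf

# Hypothetical one-variable problem, just for illustration.
w = tf.Variable(0.0)
loss = tf.square(w - 1.0)
global_step = tf.Variable(0, trainable=False)
optimizer = tf.train.GradientDescentOptimizer(0.1)

# One call: builds the update op and increments global_step after each update.
train_op = optimizer.minimize(loss, global_step=global_step)

# Equivalent two-step form, useful when the gradients need processing
# (e.g. clipping) before they are applied.
grads_and_vars = optimizer.compute_gradients(loss)
clipped = [(tf.clip_by_value(g, -1.0, 1.0), v) for g, v in grads_and_vars]
train_op_clipped = optimizer.apply_gradients(clipped, global_step=global_step)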
Example
import tensorflow as tf
import numpy as np

# Create data
X_Data = np.random.rand(100).astype(np.float32)
Y_Data = X_Data * 0.1 + 0.3

# Create TF structure: start
# Weight must be a Variable so the optimizer can update it.
Weight = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
biases = tf.Variable(tf.zeros([1]))
y = Weight * X_Data + biases

loss = tf.reduce_mean(tf.square(y - Y_Data))
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)

init = tf.global_variables_initializer()
# Create TF structure: end

sess = tf.Session()
sess.run(init)  # Important

for step in range(201):
    sess.run(train)
    if step % 20 == 0:
        print(step, sess.run(Weight), sess.run(biases))
Result
As training proceeds, the printed Weight converges toward 0.1 and biases toward 0.3, the coefficients used to generate Y_Data.
Understanding
The loss argument of tf.train.Optimizer.minimize is generally computed with tf.reduce_mean.
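For reference, a minimal sketch of such a loss, with hypothetical per-example predictions and targets (the names below are invented for this illustration):

import tensorflow as tf
import numpy as np

targets = np.random.rand(100).astype(np.float32)  # hypothetical targets
preds = tf.Variable(tf.zeros([100]))              # hypothetical predictions

squared_error = tf.square(preds - targets)  # per-example error, shape [100]
loss = tf.reduce_mean(squared_error)        # reduce to a scalar for minimize()
train = tf.train.GradientDescentOptimizer(0.5).minimize(loss)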
Reference
- Official documentation: Training