TF Notes 7


The function for computing means

reduce_mean

axis=1 computes the mean of each row
axis=0 computes the mean of each column
>>> xxx=tf.constant([[1., 10.],[3.,30.]])
>>> sess.run(xxx)
array([[  1.,  10.],
       [  3.,  30.]], dtype=float32)
>>> mymean=tf.reduce_mean(xxx,0)
>>> sess.run(mymean)
array([  2.,  20.], dtype=float32)
>>> mymean=tf.reduce_mean(xxx,1)
>>> sess.run(mymean)
array([  5.5,  16.5], dtype=float32)
keep_dims controls whether the reduced dimensions are retained (with length 1).
>>> mymean=tf.reduce_mean(xxx,axis=0,keep_dims=True)
>>> sess.run(mymean)
array([[  2.,  20.]], dtype=float32)
>>> mymean=tf.reduce_mean(xxx,axis=0,keep_dims=False)
>>> sess.run(mymean)
array([  2.,  20.], dtype=float32)
>>> mymean=tf.reduce_mean(xxx,keep_dims=False)
>>> sess.run(mymean)
11.0
>>> mymean=tf.reduce_mean(xxx,keep_dims=True)
>>> sess.run(mymean)
array([[ 11.]], dtype=float32)
>>> mymean=tf.reduce_mean(xxx)
>>> sess.run(mymean)
11.0
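
For comparison, NumPy's mean follows the same axis/keepdims semantics. A minimal sketch (not in the original post) using the same matrix:

import numpy as np

xxx = np.array([[1., 10.], [3., 30.]], dtype=np.float32)

print(np.mean(xxx, axis=0))                 # [  2.  20.]  one mean per column
print(np.mean(xxx, axis=1))                 # [  5.5 16.5] one mean per row
print(np.mean(xxx, axis=0, keepdims=True))  # [[  2.  20.]] rank preserved
print(np.mean(xxx))                         # 11.0, mean over all elements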


tf.reduce_mean

reduce_mean(
    input_tensor,
    axis=None,
    keep_dims=False,
    name=None,
    reduction_indices=None
)

Defined in tensorflow/python/ops/math_ops.py.

See the guide: Math > Reduction

Computes the mean of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in axis. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keep_dims is true, the reduced dimensions are retained with length 1.

If axis has no entries, all dimensions are reduced, and a tensor with a single element is returned.

For example:

# 'x' is [[1., 1.]
#         [2., 2.]]
tf.reduce_mean(x) ==> 1.5
tf.reduce_mean(x, 0) ==> [1.5, 1.5]
tf.reduce_mean(x, 1) ==> [1.,  2.]


Args:

  • input_tensor: The tensor to reduce. Should have numeric type.
  • axis: The dimensions to reduce. If None (the default), reduces all dimensions.
  • keep_dims: If true, retains reduced dimensions with length 1.
  • name: A name for the operation (optional).
  • reduction_indices: The old (deprecated) name for axis.
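
As a quick illustration of the deprecated alias (a sketch, assuming the xxx constant from the transcript above):

mymean = tf.reduce_mean(xxx, axis=0)               # preferred spelling
mymean = tf.reduce_mean(xxx, reduction_indices=0)  # deprecated alias, same result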

tf.pow

pow(
    x,
    y,
    name=None
)

Defined in tensorflow/python/ops/math_ops.py.

See the guide: Math > Basic Math Functions

Computes the power of one value to another.

Given a tensor x and a tensor y, this operation computes x^y element-wise for corresponding elements in x and y. For example:

# tensor 'x' is [[2, 2], [3, 3]]
# tensor 'y' is [[8, 16], [2, 3]]
tf.pow(x, y) ==> [[256, 65536], [9, 27]]
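
A runnable version of the same example in a TF 1.x session (a sketch, not from the original post):

import tensorflow as tf

x = tf.constant([[2, 2], [3, 3]])
y = tf.constant([[8, 16], [2, 3]])
z = tf.pow(x, y)  # element-wise x ** y

with tf.Session() as sess:
    print(sess.run(z))  # [[  256 65536]
                        #  [    9    27]]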

class tf.train.AdamOptimizer


__init__(learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-08, use_locking=False, name='Adam')
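
A minimal usage sketch (not from the original post): minimize a toy quadratic loss, overriding only the learning rate and keeping the other defaults:

import tensorflow as tf

# Toy problem: find the w that minimizes (w - 5)^2
w = tf.Variable(0.0)
loss = tf.pow(w - 5.0, 2.0)

# beta1/beta2/epsilon keep their defaults (0.9, 0.999, 1e-08)
train_step = tf.train.AdamOptimizer(learning_rate=0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(200):
        sess.run(train_step)
    print(sess.run(w))  # close to 5.0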


Linear classification source code:
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
"""
Created on Mon Jul 10 09:35:04 2017
@author: myhaspl@myhaspl.com, http://blog.csdn.net/myhaspl
"""
import tensorflow as tf
import numpy as np

batch_size = 10

w1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))

x = tf.placeholder(tf.float32, shape=(None, 2), name="x")
y = tf.placeholder(tf.float32, shape=(None, 1), name="y")

h = tf.matmul(x, w1)
yo = tf.matmul(h, w2)

# Loss: the mean absolute difference between targets and outputs
# (the variable keeps the original name cross_entropy, though it is an MAE)
cross_entropy = tf.reduce_mean(tf.abs(y - yo))
# Backpropagation
train_step = tf.train.AdamOptimizer().minimize(cross_entropy)

# Generate 200 random samples.
# Note: np.random.rand draws from [0, 1), so x1 + x2 < 2 and every label is 0.
DATASIZE = 200
x_ = np.random.rand(DATASIZE, 2)
y_ = [[int((x1 + x2) > 2.5)] for (x1, x2) in x_]

with tf.Session() as sess:
    # Initialize variables
    init_op = tf.global_variables_initializer()
    sess.run(init_op)
    print(sess.run(w1))
    print(sess.run(w2))

    # Number of training rounds
    TRAINCOUNT = 10000
    for i in range(TRAINCOUNT):
        # Step through the data one mini-batch at a time
        start = (i * batch_size) % DATASIZE
        end = min(start + batch_size, DATASIZE)
        # Train on the current batch
        sess.run(train_step, feed_dict={x: x_[start:end], y: y_[start:end]})
        if i % 1000 == 0:
            total_cross_entropy = sess.run(
                cross_entropy, feed_dict={x: x_[start:end], y: y_[start:end]})
            print("After %d training steps, loss: %g" % (i + 1, total_cross_entropy))
    print(sess.run(w1))
    print(sess.run(w2))



    [[-0.81131822  1.48459876  0.06532937 -2.4427042   0.0992484   0.59122431]
 [ 0.59282297 -2.12292957 -0.72289723 -0.05627038  0.64354479 -0.26432407]]
[[-0.81131822]
 [ 1.48459876]
 [ 0.06532937]
 [-2.4427042 ]
 [ 0.0992484 ]
 [ 0.59122431]]
After 1 training steps, loss: 2.37311
After 1001 training steps, loss: 0.587702
After 2001 training steps, loss: 0.00187977
After 3001 training steps, loss: 0.000224713
After 4001 training steps, loss: 0.000245593
After 5001 training steps, loss: 0.000837345
After 6001 training steps, loss: 0.000561878
After 7001 training steps, loss: 0.000521504
After 8001 training steps, loss: 0.000369141
After 9001 training steps, loss: 2.88023e-05
[[-0.40749896  0.74481744 -1.35231423 -1.57555723  1.5161525   0.38725093]
 [ 0.84865922 -2.07912779 -0.41053897 -0.21082011 -0.0567192  -0.69210052]]
[[ 0.36143586]
 [ 0.34388798]
 [ 0.79891819]
 [-1.57640576]
 [-0.86542428]
 [-0.51558757]]

tf.nn.relu

relu(
    features,
    name=None
)

Defined in tensorflow/python/ops/gen_nn_ops.py.

See the guides: Layers (contrib) > Higher level ops for building neural network layers, Neural Network > Activation Functions

Computes rectified linear: max(features, 0).
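
A short sketch (not from the original post) of relu in a TF 1.x session:

import tensorflow as tf

features = tf.constant([-3.0, -1.0, 0.0, 2.0, 5.0])
activated = tf.nn.relu(features)  # negative entries become 0

with tf.Session() as sess:
    print(sess.run(activated))  # [ 0.  0.  0.  2.  5.]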




