TF Day 5: Optimizing Neural Networks


Main topics:

  • Backpropagation and gradient descent
  • Setting the learning rate: exponential decay
  • The overfitting problem: regularization

1. Optimization Algorithms

Backpropagation and gradient descent are the core of neural network training; they are used to adjust the values of the network's parameters. A small hand-rolled sketch of the gradient-descent update follows the references.
For details, see:
- Rumelhart D E, Hinton G E, Williams R J. Learning representations by back-propagating errors.
- Neural network implementation in Python
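To make the update rule concrete, here is a minimal toy sketch (values and function assumed, not from the original post): plain gradient descent on f(w) = (w - 3)^2, whose gradient is 2(w - 3).

w = 0.0                 # initial parameter value
learning_rate = 0.1
for step in range(50):
    grad = 2 * (w - 3)                # backpropagation would compute this gradient automatically
    w = w - learning_rate * grad      # the gradient-descent parameter update
print(w)                              # approaches 3, the minimizer of f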

TensorFlow provides a number of optimizers (the list that PyCharm auto-completes); what the differences are, I'll look into when I actually need them~
[screenshot: list of optimizers in tf.train]

train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss,global_step=global_step)

The next two parts cover how to set learning_rate and loss~

2. Setting the Learning Rate
The learning rate is reduced step by step as the iterations continue; the underlying formula is:
decayed_learning_rate = learning_rate * decay_rate ^ (global_step / decay_steps)
Parameters:
learning_rate: the initial learning rate
decay_rate: the decay rate
global_step: starts at 0 and increases by 1 after each batch (a minimal check of this behavior follows the parameter list)

global_step refers to the number of batches seen by the graph. Every time a batch is provided, the weights are updated in the direction that minimizes the loss; global_step just keeps track of the number of batches seen so far. When it is passed in the minimize() argument list, the variable is increased by one.
https://stackoverflow.com/questions/41166681/what-does-tensorflow-global-step-mean

decay_steps: the decay speed, i.e. how many steps make up one decay period
staircase: when set to True, global_step / decay_steps is truncated to an integer, so the learning rate decays as a staircase (step) function
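As a quick check of the global_step behavior quoted above, here is a minimal TF1 sketch (toy loss and values assumed): every run of the training op returned by minimize() increments global_step by one.

import tensorflow as tf

w = tf.Variable(5.0)
loss = tf.square(w)                       # toy loss, just something to minimize
global_step = tf.Variable(0, trainable=False)
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss, global_step=global_step)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        sess.run(train_step)              # one "batch" per run
    print(sess.run(global_step))          # prints 3: one increment per update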

## learning rate with exponential decay
learning_rate = 0.01
global_step = tf.Variable(0,trainable=False)
## staircase (step-wise) decaying learning rate
learning_rate = tf.train.exponential_decay(learning_rate,global_step=global_step,decay_steps=100,decay_rate=0.96,staircase=True)
## continuously decaying learning rate
learning_rate = tf.train.exponential_decay(learning_rate,global_step=global_step,decay_steps=100,decay_rate=0.96,staircase=False)
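To sanity-check the formula, here is a plain-Python version of the same schedule (same assumed values: initial rate 0.01, decay_rate 0.96, decay_steps 100). At global_step = 101 it gives roughly 0.0096, which matches the learning rate printed in the run further below.

learning_rate = 0.01
decay_rate = 0.96
decay_steps = 100

def decayed(global_step, staircase=False):
    exponent = global_step / float(decay_steps)
    if staircase:
        exponent = global_step // decay_steps     # integer division -> step-shaped schedule
    return learning_rate * decay_rate ** exponent

print(decayed(101))                   # ~0.0096, continuous decay
print(decayed(101, staircase=True))   # 0.0096, constant within each 100-step window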

3. The Overfitting Problem

I still don't fully understand the theory behind regularization - - Roughly, L2 regularization adds a penalty proportional to the sum of squared weights to the loss, so the optimizer prefers smaller weights and a less complex model, which helps against overfitting.

## regularization loss
regularizer = tf.contrib.layers.l2_regularizer(regularizer_rate)
regularization = regularizer(w1)+regularizer(w2)
## total loss
loss = cross_entropy + regularization
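A rough numpy sketch of what the penalty term adds to the loss (toy weights assumed; TF's l2_regularizer(scale) computes scale * sum(w**2) / 2 for each weight tensor):

import numpy as np

regularizer_rate = 0.001
w1 = np.random.randn(84, 150)        # assumed toy weights, same shapes as in the project below
w2 = np.random.randn(150, 4)

l2_penalty = regularizer_rate * (np.sum(w1 ** 2) + np.sum(w2 ** 2)) / 2
data_loss = 0.34                     # stand-in for the cross-entropy term
total_loss = data_loss + l2_penalty  # larger weights -> larger penalty, so smaller weights are preferred
print(l2_penalty, total_loss)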

The 华软 project:

import tensorflow as tf
import pandas as pd
import numpy as np

# data
df = pd.read_excel('/home/pan-xie/PycharmProjects/ML/jxdc/打分/项目评分表last.xls')
X = df.iloc[:,2:].fillna(df.mean())
# print(X.shape)  ##(449,84)
df_label = df.iloc[:,2]
Y = []
for i in df_label:
    if i == 'A':
        Y.append([1,0,0,0])
    elif i == 'B':
        Y.append([0,1,0,0])
    elif i == 'C':
        Y.append([0,0,1,0])
    else:
        Y.append([0,0,0,1])
# print(np.array(Y).shape)  ##(449,4)

# network configuration
batch_size = 20
STEPS = 2000
regularizer_rate = 0.001

## network parameters
w1 = tf.Variable(tf.random_normal([84,150]))
w2 = tf.Variable(tf.random_normal([150,4]))
biases1 = tf.Variable(tf.zeros(shape=[150]))
biases2 = tf.Variable(tf.zeros(shape=[4]))

## input layer
x = tf.placeholder(tf.float32,shape=[None,84],name='x-input')
y_ = tf.placeholder(tf.float32,shape=[None,4],name='y-input')  ## ground-truth labels

## forward propagation
a = tf.nn.tanh(tf.matmul(x,w1)+biases1)
y = tf.nn.tanh(tf.matmul(a,w2)+biases2)  ## predictions (logits)

## loss function
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y,labels=y_))
## regularization loss
regularizer = tf.contrib.layers.l2_regularizer(regularizer_rate)
regularization = regularizer(w1)+regularizer(w2)
## total loss
loss = cross_entropy + regularization

## learning rate with exponential decay
learning_rate = 0.01
global_step = tf.Variable(0,trainable=False)
# learning_rate = tf.train.exponential_decay(learning_rate,global_step=global_step,decay_steps=100,decay_rate=0.96,staircase=True)
learning_rate = tf.train.exponential_decay(learning_rate,global_step=global_step,decay_steps=100,decay_rate=0.96,staircase=False)

## minimize the total loss with gradient descent
train_step = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss,global_step=global_step)

## accuracy
correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(y_,1))  ## argmax returns indices
accuracy = tf.reduce_mean(tf.cast(correct_prediction,tf.float32))

data_size = len(df)
with tf.Session() as sess:
    tf.global_variables_initializer().run()
    # print(sess.run(w1))
    # print(sess.run(w2))
    for i in range(STEPS):
        start = (i * batch_size) % data_size
        end = min(start + batch_size, data_size)
        ## train on the selected samples and update the parameters
        sess.run(train_step, feed_dict={x: X[start:end+1], y_: Y[start:end+1]})
        if i % 100 == 0:
            accuracy_, loss_ = sess.run([accuracy, cross_entropy], feed_dict={x: X[start:end+1], y_: Y[start:end+1]})
            learning_rate_, global_step_ = sess.run([learning_rate, global_step])
            print("after %d training steps, accuracy is %g,loss is %g" % (i, accuracy_, loss_))
            print("learning rate is %g,global_step is %g" % (learning_rate_, global_step_))
    # print(sess.run(w1))
    # print(sess.run(w2))

Output:

after 0 training steps, accuracy is 0.428571,loss is 1.24586
after 100 training steps, accuracy is 0.952381,loss is 0.437043
after 200 training steps, accuracy is 1,loss is 0.341079
after 300 training steps, accuracy is 1,loss is 0.343718
after 400 training steps, accuracy is 1,loss is 0.340793
after 500 training steps, accuracy is 1,loss is 0.34124
after 600 training steps, accuracy is 1,loss is 0.341603
after 700 training steps, accuracy is 1,loss is 0.34144
after 800 training steps, accuracy is 1,loss is 0.34089
after 900 training steps, accuracy is 1,loss is 0.342235
after 1000 training steps, accuracy is 1,loss is 0.341608
after 1100 training steps, accuracy is 1,loss is 0.340753
after 1200 training steps, accuracy is 1,loss is 0.343047
after 1300 training steps, accuracy is 1,loss is 0.340775
after 1400 training steps, accuracy is 1,loss is 0.341164
after 1500 training steps, accuracy is 1,loss is 0.340755
after 1600 training steps, accuracy is 1,loss is 0.340856
after 1700 training steps, accuracy is 1,loss is 0.341183
after 1800 training steps, accuracy is 1,loss is 0.340931
after 1900 training steps, accuracy is 1,loss is 0.340826
learning rate is 0.00999592,global_step is 1
learning rate is 0.00959608,global_step is 101
learning rate is 0.00921224,global_step is 201
learning rate is 0.00884375,global_step is 301
learning rate is 0.00849,global_step is 401
learning rate is 0.0081504,global_step is 501
learning rate is 0.00782438,global_step is 601
learning rate is 0.00751141,global_step is 701
learning rate is 0.00721095,global_step is 801
learning rate is 0.00692251,global_step is 901
learning rate is 0.00664561,global_step is 1001
learning rate is 0.00637979,global_step is 1101
learning rate is 0.0061246,global_step is 1201
learning rate is 0.00587961,global_step is 1301
learning rate is 0.00564443,global_step is 1401
learning rate is 0.00541865,global_step is 1501
learning rate is 0.0052019,global_step is 1601
learning rate is 0.00499383,global_step is 1701
learning rate is 0.00479407,global_step is 1801
learning rate is 0.00460231,global_step is 1901