L2 Regularization — TensorFlow Implementation


L2 regularization is a technique for reducing overfitting: a term that measures the complexity of the model is added to the loss function. If the original loss function is J(θ), the quantity actually optimized becomes J(θ) + λR(w), where R(w) = Σ_{i=0}^{n} w_i² is the sum of the squared weights and λ controls the strength of the regularization.
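In TensorFlow 1.x the penalty is usually built with tf.contrib.layers.l2_regularizer, which wraps tf.nn.l2_loss and therefore includes the conventional factor of 1/2, i.e. it returns λ·Σ w_i²/2. The following is a minimal sketch (the weight values and λ = 0.5 are made up for illustration) showing what the penalty evaluates to on a toy weight vector:

import tensorflow as tf

w = tf.constant([1.0, -2.0, 3.0])
# 0.5 * (1 + 4 + 9) / 2 = 3.5
penalty = tf.contrib.layers.l2_regularizer(0.5)(w)

with tf.Session() as sess:
    print(sess.run(penalty))  # prints 3.5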

A concrete implementation in TensorFlow looks like this:

# coding: utf-8
import tensorflow as tf

def get_weight(shape, lambda_):
    var = tf.Variable(tf.random_normal(shape), dtype=tf.float32)
    # Add the L2 penalty for this weight matrix to the "losses" collection
    tf.add_to_collection("losses", tf.contrib.layers.l2_regularizer(lambda_)(var))
    return var

x = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))  # ground-truth labels
batch_size = 8

layer_dimension = [2, 10, 10, 10, 1]  # number of nodes in each layer
n_layers = len(layer_dimension)       # number of layers in the network

cur_layer = x
in_dimension = layer_dimension[0]
for i in range(1, n_layers):
    out_dimension = layer_dimension[i]
    weight = get_weight([in_dimension, out_dimension], 0.001)
    bias = tf.Variable(tf.constant(0.1, shape=[out_dimension]))
    cur_layer = tf.nn.relu(tf.matmul(cur_layer, weight) + bias)
    in_dimension = layer_dimension[i]

# Mean squared error between the final output and the labels
mse_loss = tf.reduce_mean(tf.square(y_ - cur_layer))
tf.add_to_collection("losses", mse_loss)  # add the MSE to the collection as well

# tf.get_collection returns a list of all elements in the collection;
# tf.add_n sums its inputs element-wise.
loss = tf.add_n(tf.get_collection("losses"))
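The combined loss can then be minimized with any optimizer in the usual way. Below is a hypothetical training-loop sketch; the toy data data_x/data_y, the Adam optimizer, and the learning rate 0.001 are assumptions for illustration and are not part of the original graph above:

import numpy as np

train_step = tf.train.AdamOptimizer(0.001).minimize(loss)

# Toy data: label is 1 when the two features sum to more than 1
data_x = np.random.rand(128, 2).astype(np.float32)
data_y = (data_x[:, 0:1] + data_x[:, 1:2] > 1.0).astype(np.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):
        start = (step * batch_size) % len(data_x)
        end = start + batch_size
        sess.run(train_step,
                 feed_dict={x: data_x[start:end], y_: data_y[start:end]})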