TF/02_TensorFlow_Way/03 Working with Multiple Layers & 04 Implementing Loss Functions
03 Working with Multiple Layers
Summary
In this script, we will perform a 2D spatial moving-window average across a small random image. Then we will create a "custom" operation on the output: multiply by a specific matrix, add a bias, and apply a sigmoid.
The Spatial Moving Window Layer
We will create a layer that takes a spatial moving-window average. Our window will be 2x2 with a stride of 2 for both height and width. Every filter value will be 0.25 because we want the average of the 2x2 window.
my_filter = tf.constant(0.25, shape=[2, 2, 1, 1])
my_strides = [1, 2, 2, 1]
mov_avg_layer = tf.nn.conv2d(x_data, my_filter, my_strides,
                             padding='SAME', name='Moving_Avg_Window')
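As a sanity check, the averaging behavior of this layer can be sketched in plain Python (a minimal sketch, no TensorFlow; the 4x4 input values are a hypothetical stand-in for the random image):

```python
# Each 2x2 block of a 4x4 input, multiplied element-wise by the constant
# 0.25 filter and summed, is exactly the block's average -- which is what
# the conv2d above computes with stride 2 in height and width.
image = [[1.0, 2.0, 3.0, 4.0],
         [5.0, 6.0, 7.0, 8.0],
         [9.0, 10.0, 11.0, 12.0],
         [13.0, 14.0, 15.0, 16.0]]

def moving_avg_2x2(img):
    out = []
    for r in range(0, len(img), 2):          # stride 2 over rows
        row = []
        for c in range(0, len(img[0]), 2):   # stride 2 over columns
            block = [img[r][c], img[r][c+1], img[r+1][c], img[r+1][c+1]]
            row.append(0.25 * sum(block))    # filter of 0.25s = block mean
        out.append(row)
    return out

print(moving_avg_2x2(image))  # → [[3.5, 5.5], [11.5, 13.5]]
```

A 4x4 input with stride 2 therefore yields a 2x2 output, matching the shapes in the script below.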
Custom Layer
We create a custom layer which will compute sigmoid(Ax + b), where x is a 2x2 matrix and A and b are 2x2 matrices.
output = sigmoid( input * A + b )
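The math of this layer can be sketched in plain Python (a minimal sketch, no TensorFlow; the constants A and b match the script, while the 2x2 input x is a hypothetical example value):

```python
import math

# Same constants as in the custom layer below
A = [[1.0, 2.0], [-1.0, 3.0]]
b = [[1.0, 1.0], [1.0, 1.0]]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def custom_layer(x):
    # Matrix product A @ x, add b, then element-wise sigmoid
    out = [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        for j in range(2):
            ax = sum(A[i][k] * x[k][j] for k in range(2))
            out[i][j] = sigmoid(ax + b[i][j])
    return out

x = [[0.5, 0.5], [0.5, 0.5]]   # hypothetical 2x2 input
result = custom_layer(x)       # row 0: sigmoid(2.5), row 1: sigmoid(2.0)
```

Because the sigmoid saturates toward 1 for positive inputs, the outputs fall between 0.88 and 0.93 here, much like the sample output shown further below.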
Computational Graph Output
Viewing the computational graph in TensorBoard:
# Working with Multiple Layers
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import os
from tensorflow.python.framework import ops
ops.reset_default_graph()

# Create graph
sess = tf.Session()

# Create tensors
# Create a small random 'image' of size 4x4
x_shape = [1, 4, 4, 1]
x_val = np.random.uniform(size=x_shape)
x_data = tf.placeholder(tf.float32, shape=x_shape)

# Create a layer that takes a spatial moving window average
# Our window will be 2x2 with a stride of 2 for height and width
# The filter value will be 0.25 because we want the average of the 2x2 window
my_filter = tf.constant(0.25, shape=[2, 2, 1, 1])
my_strides = [1, 2, 2, 1]
mov_avg_layer = tf.nn.conv2d(x_data, my_filter, my_strides,
                             padding='SAME', name='Moving_Avg_Window')

# Define a custom layer which will be sigmoid(Ax+b) where
# x is a 2x2 matrix and A and b are 2x2 matrices
def custom_layer(input_matrix):
    input_matrix_squeezed = tf.squeeze(input_matrix)
    A = tf.constant([[1., 2.], [-1., 3.]])
    b = tf.constant(1., shape=[2, 2])
    temp1 = tf.matmul(A, input_matrix_squeezed)
    temp = tf.add(temp1, b)  # Ax + b
    return tf.sigmoid(temp)

# Add custom layer to graph
with tf.name_scope('Custom_Layer') as scope:
    custom_layer1 = custom_layer(mov_avg_layer)

# The output of the moving average layer is a 2x2 array of shape (1, 2, 2, 1)
# print(sess.run(mov_avg_layer, feed_dict={x_data: x_val}))

# After the custom operation, size is now 2x2 (squeezed out the size-1 dims)
print(sess.run(custom_layer1, feed_dict={x_data: x_val}))

merged = tf.summary.merge_all(key='summaries')
if not os.path.exists('tensorboard_logs/'):
    os.makedirs('tensorboard_logs/')
my_writer = tf.summary.FileWriter('tensorboard_logs/', sess.graph)
[[ 0.92834818  0.90523791]
 [ 0.84017658  0.88880724]]
04 Implementing Loss Functions
Summary
In this script, we will show how to implement different loss functions in TensorFlow.
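One of the losses covered here, softmax cross-entropy, can be sketched in plain Python to show exactly what `tf.nn.softmax_cross_entropy_with_logits` computes for a single example (a minimal sketch, no TensorFlow, using the same logits and target distribution as the script in this section):

```python
import math

def softmax(logits):
    # Convert unscaled logits into a probability distribution
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def softmax_cross_entropy(logits, target_dist):
    # L = -sum( actual * log(softmax(pred)) )
    probs = softmax(logits)
    return -sum(t * math.log(p) for t, p in zip(target_dist, probs))

loss = softmax_cross_entropy([1.0, -3.0, 10.0], [0.1, 0.02, 0.88])
print(loss)  # → approximately 1.1601256
```

The TensorFlow op fuses the softmax and the log for numerical stability, but the value it returns is the same.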
Plots of the Loss Functions
The output of the script in this section plots the various loss functions:
# 04_loss_functions.py
# Loss Functions
#----------------------------------
#
# This python script illustrates the different
# loss functions for regression and classification.
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
ops.reset_default_graph()

# Create graph
sess = tf.Session()

###### Numerical Predictions ######
x_vals = tf.linspace(-1., 1., 500)
target = tf.constant(0.)

# L2 loss
# L = (pred - actual)^2
l2_y_vals = tf.square(target - x_vals)
l2_y_out = sess.run(l2_y_vals)

# L1 loss
# L = abs(pred - actual)
l1_y_vals = tf.abs(target - x_vals)
l1_y_out = sess.run(l1_y_vals)

# Pseudo-Huber loss
# L = delta^2 * (sqrt(1 + ((pred - actual)/delta)^2) - 1)
delta1 = tf.constant(0.25)
phuber1_y_vals = tf.multiply(tf.square(delta1),
                             tf.sqrt(1. + tf.square((target - x_vals)/delta1)) - 1.)
phuber1_y_out = sess.run(phuber1_y_vals)

delta2 = tf.constant(5.)
phuber2_y_vals = tf.multiply(tf.square(delta2),
                             tf.sqrt(1. + tf.square((target - x_vals)/delta2)) - 1.)
phuber2_y_out = sess.run(phuber2_y_vals)

# Plot the output:
x_array = sess.run(x_vals)
plt.plot(x_array, l2_y_out, 'b-', label='L2 Loss')
plt.plot(x_array, l1_y_out, 'r--', label='L1 Loss')
plt.plot(x_array, phuber1_y_out, 'k-.', label='P-Huber Loss (0.25)')
plt.plot(x_array, phuber2_y_out, 'g:', label='P-Huber Loss (5.0)')
plt.ylim(-0.2, 0.4)
plt.legend(loc='lower right', prop={'size': 11})
plt.show()

###### Categorical Predictions ######
x_vals = tf.linspace(-3., 5., 500)
target = tf.constant(1.)
targets = tf.fill([500,], 1.)

# Hinge loss
# Use for predicting binary (-1, 1) classes
# L = max(0, 1 - (pred * actual))
hinge_y_vals = tf.maximum(0., 1. - tf.multiply(target, x_vals))
hinge_y_out = sess.run(hinge_y_vals)

# Cross entropy loss
# L = -actual * (log(pred)) - (1-actual)(log(1-pred))
xentropy_y_vals = - tf.multiply(target, tf.log(x_vals)) \
                  - tf.multiply((1. - target), tf.log(1. - x_vals))
xentropy_y_out = sess.run(xentropy_y_vals)

# Sigmoid cross entropy loss
# L = -actual * (log(sigmoid(pred))) - (1-actual)(log(1-sigmoid(pred)))
# or, equivalently,
# L = max(pred, 0) - pred * actual + log(1 + exp(-abs(pred)))
x_val_input = tf.expand_dims(x_vals, 1)
target_input = tf.expand_dims(targets, 1)
xentropy_sigmoid_y_vals = tf.nn.sigmoid_cross_entropy_with_logits(logits=x_val_input,
                                                                  labels=target_input)
xentropy_sigmoid_y_out = sess.run(xentropy_sigmoid_y_vals)

# Weighted (sigmoid) cross entropy loss
# L = -actual * log(sigmoid(pred)) * weight - (1-actual) * log(1-sigmoid(pred))
weight = tf.constant(0.5)
xentropy_weighted_y_vals = tf.nn.weighted_cross_entropy_with_logits(targets=targets,
                                                                    logits=x_vals,
                                                                    pos_weight=weight)
xentropy_weighted_y_out = sess.run(xentropy_weighted_y_vals)

# Plot the output
x_array = sess.run(x_vals)
plt.plot(x_array, hinge_y_out, 'b-', label='Hinge Loss')
plt.plot(x_array, xentropy_y_out, 'r--', label='Cross Entropy Loss')
plt.plot(x_array, xentropy_sigmoid_y_out, 'k-.', label='Cross Entropy Sigmoid Loss')
plt.plot(x_array, xentropy_weighted_y_out, 'g:', label='Weighted Cross Entropy Loss (x0.5)')
plt.ylim(-1.5, 3)
#plt.xlim(-1, 3)
plt.legend(loc='lower right', prop={'size': 11})
plt.show()

# Softmax entropy loss
# L = -actual * (log(softmax(pred))) - (1-actual)(log(1-softmax(pred)))
unscaled_logits = tf.constant([[1., -3., 10.]])
target_dist = tf.constant([[0.1, 0.02, 0.88]])
softmax_xentropy = tf.nn.softmax_cross_entropy_with_logits(logits=unscaled_logits,
                                                           labels=target_dist)
print(sess.run(softmax_xentropy))

# Sparse entropy loss
# Use when classes and targets have to be mutually exclusive
# L = sum( -actual * log(pred) )
unscaled_logits = tf.constant([[1., -3., 10.]])
sparse_target_dist = tf.constant([2])
sparse_xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(logits=unscaled_logits,
                                                                 labels=sparse_target_dist)
print(sess.run(sparse_xentropy))
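The key property of the Pseudo-Huber loss in the script, quadratic near the target and roughly linear far from it, can be checked in plain Python (a minimal sketch, no TensorFlow):

```python
import math

def pseudo_huber(pred, actual, delta):
    # L = delta^2 * (sqrt(1 + ((pred - actual)/delta)^2) - 1)
    return delta**2 * (math.sqrt(1.0 + ((pred - actual) / delta)**2) - 1.0)

# Near the target, Pseudo-Huber approximates L2/2 = (pred - actual)^2 / 2
small = pseudo_huber(0.01, 0.0, 1.0)
print(abs(small - 0.5 * 0.01**2))  # → tiny (on the order of 1e-9)

# Far from the target, its slope approaches delta, i.e. L1-like growth
far_slope = pseudo_huber(101.0, 0.0, 1.0) - pseudo_huber(100.0, 0.0, 1.0)
print(far_slope)  # → just under 1.0 (delta = 1.0)
```

This is why the curves for delta = 0.25 and delta = 5.0 in the first plot hug the L2 parabola near zero but flatten into straight lines away from it, making the loss robust to outliers while staying differentiable everywhere.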