TF/06_Neural_Networks/04_Single_Hidden_Layer_Network


Implementing a One Layer Neural Network

We will use the Iris data for this exercise. We will build a fully connected neural network with one hidden layer to predict one of the flower attributes from the other three.

The four flower attributes are (1) sepal length, (2) sepal width, (3) petal length, and (4) petal width. We will use (1)-(3) to predict (4). The main purpose of this section is to illustrate how neural networks can implement regression just as easily as classification. Later in this chapter, we will extend this model to have multiple hidden layers.
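As a quick orientation, here is a minimal sketch of how the features and the regression target can be separated with scikit-learn and NumPy; the full script below uses the same slicing:

import numpy as np
from sklearn import datasets

iris = datasets.load_iris()
x_vals = np.array([x[0:3] for x in iris.data])  # sepal length, sepal width, petal length
y_vals = np.array([x[3] for x in iris.data])    # petal width (regression target)
print(x_vals.shape, y_vals.shape)               # (150, 3) (150,)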

Model

The model will have one hidden layer. If the hidden layer has 10 nodes, then the model will look like the following:

[Figure: One Hidden Layer Network]
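For a sense of scale, here is a quick parameter count for this 3 -> 10 -> 1 architecture, assuming the 10 hidden nodes shown in the figure:

# Trainable parameters in a 3 -> 10 -> 1 fully connected network
hidden_params = 3 * 10 + 10   # A1 weights (3x10) plus b1 biases (10) -> 40
output_params = 10 * 1 + 1    # A2 weights (10x1) plus b2 bias (1)    -> 11
total_params = hidden_params + output_params
print(total_params)           # 51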

We will use ReLU activations for both the hidden layer and the output, which is reasonable here since the target, petal width, is non-negative.
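ReLU is simply max(0, x); a short NumPy illustration:

import numpy as np

def relu(x):
    # Rectified linear unit: keep positive values, zero out negatives
    return np.maximum(0.0, x)

print(relu(np.array([-2.0, -0.5, 0.0, 1.5])))  # [0.  0.  0.  1.5]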

For the loss function, we will use the average MSE across the batch.
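In other words, the loss is the mean of the squared errors over the batch, matching tf.reduce_mean(tf.square(...)) in the script below; a NumPy sketch:

import numpy as np

def batch_mse(y_true, y_pred):
    # Mean squared error averaged across the batch
    return np.mean((y_true - y_pred) ** 2)

y_true = np.array([0.2, 1.3, 2.1])
y_pred = np.array([0.3, 1.0, 2.0])
print(batch_mse(y_true, y_pred))  # (0.01 + 0.09 + 0.01) / 3 ≈ 0.0367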

Graph of Loss Function (Average Batch MSE)

Running the script should result in a similar loss curve:

[Figure: Batch MSE]

# Implementing a one-layer Neural Network
#---------------------------------------
#
# We will illustrate how to create a one hidden layer NN
#
# We will use the iris data for this exercise
#
# We will build a one-hidden layer neural network
#  to predict the fourth attribute, Petal Width from
#  the other three (Sepal length, Sepal width, Petal length).

import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from sklearn import datasets
from tensorflow.python.framework import ops
ops.reset_default_graph()

iris = datasets.load_iris()
x_vals = np.array([x[0:3] for x in iris.data])
y_vals = np.array([x[3] for x in iris.data])

# Create graph session
# Modified here: allow soft device placement and log device placement
config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
sess = tf.Session(config=config)
#sess = tf.Session()

# make results reproducible
seed = 2
tf.set_random_seed(seed)
np.random.seed(seed)

# Split data into train/test = 80%/20%
train_indices = np.random.choice(len(x_vals), round(len(x_vals)*0.8), replace=False)
test_indices = np.array(list(set(range(len(x_vals))) - set(train_indices)))
x_vals_train = x_vals[train_indices]
x_vals_test = x_vals[test_indices]
y_vals_train = y_vals[train_indices]
y_vals_test = y_vals[test_indices]

# Normalize by column (min-max norm)
def normalize_cols(m):
    col_max = m.max(axis=0)
    col_min = m.min(axis=0)
    return (m - col_min) / (col_max - col_min)

x_vals_train = np.nan_to_num(normalize_cols(x_vals_train))
x_vals_test = np.nan_to_num(normalize_cols(x_vals_test))

# Declare batch size
batch_size = 50

# Initialize placeholders
x_data = tf.placeholder(shape=[None, 3], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)

# Create variables for both NN layers
hidden_layer_nodes = 10
A1 = tf.Variable(tf.random_normal(shape=[3, hidden_layer_nodes]))  # inputs -> hidden nodes
b1 = tf.Variable(tf.random_normal(shape=[hidden_layer_nodes]))     # one bias for each hidden node
A2 = tf.Variable(tf.random_normal(shape=[hidden_layer_nodes, 1]))  # hidden inputs -> 1 output
b2 = tf.Variable(tf.random_normal(shape=[1]))                      # 1 bias for the output

# Declare model operations
hidden_output = tf.nn.relu(tf.add(tf.matmul(x_data, A1), b1))
final_output = tf.nn.relu(tf.add(tf.matmul(hidden_output, A2), b2))

# Declare loss function (MSE)
loss = tf.reduce_mean(tf.square(y_target - final_output))

# Declare optimizer
my_opt = tf.train.GradientDescentOptimizer(0.005)
train_step = my_opt.minimize(loss)

# Initialize variables
init = tf.global_variables_initializer()
sess.run(init)

# Training loop
loss_vec = []
test_loss = []
for i in range(500):
    rand_index = np.random.choice(len(x_vals_train), size=batch_size)
    rand_x = x_vals_train[rand_index]
    rand_y = np.transpose([y_vals_train[rand_index]])
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})

    temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
    loss_vec.append(np.sqrt(temp_loss))  # record root of batch MSE

    test_temp_loss = sess.run(loss, feed_dict={x_data: x_vals_test, y_target: np.transpose([y_vals_test])})
    test_loss.append(np.sqrt(test_temp_loss))

    if (i+1) % 50 == 0:
        print('Generation: ' + str(i+1) + '. Loss = ' + str(temp_loss))

# Plot loss (MSE) over time
plt.plot(loss_vec, 'k-', label='Train Loss')
plt.plot(test_loss, 'r--', label='Test Loss')
plt.title('Loss (MSE) per Generation')
plt.legend(loc='upper right')
plt.xlabel('Generation')
plt.ylabel('Loss')
plt.show()
Generation: 50. Loss = 0.527902
Generation: 100. Loss = 0.228715
Generation: 150. Loss = 0.179773
Generation: 200. Loss = 0.107899
Generation: 250. Loss = 0.240029
Generation: 300. Loss = 0.15324
Generation: 350. Loss = 0.165901
Generation: 400. Loss = 0.0957248
Generation: 450. Loss = 0.121014
Generation: 500. Loss = 0.129494
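Note that the script is written against the TensorFlow 1.x graph API (tf.placeholder, tf.Session, tf.ConfigProto) and will not run unmodified on TensorFlow 2.x. A minimal sketch of the usual workaround, assuming TensorFlow 2.x is installed:

# Run TF1-style graph code under TensorFlow 2.x via the compatibility shim
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # restores placeholders, sessions, and graph execution

With these two lines replacing the plain import of tensorflow, the rest of the script should run as written.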