Machine Learning Experiment (7): Deep Learning with the Eigenvalue Decay Regularization Method, Part 2



Notice: All rights reserved. To repost, please contact the author and cite the source: http://blog.csdn.net/u013719780?viewmode=contents


This article builds on Machine Learning Experiment (6): Deep Learning with the Eigenvalue Decay Regularization Method, Part 1. Here the experiment is run on the MNIST dataset. The code is written with Keras and is easy to follow, so without further ado, here it is.
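For intuition before the full script: the EigenvalueRegularizer used below penalizes the dominant eigenvalue of W^T W for a layer's weight matrix W, scaled by a constant C. The following is a rough NumPy sketch of that penalty using power iteration — an illustration only, not the actual EigenvalueDecay implementation, whose internals may differ.

```python
import numpy as np

def dominant_eigenvalue(W, n_iter=50):
    """Approximate the largest eigenvalue of W^T W by power iteration."""
    WtW = W.T @ W
    v = np.ones(WtW.shape[0]) / np.sqrt(WtW.shape[0])
    for _ in range(n_iter):
        v = WtW @ v
        v /= np.linalg.norm(v)
    return v @ WtW @ v  # Rayleigh quotient of the converged vector

def eigenvalue_decay_penalty(W, C=0.001):
    # Term added to the training loss: C times the dominant eigenvalue
    return C * dominant_eigenvalue(W)

W = np.array([[3.0, 0.0],
              [0.0, 1.0]])
# W^T W = diag(9, 1), so the dominant eigenvalue is 9
print(eigenvalue_decay_penalty(W, C=0.001))  # ≈ 0.009
```

Penalizing the largest eigenvalue of W^T W (the squared spectral norm of W) discourages any single direction in weight space from dominating, which is the regularization effect exploited in this experiment.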




# This example belongs to the Keras repository and was slightly modified
# to apply Eigenvalue Decay on two of the dense layers of the MLP model
from __future__ import print_function
import numpy as np
np.random.seed(1337)  # for reproducibility

from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD, Adam, RMSprop
from keras.utils import np_utils

# Importing the Eigenvalue Decay regularizer:
from EigenvalueDecay import EigenvalueRegularizer

from keras.models import model_from_json

batch_size = 128
nb_classes = 10
nb_epoch = 36

# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()

X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

model = Sequential()
model.add(Dense(515, input_shape=(784,),
                W_regularizer=EigenvalueRegularizer(0.001)))  # Applying Eigenvalue Decay with C=0.001
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(515,
                W_regularizer=EigenvalueRegularizer(0.01)))  # Applying Eigenvalue Decay with C=0.01
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(10,
                W_regularizer=EigenvalueRegularizer(0.001)))  # Applying Eigenvalue Decay with C=0.001
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])

history = model.fit(X_train, Y_train,
                    batch_size=batch_size, nb_epoch=nb_epoch,
                    verbose=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])

model.save_weights('my_model_weights.h5')
print('model weights trained with Eigenvalue decay saved')

# **********************************  tricking Keras ;-)  ***********************************************************
# Creating a new model, similar but without Eigenvalue Decay, to use with the weights adjusted with Eigenvalue Decay:
# *******************************************************************************************************************
model = Sequential()
model.add(Dense(515, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(515))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(10))
model.add(Activation('softmax'))

json_string = model.to_json()
open('my_model_struct.json', 'w').write(json_string)
print('model structure without Eigenvalue Decay saved')

model = model_from_json(open('my_model_struct.json').read())
model.compile(loss='categorical_crossentropy',
              optimizer=RMSprop(),
              metrics=['accuracy'])

# Loading the weights trained with Eigenvalue Decay:
model.load_weights('my_model_weights.h5')

# Showing the same results as before:
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score of saved model:', score[0])
print('Test accuracy of saved model:', score[1])
60000 train samples
10000 test samples
Train on 60000 samples, validate on 10000 samples
Epoch 1/36
60000/60000 [==============================] - 42s - loss: 0.2854 - acc: 0.9225 - val_loss: 0.1207 - val_acc: 0.9615
Epoch 2/36
60000/60000 [==============================] - 44s - loss: 0.1472 - acc: 0.9676 - val_loss: 0.0990 - val_acc: 0.9695
Epoch 3/36
60000/60000 [==============================] - 40s - loss: 0.1178 - acc: 0.9762 - val_loss: 0.0827 - val_acc: 0.9734
Epoch 4/36
60000/60000 [==============================] - 39s - loss: 0.1026 - acc: 0.9805 - val_loss: 0.0698 - val_acc: 0.9779
Epoch 5/36
60000/60000 [==============================] - 40s - loss: 0.0913 - acc: 0.9830 - val_loss: 0.0641 - val_acc: 0.9819
Epoch 6/36
60000/60000 [==============================] - 45s - loss: 0.0843 - acc: 0.9852 - val_loss: 0.0612 - val_acc: 0.9820
Epoch 7/36
60000/60000 [==============================] - 40s - loss: 0.0766 - acc: 0.9874 - val_loss: 0.0631 - val_acc: 0.9816
Epoch 8/36
60000/60000 [==============================] - 39s - loss: 0.0724 - acc: 0.9884 - val_loss: 0.0617 - val_acc: 0.9828
Epoch 9/36
60000/60000 [==============================] - 39s - loss: 0.0696 - acc: 0.9892 - val_loss: 0.0655 - val_acc: 0.9817
Epoch 10/36
60000/60000 [==============================] - 39s - loss: 0.0652 - acc: 0.9902 - val_loss: 0.0701 - val_acc: 0.9815
Epoch 11/36
60000/60000 [==============================] - 39s - loss: 0.0627 - acc: 0.9910 - val_loss: 0.0578 - val_acc: 0.9845
Epoch 12/36
60000/60000 [==============================] - 44s - loss: 0.0601 - acc: 0.9913 - val_loss: 0.0572 - val_acc: 0.9841
Epoch 13/36
60000/60000 [==============================] - 47s - loss: 0.0586 - acc: 0.9920 - val_loss: 0.0615 - val_acc: 0.9834
Epoch 14/36
60000/60000 [==============================] - 53s - loss: 0.0551 - acc: 0.9927 - val_loss: 0.0659 - val_acc: 0.9817
Epoch 15/36
60000/60000 [==============================] - 45s - loss: 0.0547 - acc: 0.9927 - val_loss: 0.0590 - val_acc: 0.9849
Epoch 16/36
60000/60000 [==============================] - 45s - loss: 0.0533 - acc: 0.9935 - val_loss: 0.0616 - val_acc: 0.9842
Epoch 17/36
60000/60000 [==============================] - 43s - loss: 0.0507 - acc: 0.9936 - val_loss: 0.0715 - val_acc: 0.9802
Epoch 18/36
60000/60000 [==============================] - 51s - loss: 0.0508 - acc: 0.9938 - val_loss: 0.0633 - val_acc: 0.9851
Epoch 19/36
60000/60000 [==============================] - 55s - loss: 0.0497 - acc: 0.9942 - val_loss: 0.0656 - val_acc: 0.9848
Epoch 20/36
60000/60000 [==============================] - 50s - loss: 0.0498 - acc: 0.9940 - val_loss: 0.0666 - val_acc: 0.9838
Epoch 21/36
60000/60000 [==============================] - 48s - loss: 0.0494 - acc: 0.9941 - val_loss: 0.0627 - val_acc: 0.9841
Epoch 22/36
60000/60000 [==============================] - 44s - loss: 0.0469 - acc: 0.9947 - val_loss: 0.0711 - val_acc: 0.9826
Epoch 23/36
60000/60000 [==============================] - 54s - loss: 0.0469 - acc: 0.9950 - val_loss: 0.0622 - val_acc: 0.9843
Epoch 24/36
60000/60000 [==============================] - 56s - loss: 0.0457 - acc: 0.9950 - val_loss: 0.0683 - val_acc: 0.9844
Epoch 25/36
60000/60000 [==============================] - 55s - loss: 0.0450 - acc: 0.9950 - val_loss: 0.0716 - val_acc: 0.9831
Epoch 26/36
60000/60000 [==============================] - 49s - loss: 0.0448 - acc: 0.9952 - val_loss: 0.0683 - val_acc: 0.9833
Epoch 27/36
60000/60000 [==============================] - 39s - loss: 0.0450 - acc: 0.9952 - val_loss: 0.0667 - val_acc: 0.9848
Epoch 28/36
60000/60000 [==============================] - 38s - loss: 0.0450 - acc: 0.9948 - val_loss: 0.0660 - val_acc: 0.9849
Epoch 29/36
60000/60000 [==============================] - 38s - loss: 0.0427 - acc: 0.9957 - val_loss: 0.0611 - val_acc: 0.9854
Epoch 30/36
60000/60000 [==============================] - 38s - loss: 0.0431 - acc: 0.9957 - val_loss: 0.0653 - val_acc: 0.9843
Epoch 31/36
60000/60000 [==============================] - 39s - loss: 0.0421 - acc: 0.9958 - val_loss: 0.0604 - val_acc: 0.9854
Epoch 32/36
60000/60000 [==============================] - 38s - loss: 0.0416 - acc: 0.9960 - val_loss: 0.0651 - val_acc: 0.9854
Epoch 33/36
60000/60000 [==============================] - 38s - loss: 0.0423 - acc: 0.9960 - val_loss: 0.0874 - val_acc: 0.9805
Epoch 34/36
60000/60000 [==============================] - 38s - loss: 0.0426 - acc: 0.9958 - val_loss: 0.0747 - val_acc: 0.9827
Epoch 35/36
60000/60000 [==============================] - 39s - loss: 0.0428 - acc: 0.9956 - val_loss: 0.0669 - val_acc: 0.9842
Epoch 36/36
60000/60000 [==============================] - 52s - loss: 0.0412 - acc: 0.9962 - val_loss: 0.0735 - val_acc: 0.9845
Test score: 0.0734886097288
Test accuracy: 0.9845
model weights trained with Eigenvalue decay saved
model structure without Eigenvalue Decay saved
Test score of saved model: 0.0734886097288
Test accuracy of saved model: 0.9845
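Note why the "tricking Keras" step at the end works: loading the original model structure from JSON would require the custom EigenvalueRegularizer class to be available, but save_weights/load_weights only store and restore the weight arrays themselves, matched by layer order and shape. The regularizer adds a penalty to the training loss and leaves the saved weights untouched, so an architecturally identical model without it reproduces the same test score, as the output above shows. A minimal pure-NumPy sketch of this idea (the load_weights helper and the placeholder arrays are hypothetical, mimicking the shape check that weight loading performs):

```python
import numpy as np

# Placeholder kernel/bias arrays for the three Dense layers of the MLP above,
# standing in for weights hypothetically trained with Eigenvalue Decay.
trained_weights = [np.full((784, 515), 0.01), np.zeros(515),
                   np.full((515, 515), 0.01), np.zeros(515),
                   np.full((515, 10), 0.01), np.zeros(10)]

# A freshly built model without the regularizer has the same layer shapes:
fresh_weights = [np.zeros_like(w) for w in trained_weights]

def load_weights(target, source):
    """Shape-checked copy, mimicking how saved weights are restored:
    only layer order and array shapes must match; any regularizer is
    irrelevant, since it only added a penalty to the training loss."""
    assert len(target) == len(source), "same number of weight arrays required"
    for t, s in zip(target, source):
        assert t.shape == s.shape, "architectures must match layer by layer"
    return [s.copy() for s in source]

fresh_weights = load_weights(fresh_weights, trained_weights)
print(all(np.array_equal(a, b) for a, b in zip(fresh_weights, trained_weights)))  # True
```

Had the second model differed in any layer size (say, Dense(512) instead of Dense(515)), the shape check would fail, which is exactly why the regularizer-free model must mirror the original architecture layer by layer.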
