Machine Learning Experiments (8): Deep Learning with Eigenvalue Decay Regularization, Part 3
Source: Internet · Editor: 程序博客网 · Posted: 2024/05/30 23:04
```python
from __future__ import print_function
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, model_from_json
from keras.layers.core import Dense, Dropout, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.optimizers import SGD
from keras.utils import np_utils
# Importing the Eigenvalue Decay regularizer:
from EigenvalueDecay import EigenvalueRegularizer

batch_size = 32
nb_classes = 10
nb_epoch = 10
data_augmentation = False  # set to True to enable augmentation

# input image dimensions
img_rows, img_cols = 32, 32
# the CIFAR10 images are RGB
img_channels = 3

# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = cifar10.load_data()
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='same',
                        input_shape=(img_channels, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(32, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Convolution2D(64, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(Convolution2D(64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Flatten())
# Applying Eigenvalue Decay with C=0.1 on both dense layers:
model.add(Dense(512, W_regularizer=EigenvalueRegularizer(0.1)))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes, W_regularizer=EigenvalueRegularizer(0.1)))
model.add(Activation('softmax'))

# train the model using SGD + momentum
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])

X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255

if not data_augmentation:
    print('Not using data augmentation.')
    model.fit(X_train, Y_train,
              batch_size=batch_size,
              nb_epoch=nb_epoch,
              validation_data=(X_test, Y_test),
              shuffle=True)
else:
    print('Using real-time data augmentation.')
    # this will do preprocessing and realtime data augmentation
    datagen = ImageDataGenerator(
        featurewise_center=False,             # set input mean to 0 over the dataset
        samplewise_center=False,              # set each sample mean to 0
        featurewise_std_normalization=False,  # divide inputs by std of the dataset
        samplewise_std_normalization=False,   # divide each input by its std
        zca_whitening=False,                  # apply ZCA whitening
        rotation_range=0,                     # randomly rotate images in the range (degrees, 0 to 180)
        width_shift_range=0.1,                # randomly shift images horizontally (fraction of total width)
        height_shift_range=0.1,               # randomly shift images vertically (fraction of total height)
        horizontal_flip=True,                 # randomly flip images horizontally
        vertical_flip=False)                  # randomly flip images vertically

    # compute quantities required for featurewise normalization
    # (std, mean, and principal components if ZCA whitening is applied)
    datagen.fit(X_train)

    # fit the model on the batches generated by datagen.flow()
    model.fit_generator(datagen.flow(X_train, Y_train, batch_size=batch_size),
                        samples_per_epoch=X_train.shape[0],
                        nb_epoch=nb_epoch,
                        validation_data=(X_test, Y_test))

model.save_weights('my_model_weights.h5')
print('model weights trained with Eigenvalue Decay saved')

# ******************************* tricking Keras ;-) *******************************
# Creating a new model, identical but without Eigenvalue Decay, to use with the
# weights adjusted with Eigenvalue Decay:
# **********************************************************************************
model = Sequential()
model.add(Convolution2D(32, 3, 3, border_mode='same',
                        input_shape=(img_channels, img_rows, img_cols)))
model.add(Activation('relu'))
model.add(Convolution2D(32, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Convolution2D(64, 3, 3, border_mode='same'))
model.add(Activation('relu'))
model.add(Convolution2D(64, 3, 3))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))

json_string = model.to_json()
open('my_model_struct.json', 'w').write(json_string)
print('model structure without Eigenvalue Decay saved')

model = model_from_json(open('my_model_struct.json').read())
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])

# Loading the weights trained with Eigenvalue Decay:
model.load_weights('my_model_weights.h5')

# Showing the same results as before:
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score of saved model:', score[0])
print('Test accuracy of saved model:', score[1])
```
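The `EigenvalueDecay` module imported above is not included in this post. As a rough sketch of what the regularizer computes, assuming the standard Eigenvalue Decay formulation (a penalty of C times the dominant eigenvalue of WᵀW, estimated by power iteration so the penalty stays differentiable), a NumPy stand-in might look like the following. This is only an illustration of the idea, not the actual `EigenvalueRegularizer` used in the experiment:

```python
import numpy as np

def eigenvalue_decay_penalty(W, C=0.1, n_iter=50):
    """Illustrative Eigenvalue Decay penalty: C times the dominant
    eigenvalue of W^T W, approximated by power iteration."""
    WtW = W.T @ W
    # start from a normalized all-ones vector
    v = np.ones(WtW.shape[0]) / np.sqrt(WtW.shape[0])
    for _ in range(n_iter):
        v = WtW @ v
        v /= np.linalg.norm(v)
    # Rayleigh quotient of the converged vector ~= largest eigenvalue
    dominant = v @ WtW @ v
    return C * dominant

# W^T W = diag(9, 1), so the dominant eigenvalue is 9 and the
# penalty with C=0.1 is 0.9:
W = np.array([[3.0, 0.0], [0.0, 1.0]])
print(eigenvalue_decay_penalty(W, C=0.1))  # → 0.9 (up to numerical precision)
```

Penalizing the dominant eigenvalue shrinks the largest singular value of the weight matrix, which bounds how much the layer can amplify its input; the "tricking Keras" step at the end works precisely because the regularizer only alters the training loss, so the trained weights load cleanly into an architecturally identical model without it.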