Keras implementation of VGG16 on the CIFAR10 dataset
Source: Internet · Site: 程序博客网 · Date: 2024/05/16 14:24
A 16-layer VGG-style network (13 convolutional layers plus 3 fully connected layers, with batch normalization and dropout added after most weight layers) trained on CIFAR10:

import keras
from keras.datasets import cifar10
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from keras.optimizers import SGD
from keras import regularizers

# Load CIFAR10, scale pixels to [0, 1], and one-hot encode the labels.
# (The original listing skipped the /255 normalization; without it the
# raw 0-255 inputs make training much less stable.)
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)

weight_decay = 0.0005  # L2 regularization strength for every weight layer
nb_epoch = 100
batch_size = 32

model = Sequential()

# layer1 32*32*3
model.add(Conv2D(64, (3, 3), padding='same', input_shape=(32, 32, 3),
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.3))

# layer2 32*32*64
model.add(Conv2D(64, (3, 3), padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))

# layer3 16*16*64
model.add(Conv2D(128, (3, 3), padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))

# layer4 16*16*128
model.add(Conv2D(128, (3, 3), padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))

# layer5 8*8*128
model.add(Conv2D(256, (3, 3), padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))

# layer6 8*8*256
model.add(Conv2D(256, (3, 3), padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))

# layer7 8*8*256
model.add(Conv2D(256, (3, 3), padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))

# layer8 4*4*256
model.add(Conv2D(512, (3, 3), padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))

# layer9 4*4*512
model.add(Conv2D(512, (3, 3), padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))

# layer10 4*4*512
model.add(Conv2D(512, (3, 3), padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))

# layer11 2*2*512
model.add(Conv2D(512, (3, 3), padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))

# layer12 2*2*512
model.add(Conv2D(512, (3, 3), padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(Dropout(0.4))

# layer13 2*2*512
model.add(Conv2D(512, (3, 3), padding='same',
                 kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.5))

# layer14 1*1*512
model.add(Flatten())
model.add(Dense(512, kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())

# layer15 512
model.add(Dense(512, kernel_regularizer=regularizers.l2(weight_decay)))
model.add(Activation('relu'))
model.add(BatchNormalization())

# layer16 512 -> 10 classes
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('softmax'))

# SGD with Nesterov momentum and a slow time-based learning-rate decay
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd,
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=nb_epoch, batch_size=batch_size,
          validation_split=0.1, verbose=1)
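The spatial sizes noted in the layer comments (32*32 down to 1*1) follow directly from the five MaxPooling2D stages: `padding='same'` keeps every 3x3 convolution size-preserving, so only pooling shrinks the feature map, halving height and width each time. A quick sanity check in plain Python (the helper name is illustrative, not a Keras API):

```python
def feature_map_sizes(input_size=32, pool_stages=5):
    """Side length of the feature map after each 2x2 max-pooling stage."""
    sizes = [input_size]
    for _ in range(pool_stages):
        sizes.append(sizes[-1] // 2)  # MaxPooling2D(pool_size=(2, 2)) halves H and W
    return sizes

print(feature_map_sizes())  # [32, 16, 8, 4, 2, 1]
```

This is why the Flatten after layer13 sees a 1*1*512 tensor, i.e. a plain 512-dimensional vector.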
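Each Conv2D layer's parameter count is kernel_h * kernel_w * in_channels * filters, plus one bias per filter; for example the first 3x3 convolution from 3 to 64 channels holds (3*3*3 + 1) * 64 = 1,792 parameters. A small helper to reproduce the per-layer figures that `model.summary()` would report (the function is our illustration, not part of Keras):

```python
def conv2d_params(kernel, in_channels, filters):
    """Parameter count of a Conv2D layer with a kernel x kernel window."""
    return (kernel * kernel * in_channels + 1) * filters

print(conv2d_params(3, 3, 64))     # layer1:  1792
print(conv2d_params(3, 512, 512))  # layer13: 2359808
```

Most of the network's capacity sits in the later 512-channel blocks, which is typical of VGG-style architectures.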