Implementing Multilayer Perceptron (MLP) and Convolutional Neural Network (CNN) Models in Keras to Classify Handwritten Digit Images



With some spare time on my hands, I used Keras to implement MLP and CNN models and classify handwritten digit images. The test data is a large collection of handwritten digits 0-9, each image 28x28 (784) pixels, from the famous Modified National Institute of Standards and Technology (MNIST) database. Obtaining the data is easy: as long as you have a working internet connection, Keras's built-in API mnist.load_data() downloads it automatically on first call into the current user's ~/.keras/datasets folder (for example, for the user davidhopper, the download path is C:\Users\davidhopper\.keras\datasets on Windows and /home/davidhopper/.keras/datasets on Linux).
All source code in this article comes from the book "Develop Deep Learning Models on Theano and TensorFlow Using Keras" by Jason Brownlee; I have modified it slightly so that it works better with the Keras 2 API.
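As a quick sanity check before building any model, you can load the dataset and inspect its shapes. This is a minimal sketch using only the mnist.load_data() call mentioned above; the shapes in the comments assume the standard MNIST split of 60,000 training and 10,000 test images.

# quick check of the MNIST data returned by Keras
from keras.datasets import mnist

(X_train, y_train), (X_test, y_test) = mnist.load_data()
print(X_train.shape)   # expected: (60000, 28, 28)
print(y_train.shape)   # expected: (60000,)
print(X_test.shape)    # expected: (10000, 28, 28)
print(X_train.dtype)   # uint8 pixel values in the range 0-255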

1. The MLP Model

The MLP model uses the classic "input layer - hidden layer - output layer" structure. The input layer has 28x28 = 784 units and the output layer has 10 units (one per digit label 0-9). The flowchart is as follows:

Flowchart: Start -> Input Layer (784 inputs) -> Hidden Layer (784 neurons) -> Output Layer (10 outputs) -> End

The implementation is as follows:

# baseline MLP for the MNIST dataset
import numpy
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.utils import np_utils

# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)

# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()

# flatten 28*28 images to a 784 vector for each image
num_pixels = X_train.shape[1] * X_train.shape[2]
X_train = X_train.reshape(X_train.shape[0], num_pixels).astype("float32")
X_test = X_test.reshape(X_test.shape[0], num_pixels).astype("float32")

# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255

# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]

# define baseline model
def baseline_model():
    # create model
    model = Sequential()
    model.add(Dense(num_pixels, input_dim = num_pixels, kernel_initializer = "normal", activation = "relu"))
    model.add(Dense(num_classes, kernel_initializer = "normal", activation = "softmax"))
    # compile model
    model.compile(loss = "categorical_crossentropy", optimizer = "adam", metrics = ["accuracy"])
    return model

# build the model
model = baseline_model()
# fit the model
model.fit(X_train, y_train, validation_data = (X_test, y_test), epochs = 10, batch_size = 200, verbose = 2)
# final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose = 0)
print("Baseline Error: %.2f%%" % (100 - scores[1] * 100))
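Before looking at the training output, one step in the listing above is worth illustrating: the one-hot encoding turns each digit label into a 10-element vector with a single 1. A minimal sketch of what np_utils.to_categorical does to a few labels:

# illustrate the one-hot encoding used for the labels
import numpy
from keras.utils import np_utils

labels = numpy.array([3, 0, 9])
encoded = np_utils.to_categorical(labels, 10)
print(encoded.shape)   # (3, 10)
print(encoded)         # each row is a length-10 vector with a 1 at the index of the label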

With an ordinary GeForce GT 740 GPU for acceleration, each training epoch took about 3 to 5 seconds, and the final error rate was 1.79%.

(C:\Users\Administrator\Anaconda3) d:\Python\code\mlp>python mlp_for_mnist.py
Using TensorFlow backend.
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
2017-12-16 10:15:04.752675: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GT 740 major: 3 minor: 0 memoryClockRate(GHz): 1.0585
pciBusID: 0000:01:00.0
totalMemory: 1.00GiB freeMemory: 834.86MiB
2017-12-16 10:15:04.752806: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GT 740, pci bus id: 0000:01:00.0, compute capability: 3.0)
 - 5s - loss: 0.2811 - acc: 0.9206 - val_loss: 0.1412 - val_acc: 0.9575
Epoch 2/10
 - 3s - loss: 0.1116 - acc: 0.9680 - val_loss: 0.0919 - val_acc: 0.9709
Epoch 3/10
 - 3s - loss: 0.0714 - acc: 0.9798 - val_loss: 0.0786 - val_acc: 0.9776
Epoch 4/10
 - 3s - loss: 0.0503 - acc: 0.9857 - val_loss: 0.0743 - val_acc: 0.9770
Epoch 5/10
 - 3s - loss: 0.0371 - acc: 0.9892 - val_loss: 0.0685 - val_acc: 0.9789
Epoch 6/10
 - 4s - loss: 0.0268 - acc: 0.9927 - val_loss: 0.0631 - val_acc: 0.9798
Epoch 7/10
 - 3s - loss: 0.0205 - acc: 0.9947 - val_loss: 0.0624 - val_acc: 0.9808
Epoch 8/10
 - 3s - loss: 0.0141 - acc: 0.9969 - val_loss: 0.0618 - val_acc: 0.9797
Epoch 9/10
 - 3s - loss: 0.0107 - acc: 0.9978 - val_loss: 0.0583 - val_acc: 0.9818
Epoch 10/10
 - 3s - loss: 0.0082 - acc: 0.9984 - val_loss: 0.0581 - val_acc: 0.9821
Baseline Error: 1.79%
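Once training finishes, the fitted model can be kept for later use instead of being retrained every run. A minimal sketch, assuming the model, X_test, and y_test variables from the listing above are still in scope; the file name mlp_for_mnist.h5 is just an example, and saving in HDF5 format requires the h5py package:

# save the trained model (architecture + weights + optimizer state) to disk
model.save("mlp_for_mnist.h5")   # hypothetical file name

# later, reload it without rebuilding or retraining
from keras.models import load_model
restored = load_model("mlp_for_mnist.h5")
print(restored.evaluate(X_test, y_test, verbose = 0))   # same test loss/accuracy as before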

2. A Simpler CNN Model

We first implement a fairly simple CNN model with the structure "input layer - convolutional layer - max pooling layer - dropout layer - flatten layer - hidden layer - output layer". The input layer takes a three-dimensional tensor of shape 1x28x28, and the output layer has 10 outputs (one per digit label 0-9). The flowchart is as follows:

Flowchart: Start -> Input Layer (1x28x28) -> Convolutional Layer (32 feature maps, 5x5 kernel) -> Max Pooling Layer (2x2) -> Dropout Layer (drop 20%) -> Flatten Layer (converts the feature maps to a vector) -> Hidden Layer (128 neurons) -> Output Layer (10 outputs) -> End

The implementation is as follows:

# Simple CNN for the MNIST Dataset
import numpy
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.layers.convolutional import Convolution2D
from keras.layers.convolutional import MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
K.set_image_dim_ordering("th")

# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)

# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()

# reshape to be [samples][channels][width][height]
X_train = X_train.reshape(X_train.shape[0], 1, 28, 28).astype("float32")
X_test = X_test.reshape(X_test.shape[0], 1, 28, 28).astype("float32")

# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255

# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]

# define a simple CNN model
def simple_cnn_model():
    # create the model
    model = Sequential()
    model.add(Convolution2D(32, (5, 5), input_shape = (1, 28, 28), activation = "relu", padding = "valid"))
    model.add(MaxPooling2D(pool_size = (2, 2)))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(128, activation = "relu"))
    model.add(Dense(num_classes, activation = "softmax"))
    # compile the model
    model.compile(loss = "categorical_crossentropy", optimizer = "adam", metrics = ["accuracy"])
    return model

# build the model
model = simple_cnn_model()
# fit the model
model.fit(X_train, y_train, validation_data = (X_test, y_test), epochs = 10, batch_size = 200, verbose = 2)
# evaluate the model
scores = model.evaluate(X_test, y_test, verbose = 0)
print("CNN error: %.2f%%" % (100 - scores[1] * 100))
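Because K.set_image_dim_ordering("th") puts the channel dimension first, each layer's output shape can be checked with model.summary() before training. As a rough sketch of the arithmetic (my own calculation, not part of the original listing): a 5x5 "valid" convolution on a 1x28x28 input gives 32 feature maps of 24x24, the 2x2 max pooling halves that to 32x12x12, and Flatten therefore feeds 32*12*12 = 4608 values into the 128-neuron hidden layer.

# inspect the layer output shapes and parameter counts of the simple CNN
model.summary()
# expected output shapes (channels first):
#   Convolution2D -> (None, 32, 24, 24)
#   MaxPooling2D  -> (None, 32, 12, 12)
#   Dropout       -> (None, 32, 12, 12)
#   Flatten       -> (None, 4608)
#   Dense         -> (None, 128)
#   Dense         -> (None, 10)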

With an ordinary GeForce GT 740 GPU for acceleration, each training epoch took about 11 to 13 seconds, and the final error rate was 1.04%.

(C:\Users\Administrator\Anaconda3) d:\Python\code\mlp>python simple_cnn_for_mnist.py
Using TensorFlow backend.
2017-12-16 08:43:05.851596: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GT 740 major: 3 minor: 0 memoryClockRate(GHz): 1.0585
pciBusID: 0000:01:00.0
totalMemory: 1.00GiB freeMemory: 834.86MiB
2017-12-16 08:43:05.852300: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GT 740, pci bus id: 0000:01:00.0, compute capability: 3.0)
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
 - 13s - loss: 0.2340 - acc: 0.9342 - val_loss: 0.0818 - val_acc: 0.9742
Epoch 2/10
 - 13s - loss: 0.0734 - acc: 0.9782 - val_loss: 0.0468 - val_acc: 0.9843
Epoch 3/10
 - 12s - loss: 0.0533 - acc: 0.9837 - val_loss: 0.0434 - val_acc: 0.9859
Epoch 4/10
 - 12s - loss: 0.0405 - acc: 0.9876 - val_loss: 0.0406 - val_acc: 0.9866
Epoch 5/10
 - 13s - loss: 0.0338 - acc: 0.9892 - val_loss: 0.0341 - val_acc: 0.9881
Epoch 6/10
 - 11s - loss: 0.0278 - acc: 0.9912 - val_loss: 0.0325 - val_acc: 0.9892
Epoch 7/10
 - 13s - loss: 0.0236 - acc: 0.9926 - val_loss: 0.0359 - val_acc: 0.9884
Epoch 8/10
 - 12s - loss: 0.0207 - acc: 0.9938 - val_loss: 0.0336 - val_acc: 0.9885
Epoch 9/10
 - 12s - loss: 0.0170 - acc: 0.9946 - val_loss: 0.0308 - val_acc: 0.9896
Epoch 10/10
 - 12s - loss: 0.0144 - acc: 0.9957 - val_loss: 0.0333 - val_acc: 0.9896
CNN error: 1.04%

3. A More Complex CNN Model

Next, we implement a more complex CNN model with the structure "input layer - convolutional layer - max pooling layer - convolutional layer - max pooling layer - dropout layer - flatten layer - hidden layer - hidden layer - output layer". The input layer takes a three-dimensional tensor of shape 1x28x28, and the output layer has 10 outputs (one per digit label 0-9). The flowchart is as follows:

Flowchart: Start -> Input Layer (1x28x28) -> Convolutional Layer (30 feature maps, 5x5 kernel) -> Max Pooling Layer (2x2) -> Convolutional Layer (15 feature maps, 3x3 kernel) -> Max Pooling Layer (2x2) -> Dropout Layer (drop 20%) -> Flatten Layer (converts the feature maps to a vector) -> Hidden Layer (128 neurons) -> Hidden Layer (50 neurons) -> Output Layer (10 outputs) -> End
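For this deeper network the same output-size bookkeeping applies. The following sketch (my own arithmetic, not from the book) traces how the 28x28 input shrinks layer by layer, so the Flatten layer ends up feeding 15*5*5 = 375 values into the first hidden layer.

# trace the spatial dimensions through the large CNN ("valid" convolutions, 2x2 pooling)
def conv_output(size, kernel):
    # "valid" convolution: output = input - kernel + 1
    return size - kernel + 1

size = 28
size = conv_output(size, 5)   # 24, with 30 feature maps
size = size // 2              # 12, after 2x2 max pooling
size = conv_output(size, 3)   # 10, with 15 feature maps
size = size // 2              # 5, after 2x2 max pooling
print(15 * size * size)       # 375 values fed into the Flatten layer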

The implementation is as follows:

# Large CNN for the MNIST Dataset
import numpy
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.layers.convolutional import Convolution2D
from keras.layers.convolutional import MaxPooling2D
from keras.utils import np_utils
from keras import backend as K
K.set_image_dim_ordering("th")

# fix random seed for reproducibility
seed = 7
numpy.random.seed(seed)

# load data
(X_train, y_train), (X_test, y_test) = mnist.load_data()

# reshape to be [samples][channels][width][height]
X_train = X_train.reshape(X_train.shape[0], 1, 28, 28).astype("float32")
X_test = X_test.reshape(X_test.shape[0], 1, 28, 28).astype("float32")

# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255

# one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
num_classes = y_test.shape[1]

# define the large CNN model
def large_cnn_model():
    # create the model
    model = Sequential()
    model.add(Convolution2D(30, (5, 5), input_shape = (1, 28, 28), activation = "relu", padding = "valid"))
    model.add(MaxPooling2D(pool_size = (2, 2)))
    model.add(Convolution2D(15, (3, 3), activation = "relu"))
    model.add(MaxPooling2D(pool_size = (2, 2)))
    model.add(Dropout(0.2))
    model.add(Flatten())
    model.add(Dense(128, activation = "relu"))
    model.add(Dense(50, activation = "relu"))
    model.add(Dense(num_classes, activation = "softmax"))
    # compile the model
    model.compile(loss = "categorical_crossentropy", optimizer = "adam", metrics = ["accuracy"])
    return model

# build the model
model = large_cnn_model()
# fit the model
model.fit(X_train, y_train, validation_data = (X_test, y_test), epochs = 10, batch_size = 200, verbose = 2)
# evaluate the model
scores = model.evaluate(X_test, y_test, verbose = 0)
print("Large CNN Error: %.2f%%" % (100 - scores[1] * 100))

With an ordinary GeForce GT 740 GPU for acceleration, each training epoch took about 10 to 13 seconds (see the log below), and the final error rate was 0.80%.

(C:\Users\Administrator\Anaconda3) d:\Python\code\mlp>python large_cnn_for_mnist.py
Using TensorFlow backend.
2017-12-16 09:09:36.676635: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:1030] Found device 0 with properties:
name: GeForce GT 740 major: 3 minor: 0 memoryClockRate(GHz): 1.0585
pciBusID: 0000:01:00.0
totalMemory: 1.00GiB freeMemory: 834.86MiB
2017-12-16 09:09:36.676757: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\gpu\gpu_device.cc:1120] Creating TensorFlow device (/device:GPU:0) -> (device: 0, name: GeForce GT 740, pci bus id: 0000:01:00.0, compute capability: 3.0)
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
 - 13s - loss: 0.3966 - acc: 0.8776 - val_loss: 0.1004 - val_acc: 0.9681
Epoch 2/10
 - 11s - loss: 0.0941 - acc: 0.9706 - val_loss: 0.0593 - val_acc: 0.9811
Epoch 3/10
 - 12s - loss: 0.0688 - acc: 0.9786 - val_loss: 0.0381 - val_acc: 0.9884
Epoch 4/10
 - 10s - loss: 0.0564 - acc: 0.9821 - val_loss: 0.0333 - val_acc: 0.9885
Epoch 5/10
 - 10s - loss: 0.0477 - acc: 0.9851 - val_loss: 0.0294 - val_acc: 0.9906
Epoch 6/10
 - 12s - loss: 0.0426 - acc: 0.9860 - val_loss: 0.0278 - val_acc: 0.9907
Epoch 7/10
 - 12s - loss: 0.0375 - acc: 0.9883 - val_loss: 0.0253 - val_acc: 0.9918
Epoch 8/10
 - 12s - loss: 0.0336 - acc: 0.9896 - val_loss: 0.0247 - val_acc: 0.9918
Epoch 9/10
 - 12s - loss: 0.0314 - acc: 0.9902 - val_loss: 0.0227 - val_acc: 0.9928
Epoch 10/10
 - 12s - loss: 0.0271 - acc: 0.9913 - val_loss: 0.0243 - val_acc: 0.9920
Large CNN Error: 0.80%
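To see the trained network classify an individual image, you can run a single test sample through model.predict and take the most probable class. A minimal sketch, assuming the model and the reshaped X_test and one-hot y_test from the listing above are still in scope:

# classify one test image with the trained CNN
import numpy

sample = X_test[0:1]                      # keep the batch dimension: shape (1, 1, 28, 28)
probabilities = model.predict(sample)     # 10 softmax probabilities for the sample
predicted_digit = numpy.argmax(probabilities[0])
true_digit = numpy.argmax(y_test[0])      # y_test was one-hot encoded above
print(predicted_digit, true_digit)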

4. Implementation Details

For a longer piece of Python code, typing it directly into the interactive interpreter is clearly impractical; we need a text editor to create a Python source file (for the MLP model I named it mlp_for_mnist.py). Note: on Windows, do not use Notepad or WordPad to write source files, because neither of them reliably produces correct UTF-8 encoding. I strongly recommend Sublime Text (http://www.sublimetext.com/); it is paid software, but if you do not register it, it only occasionally shows a purchase dialog and remains fully usable, and its syntax highlighting and code hints are excellent. My second choice is Notepad++ (https://notepad-plus-plus.org/); its hints are somewhat weaker, but it is free and works well.
Once the source file is ready, open the Start menu, launch "Anaconda3 (64-bit) -> Anaconda Prompt", change to the folder containing the source file, and run "python mlp_for_mnist.py" to execute the code. If an error is reported, fix and save the source file, then run "python mlp_for_mnist.py" again:

(C:\Users\Administrator\Anaconda3) C:\Users\Administrator>cd /D d:\Python\code\mlp
(C:\Users\Administrator\Anaconda3) d:\Python\code\mlp>python mlp_for_mnist.py