Deep Learning: Visualizing Keras MNIST Accuracy (Cambridge Coding Academy Supplement 2)


Cambridge Coding Academy's "Deep learning for complete beginners: Recognising handwritten digits" is an article that introduces the deep-learning toolkit Keras by applying it to recognizing handwritten digits from MNIST (hereafter "the CCA code"). This series of articles supplements it with several visualizations. The previous article described the four-layer neural network that the CCA code builds; this article looks at that network's accuracy at recognizing handwritten digits.


Running the CCA code produces text output like the following:


Train on 54000 samples, validate on 6000 samples
Epoch 1/20
54000/54000 [==============================] - 27s - loss: 0.2289 - acc: 0.9329 - val_loss: 0.0871 - val_acc: 0.9745
Epoch 2/20
54000/54000 [==============================] - 26s - loss: 0.0822 - acc: 0.9748 - val_loss: 0.0864 - val_acc: 0.9747
Epoch 3/20
54000/54000 [==============================] - 35s - loss: 0.0523 - acc: 0.9829 - val_loss: 0.0652 - val_acc: 0.9812
Epoch 4/20
54000/54000 [==============================] - 36s - loss: 0.0385 - acc: 0.9876 - val_loss: 0.0678 - val_acc: 0.9800
Epoch 5/20
54000/54000 [==============================] - 35s - loss: 0.0275 - acc: 0.9909 - val_loss: 0.0845 - val_acc: 0.9772
Epoch 6/20
54000/54000 [==============================] - 35s - loss: 0.0212 - acc: 0.9931 - val_loss: 0.0672 - val_acc: 0.9842
Epoch 7/20
54000/54000 [==============================] - 28s - loss: 0.0200 - acc: 0.9931 - val_loss: 0.0831 - val_acc: 0.9785
Epoch 8/20
54000/54000 [==============================] - 23s - loss: 0.0181 - acc: 0.9940 - val_loss: 0.0898 - val_acc: 0.9803
Epoch 9/20
54000/54000 [==============================] - 23s - loss: 0.0160 - acc: 0.9945 - val_loss: 0.0788 - val_acc: 0.9807
Epoch 10/20
54000/54000 [==============================] - 23s - loss: 0.0121 - acc: 0.9961 - val_loss: 0.0863 - val_acc: 0.9790
Epoch 11/20
54000/54000 [==============================] - 24s - loss: 0.0126 - acc: 0.9957 - val_loss: 0.0756 - val_acc: 0.9852
Epoch 12/20
54000/54000 [==============================] - 24s - loss: 0.0110 - acc: 0.9964 - val_loss: 0.0846 - val_acc: 0.9813
Epoch 13/20
54000/54000 [==============================] - 24s - loss: 0.0135 - acc: 0.9955 - val_loss: 0.0860 - val_acc: 0.9810
Epoch 14/20
54000/54000 [==============================] - 24s - loss: 0.0066 - acc: 0.9976 - val_loss: 0.0862 - val_acc: 0.9817
Epoch 15/20
54000/54000 [==============================] - 24s - loss: 0.0073 - acc: 0.9977 - val_loss: 0.0907 - val_acc: 0.9817
Epoch 16/20
54000/54000 [==============================] - 24s - loss: 0.0123 - acc: 0.9959 - val_loss: 0.0904 - val_acc: 0.9810
Epoch 17/20
54000/54000 [==============================] - 24s - loss: 0.0098 - acc: 0.9971 - val_loss: 0.0861 - val_acc: 0.9815
Epoch 18/20
54000/54000 [==============================] - 25s - loss: 0.0088 - acc: 0.9969 - val_loss: 0.0962 - val_acc: 0.9827
Epoch 19/20
54000/54000 [==============================] - 25s - loss: 0.0079 - acc: 0.9977 - val_loss: 0.0847 - val_acc: 0.9840
Epoch 20/20
54000/54000 [==============================] - 25s - loss: 0.0024 - acc: 0.9993 - val_loss: 0.0814 - val_acc: 0.9833

The CCA code trains the four-layer neural network it builds for 20 epochs; in each epoch 54,000 images are used for training and 6,000 for validation. After the 20th epoch the accuracy reaches 99.93% on the training set and 98.33% on the validation set. The CCA code does not plot how accuracy and loss change over the 20 epochs; adding the following code draws both curves over the course of training:


import matplotlib.pyplot as plt

log = model.fit(X_train, Y_train,  # Train the model using the training set...
                batch_size=batch_size, nb_epoch=num_epochs,
                verbose=1, validation_split=0.1)  # ...holding out 10% of the data for validation

plt.figure(facecolor='white')

plt.subplot(2, 1, 1)
plt.plot(log.history['acc'], 'b-', label='Training Accuracy')
plt.plot(log.history['val_acc'], 'r-', label='Validation Accuracy')
plt.legend(loc='best')
plt.xlabel('Epochs')
plt.axis([0, num_epochs, 0.9, 1])

plt.subplot(2, 1, 2)
plt.plot(log.history['loss'], 'b-', label='Training Loss')
plt.plot(log.history['val_loss'], 'r-', label='Validation Loss')
plt.legend(loc='best')
plt.xlabel('Epochs')
plt.axis([0, num_epochs, 0, 1])

plt.show()

  • The call to model.fit() returns the training record (log)
  • The first subplot plots accuracy
  • The second subplot plots loss
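Besides plotting, log.history can be queried directly, since it is a plain dict of per-epoch lists, e.g. to find the epoch with the best validation accuracy. A minimal sketch with a hand-filled stand-in dict using the first three epochs from the log above (the 'acc'/'val_acc' key names follow the Keras 1.x API used by the CCA code; newer Keras versions rename them 'accuracy'/'val_accuracy' and nb_epoch became epochs):

```python
# Stand-in for log.history as returned by Keras 1.x model.fit();
# the values are the first three epochs copied from the log above.
history = {
    'acc':      [0.9329, 0.9748, 0.9829],
    'val_acc':  [0.9745, 0.9747, 0.9812],
    'loss':     [0.2289, 0.0822, 0.0523],
    'val_loss': [0.0871, 0.0864, 0.0652],
}

# Index of the epoch with the highest validation accuracy.
best_epoch = max(range(len(history['val_acc'])),
                 key=history['val_acc'].__getitem__)

print('Best epoch:', best_epoch + 1,
      'val_acc:', history['val_acc'][best_epoch])
```

This kind of query is handy when validation accuracy peaks before the last epoch, as it does in the full 20-epoch log above (epoch 11 reaches val_acc 0.9852).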

The resulting plot is shown below:



The accuracy of 99.93/98.33 seems a tiny bit better than what the CCA article reports?! The reason is probably the randomness inherent in each training run, though this remains unconfirmed. In the next article we will examine which handwritten digits are misclassified, and why.
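As a preview of that analysis, the misclassified digits can be located by comparing the network's predicted class (the argmax of the output probabilities) against the true labels. A sketch using hand-made stand-in data in place of the real model output (in practice, probs would be model.predict(X_test) with shape (10000, 10), and y_true the integer test labels):

```python
import numpy as np

# Stand-in for model.predict(X_test): one row of class probabilities
# per image. (Hypothetical tiny example; real output is (10000, 10).)
probs = np.array([
    [0.05, 0.90, 0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.00],  # looks like a 1
    [0.10, 0.10, 0.70, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.10],  # looks like a 2
    [0.00, 0.00, 0.00, 0.8, 0.0, 0.0, 0.0, 0.2, 0.0, 0.00],  # looks like a 3
])
y_true = np.array([1, 7, 3])          # ground-truth digits

y_pred = probs.argmax(axis=1)         # most probable class per image
wrong = np.flatnonzero(y_pred != y_true)  # indices of misclassified images
print(wrong)                          # → [1]
```

The indices in wrong can then be used to pull out and display the misclassified images themselves, which is what the next article does.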


