CNN Basics (1)
classification = score function + loss function + optimization
score function
- linear
- nonlinear
An example of mapping an image to class scores. For the sake of visualization, we assume the image only has 4 pixels (4 monochrome pixels, we are not considering color channels in this example for brevity), and that we have 3 classes (red (cat), green (dog), blue (ship) class). (Clarification: in particular, the colors here simply indicate 3 classes and are not related to the RGB channels.) We stretch the image pixels into a column and perform matrix multiplication to get the scores for each class. Note that this particular set of weights W is not good at all: the weights assign our cat image a very low cat score. In particular, this set of weights seems convinced that it’s looking at a dog.
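To make the linear score function concrete, here is a minimal NumPy sketch of the 4-pixel, 3-class mapping described above. The weight, bias, and pixel values are purely illustrative numbers (chosen so the cat score comes out low, as in the example):

```python
import numpy as np

# weights: one row per class (cat, dog, ship); illustrative values only
W = np.array([[0.2, -0.5, 0.1,  2.0],
              [1.5,  1.3, 2.1,  0.0],
              [0.0,  0.25, 0.2, -0.3]])
b = np.array([1.1, 3.2, -1.2])          # one bias per class
x = np.array([56.0, 231.0, 24.0, 2.0])  # the 4 pixels stretched into a column

scores = W.dot(x) + b                   # one score per class: [cat, dog, ship]
print(scores)                           # the cat score is very low; dog wins
```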
loss function (cost function, objective)
Multiclass Support Vector Machine (SVM) loss
$L_i = \sum_{j \neq y_i} \max(0,\, s_j - s_{y_i} + \Delta)$ (hinge loss); the squared hinge loss variant uses $\max(0,\, s_j - s_{y_i} + \Delta)^2$ instead.
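A minimal sketch of this loss for a single example, assuming the class scores are already computed and $\Delta = 1$ (the function name and sample values here are illustrative):

```python
import numpy as np

def svm_loss_single(scores, y, delta=1.0):
    """Multiclass SVM (hinge) loss for one example.
    scores: 1-D array of class scores; y: index of the correct class."""
    margins = np.maximum(0, scores - scores[y] + delta)
    margins[y] = 0            # the j == y_i term is excluded from the sum
    return np.sum(margins)

# example: the correct class (0) scores too low, so the loss is positive
print(svm_loss_single(np.array([-2.0, 3.0, 1.0]), y=0))  # (3+2+1) + (1+2+1) = 10
```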
cross-entropy loss
$L_i = -\log\left(\frac{e^{f_{y_i}}}{\sum_j e^{f_j}}\right)$ or equivalently $L_i = -f_{y_i} + \log\sum_j e^{f_j}$,
where $f_j(z) = \frac{e^{z_j}}{\sum_k e^{z_k}}$ is called the softmax function
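A sketch of the equivalent form $L_i = -f_{y_i} + \log\sum_j e^{f_j}$; subtracting the maximum score before exponentiating is a standard numerical-stability trick (not part of the original text) that leaves the result unchanged:

```python
import numpy as np

def softmax_cross_entropy(f, y):
    """Cross-entropy loss L_i = -f_{y_i} + log sum_j e^{f_j} for one example."""
    f = f - np.max(f)                     # shift scores for numerical stability
    return -f[y] + np.log(np.sum(np.exp(f)))

scores = np.array([3.2, 5.1, -1.7])       # illustrative unnormalized scores
print(softmax_cross_entropy(scores, y=0)) # ~2.04 when class 0 is correct
```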
the loss function quantifies our unhappiness with predictions on the training set
The Multiclass Support Vector Machine “wants” the score of the correct class to be higher than all other scores by at least a margin of delta. If any class has a score inside the red region (or higher), then there will be accumulated loss. Otherwise the loss will be zero. Our objective will be to find the weights that will simultaneously satisfy this constraint for all examples in the training data and give a total loss that is as low as possible.
- Regularization
  - L1 regularization: $R(W) = \sum_k \sum_l |W_{k,l}|$
  - L2 regularization: $R(W) = \sum_k \sum_l W_{k,l}^2$
  - Dropout
the full loss becomes the mean data loss over all $N$ training examples plus the regularization penalty:
$L = \frac{1}{N}\sum_i L_i + \lambda R(W)$
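Putting the pieces together, a sketch of the full loss (mean SVM data loss plus L2 penalty) over a small batch; `svm` logic follows the hinge-loss formula above, and `lam` stands in for the hyperparameter $\lambda$:

```python
import numpy as np

def full_loss(W, b, X, y, lam=0.1, delta=1.0):
    """L = (1/N) sum_i L_i + lam * R(W), with R(W) = sum_k sum_l W_{k,l}^2.
    W: C x D weights, X: N x D batch of examples, y: N correct-class indices."""
    scores = X.dot(W.T) + b                        # N x C class scores
    correct = scores[np.arange(len(y)), y][:, None]
    margins = np.maximum(0, scores - correct + delta)
    margins[np.arange(len(y)), y] = 0              # drop the j == y_i terms
    data_loss = np.mean(np.sum(margins, axis=1))   # (1/N) sum_i L_i
    reg_loss = lam * np.sum(W * W)                 # L2 regularization penalty
    return data_loss + reg_loss
```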
optimization
- Gradient Descent
- Stochastic Gradient Descent (SGD, or on-line gradient descent)
- Mini-batch Gradient Descent
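The three variants differ only in how much data feeds each gradient step. A schematic sketch, where `loss_and_grad` is a stand-in for any function returning the loss and its gradient (it is not defined in the original text):

```python
import numpy as np

def train(W, X, y, loss_and_grad, lr=1e-3, steps=100, batch_size=None):
    """batch_size=None -> full-batch gradient descent
       batch_size=1    -> stochastic (on-line) gradient descent
       batch_size=k    -> mini-batch gradient descent (e.g. k=256)"""
    N = X.shape[0]
    for _ in range(steps):
        if batch_size is None:
            xb, yb = X, y                      # the entire training set
        else:
            idx = np.random.choice(N, batch_size)
            xb, yb = X[idx], y[idx]            # a randomly sampled (mini-)batch
        _, dW = loss_and_grad(W, xb, yb)
        W -= lr * dW                           # step along the negative gradient
    return W
```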
backpropagation
An example circuit demonstrating the intuition behind the operations that backpropagation performs during the backward pass in order to compute the gradients on the inputs. Sum operation distributes gradients equally to all its inputs. Max operation routes the gradient to the higher input. Multiply gate takes the input activations, swaps them and multiplies by its gradient.
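A sketch of those three gate behaviors on a tiny made-up circuit, f = (x + y) * max(z, w): the add gate distributes the incoming gradient unchanged, the max gate routes it to the larger input, and the multiply gate swaps its inputs and scales by the incoming gradient.

```python
# forward pass through f = (x + y) * max(z, w), with illustrative inputs
x, y, z, w = 3.0, -4.0, 2.0, -1.0
q = x + y                   # add gate:      q = -1
m = max(z, w)               # max gate:      m =  2
f = q * m                   # multiply gate: f = -2

# backward pass: start with df/df = 1 and apply the local gate rules
df = 1.0
dq = m * df                 # multiply gate: swap inputs, scale by gradient
dm = q * df
dz = dm if z > w else 0.0   # max gate: route gradient to the winning input
dw = dm if w > z else 0.0
dx = dq                     # add gate: distribute gradient equally
dy = dq
print(dx, dy, dz, dw)       # 2.0 2.0 -1.0 0.0 for these inputs
```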