ML Practice: Adaptive Linear Neurons (Adaline)
Theory
In the earlier post ML实践-万事开头难 ("every beginning is hard"), a basic single-layer neural network, the perceptron, was introduced. Adaline is the follow-up to that most basic version.
The enhancements are:
1. Bernard Widrow introduced a cost function to be minimized.
2. Weight updates are driven by a linear activation function, rather than the discrete unit step function used in the perceptron (a small sketch of the two activations follows this list).
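A minimal sketch of the two activations side by side (the function names here are mine, for illustration only):

import numpy as np

def unit_step(z):
    # Perceptron activation: discrete class label, +1 or -1
    return np.where(z >= 0.0, 1, -1)

def linear_activation(z):
    # Adaline activation: the identity; this continuous output
    # (not the thresholded label) is what drives the weight updates
    return z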
Cost Function
Sum of Squared Errors (SSE):

J(\mathbf{w}) = \frac{1}{2} \sum_i \left( y^{(i)} - \phi(z^{(i)}) \right)^2

where \phi(z^{(i)}) = \mathbf{w}^T \mathbf{x}^{(i)} is the linear activation of the i-th sample. Ideally, this cost function is convex (bowl- or U-shaped), so gradient descent can find the minimum cost.
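A quick numeric check of the SSE on toy values (the numbers are made up):

import numpy as np

y = np.array([1, -1, 1])                # true class labels
output = np.array([0.8, -0.5, 0.3])     # linear activations phi(z)
cost = ((y - output) ** 2).sum() / 2.0  # J(w), exactly as computed in fit below
print(cost)                             # 0.39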
Gradient descent
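The update rule (restated here, since the original formula images did not survive; it matches the fit code below): the weights are moved in the opposite direction of the gradient of the cost,

\mathbf{w} := \mathbf{w} + \Delta \mathbf{w}, \qquad \Delta \mathbf{w} = -\eta \, \nabla J(\mathbf{w})

which for the SSE cost works out, per weight, to

\Delta w_j = -\eta \frac{\partial J}{\partial w_j} = \eta \sum_i \left( y^{(i)} - \phi(z^{(i)}) \right) x_j^{(i)}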
Feature scaling
When the features are standardized, i.e.

x'_j = \frac{x_j - \mu_j}{\sigma_j}

(subtract each feature's mean and divide by its standard deviation), gradient descent converges much faster.
Implementation
import numpy as np


class AdalineGD(object):
    """ADAptive LInear NEuron classifier.

    Parameters
    ----------
    eta : float
        Learning rate (between 0.0 and 1.0)
    n_iter : int
        Passes over the training dataset.

    Attributes
    ----------
    w_ : 1d-array
        Weights after fitting.
    cost_ : list
        Sum-of-squares cost value in every epoch.
    """
    def __init__(self, eta=0.01, n_iter=50):
        self.eta = eta
        self.n_iter = n_iter

    def fit(self, X, y):
        """Fit training data.

        Parameters
        ----------
        X : {array-like}, shape = [n_samples, n_features]
            Training vectors, where n_samples is the number of
            samples and n_features is the number of features.
        y : array-like, shape = [n_samples]
            Target values.

        Returns
        -------
        self : object
        """
        self.w_ = np.zeros(1 + X.shape[1])
        self.cost_ = []
        for i in range(self.n_iter):
            output = self.net_input(X)
            errors = (y - output)
            # X.T.dot(errors) is a matrix-vector product: one gradient
            # component per feature, summed over all samples
            self.w_[1:] += self.eta * X.T.dot(errors)
            self.w_[0] += self.eta * errors.sum()
            cost = (errors**2).sum() / 2.0
            self.cost_.append(cost)
        return self

    def net_input(self, X):
        """Calculate net input"""
        # np.dot(X, w) returns one net-input value per sample
        return np.dot(X, self.w_[1:]) + self.w_[0]

    def activation(self, X):
        """Compute linear activation"""
        return self.net_input(X)

    def predict(self, X):
        """Return class label after unit step"""
        return np.where(self.activation(X) >= 0.0, 1, -1)
The key point is the weight update (see the equivalence check after the next code block):
self.w_[1:] += self.eta * X.T.dot(errors)
self.w_[0] += self.eta * errors.sum()
and the newly added activation function:
def activation(self, X):
    """Compute linear activation"""
    return self.net_input(X)
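As promised above, a small check that the vectorized weight update equals the per-weight sum \Delta w_j = \eta \sum_i (y^{(i)} - \text{output}^{(i)}) x_j^{(i)} (the toy values are mine):

import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
errors = np.array([0.5, -0.5])   # y - output for two samples
eta = 0.01

vec = eta * X.T.dot(errors)      # the one-liner used in fit

# element-wise: delta_w[j] = eta * sum_i errors[i] * X[i, j]
loop = eta * np.array([sum(errors[i] * X[i, j] for i in range(2))
                       for j in range(2)])
print(np.allclose(vec, loop))    # True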
Testing
>>> fig, ax = plt.subplots(nrows=1, ncols=2, figsize=(8, 4))
>>> ada1 = AdalineGD(eta=0.01, n_iter=50).fit(X, y)
>>> ax[0].plot(range(1, len(ada1.cost_) + 1),
...            np.log10(ada1.cost_), marker='o')
>>> ax[0].set_xlabel('Epochs')
>>> ax[0].set_ylabel('log(Sum-squared-error)')
>>> ax[0].set_title('Adaline - Learning rate 0.01')
>>> ada2 = AdalineGD(eta=0.0001, n_iter=50).fit(X, y)
>>> ax[1].plot(range(1, len(ada2.cost_) + 1),
...            ada2.cost_, marker='o')
>>> ax[1].set_xlabel('Epochs')
>>> ax[1].set_ylabel('Sum-squared-error')
>>> ax[1].set_title('Adaline - Learning rate 0.0001')
>>> plt.show()
In the left plot, the learning rate (step size) is too large: every update overshoots the minimum, so the error never comes down. A toy 1-D illustration of this follows.
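The toy function is J(w) = w^2 with gradient 2w (the numbers are mine): once the per-step factor |1 - 2*eta| exceeds 1, each update lands farther from the minimum than the last.

w = 1.0
eta = 1.1                 # too large for this toy problem
for _ in range(5):
    w -= eta * 2 * w      # gradient descent step on J(w) = w**2
    print(w)              # -1.2, 1.44, -1.728, 2.0736, -2.48832: diverging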
This is fixed with feature scaling, here standardization of the features:
# subtract the mean, divide by the standard deviation
>>> X_std = np.copy(X)
>>> X_std[:,0] = (X[:,0] - X[:,0].mean()) / X[:,0].std()
>>> X_std[:,1] = (X[:,1] - X[:,1].mean()) / X[:,1].std()
Then fit the model on X_std instead of X:
ada.fit(X_std, y)
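Putting it together (a sketch; it assumes X and y are the same two features and binary labels used in the test above, and that AdalineGD and matplotlib are already in scope):

ada = AdalineGD(eta=0.01, n_iter=15).fit(X_std, y)

# with standardized features, eta=0.01 no longer overshoots:
# the SSE now decreases steadily epoch after epoch
plt.plot(range(1, len(ada.cost_) + 1), ada.cost_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Sum-squared-error')
plt.show()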