Neural Networks (2)

"""Update the network's weights and biases by applying gradient descent using backpropagation to a single mini batch.The update equations usedclarification: sigma_x(f) means sum of all f(x)w = w - eta / m * sigma_x(partial derivation of weight corresponding to cost function)b = b - eta / m * sigma_x(partial derivation of bias corresponding to cost function):param mini_batch: a list of tuples "(x, y):param eta: learning rate:return:"""# init nabla bias vector and nabla weightsnabla_b = [np.zeros(b.shape) for b in self.biases]nabla_w = [np.zeros(w.shape) for w in self.weights]# for every data (x,y) in mini batchfor x, y in mini_batch:    # delta_nabla_b is 'partial derivation of weight corresponding to cost function' related to just one line date    # we need to sum it up for all the data    delta_nabla_b, delta_nabla_w = self.backprop(x, y)    # so we accumulate it each time a data is passed in    nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]    # we do the same for the weight    nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]