Deep Learning / Machine Learning: Distinguishing softmax and log_softmax


The softmax function

Also known as the normalized exponential function, the softmax is a generalization of the logistic function that "squashes" a K-dimensional vector z of arbitrary real values into a K-dimensional vector σ(z) of real values in the range [0, 1] that add up to 1. The function is given by

$$\sigma(z)_j = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}} \quad \text{for } j = 1, \ldots, K.$$

Clearly, this expression converts a K-dimensional input into K numbers, each in the range [0, 1], that together sum to 1. The output can therefore be read as a probability distribution: in a multi-class classification task (e.g., handwritten digit recognition, 0-9), the j-th output is the predicted probability that the class is j.
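As a minimal sketch of the formula above (plain NumPy, not tied to any particular framework; the function names are just for illustration), the following computes softmax in the standard numerically stable way, together with log_softmax from the title, which takes the log inside the computation rather than after it to avoid underflow:

```python
import numpy as np

def softmax(z):
    # Subtracting max(z) leaves the result unchanged (the common
    # factor e^{-max(z)} cancels) but prevents overflow in exp.
    shifted = z - np.max(z)
    exp_z = np.exp(shifted)
    return exp_z / np.sum(exp_z)

def log_softmax(z):
    # log(softmax(z)) computed directly; more stable than
    # np.log(softmax(z)) when some probabilities underflow to 0.
    shifted = z - np.max(z)
    return shifted - np.log(np.sum(np.exp(shifted)))

z = np.array([1.0, 2.0, 3.0])
p = softmax(z)
print(p)                                       # [0.09003057 0.24472847 0.66524096]
print(p.sum())                                 # 1.0
print(np.allclose(np.log(p), log_softmax(z)))  # True
```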

In probability theory, the output of the softmax function can be used to represent a categorical distribution – that is, a probability distribution over K different possible outcomes. In fact, it is the gradient-log-normalizer of the categorical probability distribution.
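To unpack that gradient-log-normalizer claim in one step: writing the categorical distribution in exponential-family form with log-normalizer $A(z) = \log \sum_{k} e^{z_k}$, differentiating $A$ with respect to $z_j$ recovers exactly the softmax:

$$\frac{\partial}{\partial z_j}\,\log\sum_{k=1}^{K} e^{z_k} = \frac{e^{z_j}}{\sum_{k=1}^{K} e^{z_k}} = \sigma(z)_j.$$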

The softmax function is used in various multiclass classification methods, such as multinomial logistic regression (also known as softmax regression)[1], multiclass linear discriminant analysis, naive Bayes classifiers, and artificial neural networks.[2] Specifically, in multinomial logistic regression and linear discriminant analysis, the input to the function is the result of K distinct linear functions, and the predicted probability for the j-th class given a sample vector x and a weighting vector w is:

The formula below is the vector form of softmax. Unlike the earlier version, x and w here are vectors of a fixed dimension, with one weight vector per class, and the input to the softmax is the set of K inner products between x and each weight vector.

$$P(y=j \mid \mathbf{x}) = \frac{e^{\mathbf{x}^{\mathsf{T}}\mathbf{w}_j}}{\sum_{k=1}^{K} e^{\mathbf{x}^{\mathsf{T}}\mathbf{w}_k}}$$
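A short sketch of this predicted-probability computation, with hypothetical dimensions (4 features, K = 3 classes) and random weights chosen purely for illustration:

```python
import numpy as np

def softmax(z):
    shifted = z - np.max(z)
    return np.exp(shifted) / np.sum(np.exp(shifted))

# Hypothetical sizes for illustration: 4 input features, K = 3 classes.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))   # column j is the weight vector w_j for class j
x = rng.normal(size=4)        # a sample vector

scores = x @ W                # the K distinct linear functions x^T w_j
probs = softmax(scores)       # P(y = j | x) for j = 1, ..., K
print(probs, probs.sum())     # a distribution over the K classes; sums to 1
```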