KL distance and methods for handling zero values

Pasted from: http://www.cppblog.com/sosi/archive/2010/10/16/130127.aspx

In probability theory and information theory, the Kullback–Leibler divergence[1][2][3] (also information divergence, information gain, relative entropy, or KLIC) is a non-symmetric measure of the difference between two probability distributions P and Q. KL measures the expected number of extra bits required to code samples from P when using a code based on Q, rather than using a code based on P. Typically P represents the "true" distribution of data, observations, or a precisely calculated theoretical distribution. The measure Q typically represents a theory, model, description, or approximation of P.

Although it is often intuited as a distance metric, the KL divergence is not a true metric: for example, the KL divergence from P to Q is not necessarily the same as the KL divergence from Q to P.

KL divergence is a special case of a broader class of divergences called f-divergences. Originally introduced by Solomon Kullback and Richard Leibler in 1951 as the directed divergence between two distributions, it is not the same as a divergence in calculus. However, the KL divergence can be derived from the Bregman divergence.

 

 

Note that P usually refers to the data set we already have, while Q represents the theoretical result. The physical meaning of the KL divergence is therefore the number of extra bits needed when using a code based on Q to encode samples drawn from P, compared with using a code based on P itself!
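As a quick numerical illustration of this "extra bits" interpretation (the two distributions below are just an assumed example, not taken from the original post), in MATLAB:

   P = [0.5 0.5];                      % "true" distribution of the data
   Q = [0.9 0.1];                      % model used to build the code
   extraBits = sum(P .* log2(P ./ Q))  % about 0.74 extra bits per symbol, on average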

 

The KL divergence is sometimes called the KL distance, but it is not a distance in the strict sense: it does not satisfy the triangle inequality.

 

The KL divergence is also asymmetric; of course, if you want a symmetric version, you can use

D_s(P\|Q) = \frac{1}{2}\left[D_{\mathrm{KL}}(P\|Q) + D_{\mathrm{KL}}(Q\|P)\right]

 

Below are the discrete and continuous definitions of the KL divergence:

D_{\mathrm{KL}}(P\|Q) = \sum_i P(i) \log \frac{P(i)}{Q(i)}. \!

D_{\mathrm{KL}}(P\|Q) = \int_{-\infty}^\infty p(x) \log \frac{p(x)}{q(x)} \; dx, \!

Note that p(x) and q(x) are the probability density functions of the two random variables P and Q, and that D(P\|Q) is a single number, not a function, as the figure below shows!

 

[Figure KL-Gauss-Example.png: the KL area to be integrated]
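The discrete definition above translates directly into a couple of MATLAB lines. This is only a sketch with made-up example vectors, assuming strictly positive probabilities over the same events:

   P = [0.4 0.4 0.2];               % example P(i)
   Q = [0.5 0.3 0.2];               % example Q(i)
   D_PQ = sum(P .* log(P ./ Q));    % D_KL(P||Q) in nats (use log2 for bits)
   D_QP = sum(Q .* log(Q ./ P));    % D_KL(Q||P); in general not equal to D_PQ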

 

A very powerful property of the KL divergence:

The Kullback–Leibler divergence is always non-negative,

D_{\mathrm{KL}}(P\|Q) \geq 0, \,

a result known as Gibbs' inequality, with D_{\mathrm{KL}}(P\|Q) zero if and only if P = Q.

 

When computing the KL divergence, be aware that on sparse data sets the computation typically runs into zeros in the denominator!
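A minimal sketch of one common way to handle those zeros, the same eps-smoothing idea recommended in the KLDIV documentation below (the vectors and the smoothing choice here are our own example):

   P = [0.4 0.3 0.2 0.1];           % observed distribution
   Q = [0.5 0.5 0 0];               % sparse model: Q(i) = 0 where P(i) > 0 gives log(P./Q) = Inf
   Q = Q + eps;  Q = Q / sum(Q);    % add a tiny constant, then renormalize
   P = P + eps;  P = P / sum(P);    % same treatment also avoids 0*log(0) = NaN terms
   D = sum(P .* log(P ./ Q))        % now finite (though large in the components where Q was zero)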


The MATLAB function KLDIV gives the KL divergence between two distributions:

Description

KLDIV Kullback-Leibler or Jensen-Shannon divergence between two distributions.

KLDIV(X,P1,P2) returns the Kullback-Leibler divergence between two distributions specified over the M variable values in vector X. P1 is a length-M vector of probabilities representing distribution 1, and P2 is a length-M vector of probabilities representing distribution 2. Thus, the probability of value X(i) is P1(i) for distribution 1 and P2(i) for distribution 2. The Kullback-Leibler divergence is given by:

  KL(P1(x),P2(x)) = sum[P1(x).log(P1(x)/P2(x))]

If X contains duplicate values, there will be a warning message, and these values will be treated as distinct values. (I.e., the actual values do not enter into the computation, but the probabilities for the two duplicate values will be considered as probabilities corresponding to two unique values.) The elements of probability vectors P1 and P2 must each sum to 1 +/- .00001.

A "log of zero" warning will be thrown for zero-valued probabilities. Handle this however you wish. Adding 'eps' or some other small value to all probabilities seems reasonable. (Renormalize if necessary.)

KLDIV(X,P1,P2,'sym') returns a symmetric variant of the Kullback-Leibler divergence, given by [KL(P1,P2)+KL(P2,P1)]/2. See Johnson and Sinanovic (2001).

KLDIV(X,P1,P2,'js') returns the Jensen-Shannon divergence, given by [KL(P1,Q)+KL(P2,Q)]/2, where Q = (P1+P2)/2. See the Wikipedia article for "Kullback–Leibler divergence". This is equal to 1/2 the so-called "Jeffrey divergence." See Rubner et al. (2000).
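Both variants can be written down directly from the base formula; the lines below are only a sketch with our own inline helper (not the actual KLDIV source), assuming strictly positive probability vectors:

   kl = @(A,B) sum(A .* log(A ./ B));        % plain KL(A||B)
   P1 = [0.4 0.4 0.2];  P2 = [0.5 0.3 0.2];
   D_sym = (kl(P1,P2) + kl(P2,P1)) / 2;      % symmetric variant, as in KLDIV(X,P1,P2,'sym')
   Qm    = (P1 + P2) / 2;
   D_js  = (kl(P1,Qm) + kl(P2,Qm)) / 2;      % Jensen-Shannon divergence, as in KLDIV(X,P1,P2,'js')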

EXAMPLE: Let the event set and probability sets be as follows:
   X = [1 2 3 3 4]';
   P1 = ones(5,1)/5;
   P2 = [0 0 .5 .2 .3]' + eps;
Note that the event set here has duplicate values (two 3's). These will be treated as DISTINCT events by KLDIV. If you want these to be treated as the SAME event, you will need to collapse their probabilities together before running KLDIV. One way to do this is to use UNIQUE to find the set of unique events, and then iterate over that set, summing probabilities for each instance of each unique event. Here, we just leave the duplicate values to be treated independently (the default):
   KL = kldiv(X,P1,P2);
   KL = 
       19.4899

Note also that we avoided the log-of-zero warning by adding 'eps' to all probability values in P2. We didn't need to renormalize because we're still within the sum-to-one tolerance.

REFERENCES:
1) Cover, T.M. and J.A. Thomas. "Elements of Information Theory," Wiley, 1991.
2) Johnson, D.H. and S. Sinanovic. "Symmetrizing the Kullback-Leibler distance." IEEE Transactions on Information Theory (Submitted).
3) Rubner, Y., Tomasi, C., and Guibas, L. J., 2000. "The Earth Mover's distance as a metric for image retrieval." International Journal of Computer Vision, 40(2): 99-121.
4) "Kullback–Leibler divergence." Wikipedia, The Free Encyclopedia. http://en.wikipedia.org/wiki/Kullback–Leibler_divergence
