Clustering: Gaussian Mixture Models (GMM)


The EM Algorithm

References:
http://www.cnblogs.com/jerrylead/archive/2011/04/06/2006936.html
http://www.cnblogs.com/jerrylead/archive/2011/04/06/2006924.html
http://blog.csdn.net/gugugujiawei/article/details/45583051

Given a training set $\{x^{(1)}, \ldots, x^{(m)}\}$ of mutually independent examples, we want to find the latent class $z^{(i)}$ of each example so that $p(x, z)$ is maximized. The maximum-likelihood objective for $p(x, z)$ is:

$$\ell(\theta) = \sum_{i=1}^{m} \log p(x^{(i)}; \theta) = \sum_{i=1}^{m} \log \sum_{z^{(i)}} p(x^{(i)}, z^{(i)}; \theta)$$

The first step takes the logarithm of the likelihood; the second step sums the joint probability over every possible class $z$ of each example. Maximizing over $\theta$ directly is generally hard because of the latent variable $z$; once $z$ is fixed, however, the maximization becomes easy.
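To make the difficulty concrete, here is a minimal numeric sketch (not from the original post; the data and parameters are invented for illustration) that evaluates $\ell(\theta)$ for a one-dimensional two-component mixture:

import numpy as np

def gauss_pdf(x, mu, sigma):
    # 1-D Gaussian density
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# hypothetical 1-D data and mixture parameters (illustrative only)
x = np.array([-1.2, 0.3, 4.9, 5.4])
weights, means, stds = [0.5, 0.5], [0.0, 5.0], [1.0, 1.0]

# ell(theta) = sum_i log sum_k w_k * N(x_i; mu_k, sigma_k)
# the sum over components sits inside the log, so the terms do not
# decouple across components and there is no closed-form maximizer
per_component = np.array([w * gauss_pdf(x, m, s)
                          for w, m, s in zip(weights, means, stds)])
print(np.sum(np.log(per_component.sum(axis=0))))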

EM is an effective method for optimization problems involving latent variables. Although we cannot maximize $\ell(\theta)$ directly, we can repeatedly construct a lower bound on $\ell$ (the E-step) and then optimize that lower bound (the M-step).

For each example $i$, let $Q_i$ be some distribution over the latent variable $z$, satisfying $\sum_z Q_i(z) = 1$ and $Q_i(z) \geq 0$.

By Jensen's inequality (the logarithm is concave), we obtain:

$$
\ell(\theta) = \sum_i \log \sum_{z^{(i)}} p(x^{(i)}, z^{(i)}; \theta)
= \sum_i \log \sum_{z^{(i)}} Q_i(z^{(i)}) \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}
\geq \sum_i \sum_{z^{(i)}} Q_i(z^{(i)}) \log \frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}
$$

This can be viewed as a lower bound on $\ell(\theta)$. We now solve for the $Q_i(z^{(i)})$ that makes the inequality hold with equality: Jensen's inequality is tight when the ratio $\frac{p(x^{(i)}, z^{(i)}; \theta)}{Q_i(z^{(i)})}$ is a constant $c$ independent of $z^{(i)}$. Combined with $\sum_z Q_i(z) = 1$, this gives:

$$Q_i(z^{(i)}) = \frac{p(x^{(i)}, z^{(i)}; \theta)}{\sum_z p(x^{(i)}, z; \theta)} = p(z^{(i)} \mid x^{(i)}; \theta)$$

Thus, with $\theta$ fixed, the probability that each sample belongs to each class $z$ is just the posterior distribution. This is the E-step. In the M-step, given these per-sample class memberships, we adjust $\theta$ to maximize the lower bound on $\ell(\theta)$.
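Specializing to a mixture of $K$ Gaussians with mixing weights $w_k$, means $\mu_k$, and covariances $\Sigma_k$ (these closed-form updates are standard and match the code below, though the original post does not spell them out), one EM iteration becomes:

E-step, compute the responsibilities:

$$r_{ik} = \frac{w_k \, \mathcal{N}(x^{(i)}; \mu_k, \Sigma_k)}{\sum_{j=1}^{K} w_j \, \mathcal{N}(x^{(i)}; \mu_j, \Sigma_j)}$$

M-step, re-estimate the parameters, with $N_k = \sum_i r_{ik}$:

$$\mu_k = \frac{1}{N_k} \sum_i r_{ik} \, x^{(i)}, \qquad
\Sigma_k = \frac{1}{N_k} \sum_i r_{ik} \, (x^{(i)} - \mu_k)(x^{(i)} - \mu_k)^{\top}, \qquad
w_k = \frac{N_k}{n}$$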

A code example for fitting a GMM with EM is shown below:

import numpy as np


def loadDataSet(fileName):
    # read a tab-separated file of floats, one data point per line
    dataSet = []
    with open(fileName) as fr:
        for line in fr.readlines():
            curLine = line.strip().split('\t')
            fltLine = list(map(float, curLine))
            dataSet.append(fltLine)
    return dataSet


class GMM:
    def __init__(self, k=4, eps=0.00001):
        self.k = k      # number of Gaussian components
        self.eps = eps  # convergence threshold on the log-likelihood

    def fit_EM(self, X, max_iters=1000):
        # n = number of data points, d = dimension of data points
        n, d = X.shape

        # randomly choose k of the data points as the starting means
        mu = X[np.random.choice(n, self.k, False), :]

        # initialize the covariance matrix of each Gaussian to the identity
        Sigma = [np.eye(d) for _ in range(self.k)]

        # initialize the weight of each Gaussian uniformly
        w = [1. / self.k] * self.k

        # responsibility matrix: one row per point, one column per Gaussian
        R = np.zeros((n, self.k))

        log_likelihoods = []

        # multivariate Gaussian density evaluated at every row of X
        P = lambda mu, s: np.linalg.det(s) ** -.5 * (2 * np.pi) ** (-d / 2.) \
            * np.exp(-.5 * np.einsum('ij, ij -> i',
                                     X - mu, np.dot(np.linalg.inv(s), (X - mu).T).T))

        # iterate until convergence or max_iters iterations
        while len(log_likelihoods) < max_iters:

            # E-step: unnormalized membership of each point in each Gaussian
            for k in range(self.k):
                R[:, k] = w[k] * P(mu[k], Sigma[k])

            # log-likelihood of the data under the current parameters
            log_likelihood = np.sum(np.log(np.sum(R, axis=1)))
            log_likelihoods.append(log_likelihood)

            # normalize so that the responsibility matrix is row stochastic
            R = (R.T / np.sum(R, axis=1)).T

            # effective number of data points assigned to each Gaussian
            N_ks = np.sum(R, axis=0)

            # M-step: re-estimate each Gaussian from the new responsibilities
            for k in range(self.k):
                # means
                mu[k] = 1. / N_ks[k] * np.sum(R[:, k] * X.T, axis=1).T
                x_mu = np.matrix(X - mu[k])
                # covariances
                Sigma[k] = np.array(1 / N_ks[k] * np.dot(np.multiply(x_mu.T, R[:, k]), x_mu))
                # mixing weights
                w[k] = 1. / n * N_ks[k]

            # check for convergence
            if len(log_likelihoods) < 2:
                continue
            if np.abs(log_likelihood - log_likelihoods[-2]) < self.eps:
                break

        # bind all results together
        from collections import namedtuple
        Params = namedtuple('Params', ['mu', 'Sigma', 'w', 'log_likelihoods', 'num_iters'])
        self.params = Params(mu, Sigma, w, log_likelihoods, len(log_likelihoods))
        return self.params

    def plot_log_likelihood(self):
        import pylab as plt
        plt.plot(self.params.log_likelihoods)
        plt.title('Log Likelihood vs iteration plot')
        plt.xlabel('Iterations')
        plt.ylabel('log likelihood')
        plt.show()

    def predict(self, x):
        # posterior probability of each Gaussian for a single point x
        p = lambda mu, s: np.linalg.det(s) ** -0.5 * (2 * np.pi) ** (-len(x) / 2.) \
            * np.exp(-0.5 * np.dot(x - mu, np.dot(np.linalg.inv(s), x - mu)))
        probs = np.array([w * p(mu, s) for mu, s, w in
                          zip(self.params.mu, self.params.Sigma, self.params.w)])
        return probs / np.sum(probs)
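A minimal usage sketch (the file name 'testSet.txt' is hypothetical; any tab-separated file of numeric rows that loadDataSet can parse will do):

import numpy as np

data = np.array(loadDataSet('testSet.txt'))  # hypothetical file name
gmm = GMM(k=4, eps=1e-5)
params = gmm.fit_EM(data, max_iters=1000)
print(params.mu)                 # fitted component means
gmm.plot_log_likelihood()        # log-likelihood should rise monotonically
print(gmm.predict(data[0]))      # posterior responsibilities of one point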