Machine Learning Algorithms: Naive Bayes


A summary of naive Bayes:

  1. The parameters are learned by maximum likelihood estimation (or by Bayesian estimation with Laplace smoothing).
  2. The feature dimensions are assumed to be mutually independent given the label.
  3. The core is Bayes' rule: since p(y|x) ∝ p(x|y)·p(y), compute p(y=yi|x) for every label yi and predict the yi with the largest value.
  4. With x = [x1, x2, ..., xd] and the dimensions treated as conditionally independent, p(x|y) = p(x1|y)·p(x2|y)·...·p(xd|y); note that this is a very strong assumption.
  5. The method is most convenient when the input features are discrete.
The code:
```python
#%% Naive Bayes (for discrete input features)
import numpy as np

class NaiveBayes(object):
    def __init__(self, train_x, train_y):
        self.train_x = train_x
        self.train_y = np.array(train_y)
        self.dimension = len(train_x[0])
        self.n_sample = self.train_y.size
        self.labels = np.unique(self.train_y)
        # Precompute the label priors and the per-dimension
        # conditional probabilities of the features
        self.pre_prob = self.cal_pre_prob()
        self.condi_prob = self.cal_condi_prob()

    # Prior probability p(y) for each label
    def cal_pre_prob(self):
        pre_prob = {}
        for y in self.labels:
            pre_prob[y] = self.train_y.tolist().count(y) / float(self.train_y.size)
        return pre_prob

    # Conditional probability p(x_i = xij | y) for every dimension i,
    # observed value xij, and label y
    def cal_condi_prob(self):
        condi_prob = {}
        dim_x = zip(*self.train_x)
        for i, xi in enumerate(dim_x):
            xi = np.array(xi)
            for xij in np.unique(xi):
                bool_xij = xi == xij
                for y in self.labels:
                    bool_y = self.train_y == y
                    condi_prob[(i, xij, y)] = sum(bool_y & bool_xij) / float(sum(bool_y))
        return condi_prob

    def predict(self, x):
        if len(x) != self.dimension:
            raise ValueError('feature dimension not equal!')
        prob = {}
        for y in self.labels:
            prob[y] = self.pre_prob[y]
            for i, xi in enumerate(x):
                prob[y] *= self.condi_prob[(i, xi, y)]
        # Pick the label with the largest (unnormalized) posterior
        print(prob)
        prob_sum = sum(prob.values())
        max_label, max_prob = None, 0
        for la in prob:
            if prob[la] > max_prob:
                max_prob = prob[la]
                max_label = la
        return max_label, max_prob / float(prob_sum)

def test_NaiveBayes():
    x = [[1, 's'], [1, 'm'], [1, 'm'], [1, 's'], [1, 's'], [2, 's'], [2, 'm'], [2, 'm'],
         [2, 'l'], [2, 'l'], [3, 'l'], [3, 'm'], [3, 'm'], [3, 'l'], [3, 'l']]
    y = [-1, -1, 1, 1, -1, -1, -1] + [1]*7 + [-1]
    cls = NaiveBayes(x, y)
    new_x = [2, 's']
    print(cls.predict(new_x))

test_NaiveBayes()
```
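Note that `cal_condi_prob` uses plain maximum-likelihood ratios, so a feature value never seen together with a given label yields a zero probability (and an unseen value raises a `KeyError` at prediction time). The Laplace-smoothed (Bayesian) estimate mentioned in point 1 avoids this; a minimal sketch, with a hypothetical helper name not in the original code:

```python
def laplace_condi_prob(xs, ys, value, label, alpha=1.0):
    """Smoothed estimate of p(x = value | y = label).

    alpha is the Laplace smoothing constant; n_values is the number
    of distinct values this feature can take in the training data.
    """
    n_values = len(set(xs))
    count_label = sum(1 for y in ys if y == label)
    count_joint = sum(1 for x, y in zip(xs, ys) if x == value and y == label)
    return (count_joint + alpha) / (count_label + alpha * n_values)

# Second feature column and labels of the dataset above
xs = ['s', 'm', 'm', 's', 's', 's', 'm', 'm', 'l', 'l', 'l', 'm', 'm', 'l', 'l']
ys = [-1, -1, 1, 1, -1, -1, -1] + [1]*7 + [-1]
# Unsmoothed: 3/6; smoothed: (3 + 1) / (6 + 1*3) = 4/9
print(laplace_condi_prob(xs, ys, 's', -1))
```

With `alpha = 0` this reduces to the maximum-likelihood estimate used in the class above.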

Running the test prints:
{1: 0.02222222222222222, -1: 0.066666666666666666}
(-1, 0.75000000000000011)
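These numbers can be checked by hand from the training data (a quick sanity check, not part of the original post):

```python
# Priors: 6 of the 15 samples have y = -1, 9 have y = 1
p_neg, p_pos = 6/15, 9/15
# Among the 6 negative samples: x1 == 2 twice, x2 == 's' three times
score_neg = p_neg * (2/6) * (3/6)   # 0.0666...
# Among the 9 positive samples: x1 == 2 three times, x2 == 's' once
score_pos = p_pos * (3/9) * (1/9)   # 0.0222...
# Normalized posterior for y = -1
print(score_neg / (score_neg + score_pos))  # 0.75
```

The unnormalized scores match the printed dictionary, and normalizing gives the reported posterior of 0.75 for the label -1.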
