Mahout Source Code Analysis: Clustering -- The Cluster Classification Model


The cluster classification code lives mainly in the package org.apache.mahout.clustering.classify; it uses a set of clusters together with a clustering policy to classify samples. Let us first look at the ClusterClassifier class.

1. ClusterClassifier

ClusterClassifier has four fields: the file name under which the clustering policy is serialized, the list of cluster models, the cluster model class name, and the clustering policy.

private static final String POLICY_FILE_NAME = "_policy"; // file name under which the clustering policy is serialized
private List<Cluster> models;     // the list of cluster models
private String modelClass;        // the cluster model class name
private ClusteringPolicy policy;  // the clustering policy

For the cluster model, see "mahout0.7源码解析之聚类--聚类模型"; for the clustering policy, see "mahout0.7源码解析之聚类--聚类策略".

This class extends AbstractVectorClassifier and implements the OnlineLearner and Writable interfaces. The functions to focus on are classify, classifyScalar, and train.

classify uses the cluster models inside the ClusterClassifier to classify a sample; the returned vector contains, for each cluster model, that model's classification result for the sample.

classifyScalar is mainly for binary classification and returns the probability that the sample belongs to the first category.

train comes in several overloaded forms; in essence it has the designated cluster model observe (train on) the sample, sometimes together with a weight for that sample.
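
A minimal usage sketch of these three methods under Mahout 0.7. The two cluster centers, the sample values, and the choice of KMeansClusteringPolicy are made up for illustration; Kluster and KMeansClusteringPolicy are the k-means implementations of the Cluster and ClusteringPolicy interfaces:

import java.util.Arrays;
import java.util.List;
import org.apache.mahout.clustering.Cluster;
import org.apache.mahout.clustering.classify.ClusterClassifier;
import org.apache.mahout.clustering.iterator.KMeansClusteringPolicy;
import org.apache.mahout.clustering.kmeans.Kluster;
import org.apache.mahout.common.distance.EuclideanDistanceMeasure;
import org.apache.mahout.math.DenseVector;
import org.apache.mahout.math.Vector;

public class ClusterClassifierExample {
  public static void main(String[] args) {
    // two toy clusters centered at (0,0) and (5,5)
    EuclideanDistanceMeasure measure = new EuclideanDistanceMeasure();
    List<Cluster> models = Arrays.<Cluster>asList(
        new Kluster(new DenseVector(new double[] {0, 0}), 0, measure),
        new Kluster(new DenseVector(new double[] {5, 5}), 1, measure));
    ClusterClassifier classifier =
        new ClusterClassifier(models, new KMeansClusteringPolicy());

    Vector sample = new DenseVector(new double[] {4.5, 5.2});
    // classify returns one pdf value per cluster model
    Vector pdfPerCluster = classifier.classify(sample);
    System.out.println("pdf per cluster: " + pdfPerCluster);
    // with exactly two models, classifyScalar is the probability of the first category
    System.out.println("p(first): " + classifier.classifyScalar(sample));
    // train lets cluster model 1 observe the sample (OnlineLearner interface)
    classifier.train(1, sample);
  }
}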

The class also provides readPolicy and writePolicy, which deserialize and serialize the clustering policy.
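
A hedged round-trip sketch using these two helpers. The directory is hypothetical, and the (policy, path) parameter order of writePolicy is my assumption; judging from the POLICY_FILE_NAME field above, the "_policy" file name appears to be appended inside the helpers, so the argument is the clusters directory itself:

import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.mahout.clustering.classify.ClusterClassifier;
import org.apache.mahout.clustering.iterator.ClusteringPolicy;
import org.apache.mahout.clustering.iterator.KMeansClusteringPolicy;

public class PolicyRoundTrip {
  public static void main(String[] args) throws IOException {
    Path clustersDir = new Path("/tmp/clusters"); // hypothetical clusters directory
    // serialize the policy under clustersDir ...
    ClusterClassifier.writePolicy(new KMeansClusteringPolicy(), clustersDir);
    // ... and read it back
    ClusteringPolicy restored = ClusterClassifier.readPolicy(clustersDir);
    System.out.println(restored.getClass().getSimpleName());
  }
}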


2. ClusterClassificationDriver

ClusterClassificationDriver uses a ClusterClassifier to classify samples, that is, to assign them to the different clusters. It provides both a sequential (single-machine) version and a Map-Reduce version of the algorithm. Again we start from the run function.

public static void run(Path input, Path clusteringOutputPath, Path output,
    Double clusterClassificationThreshold, boolean emitMostLikely,
    boolean runSequential) throws IOException, InterruptedException,
    ClassNotFoundException {
  Configuration conf = new Configuration();
  // choose between the sequential and the Map-Reduce implementation
  if (runSequential) {
    classifyClusterSeq(conf, input, clusteringOutputPath, output,
        clusterClassificationThreshold, emitMostLikely);
  } else {
    classifyClusterMR(conf, input, clusteringOutputPath, output,
        clusterClassificationThreshold, emitMostLikely);
  }
}

runSequential is set to true or false according to the parameters we pass in, and the corresponding implementation is invoked.
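
As an illustration, a call like the following takes the sequential branch (all paths and values here are made up; passing false as the last argument would take the Map-Reduce branch instead):

import org.apache.hadoop.fs.Path;
import org.apache.mahout.clustering.classify.ClusterClassificationDriver;

public class RunExample {
  public static void main(String[] args) throws Exception {
    ClusterClassificationDriver.run(
        new Path("/tmp/points"),     // input: SequenceFiles of VectorWritable samples
        new Path("/tmp/clusters"),   // clustering output: cluster models and the policy
        new Path("/tmp/classified"), // output: samples assigned to clusters
        0.0,                         // clusterClassificationThreshold: 0 keeps every sample
        true,                        // emitMostLikely: emit only the most probable cluster
        true);                       // runSequential: single-machine version
  }
}

We look at the sequential implementation first.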

private static void classifyClusterSeq(Configuration conf, Path input,
    Path clusters, Path output, Double clusterClassificationThreshold,
    boolean emitMostLikely) throws IOException {
  // read the cluster models from the clustering output location
  List<Cluster> clusterModels = populateClusterModels(clusters, conf);
  // read the clustering policy
  ClusteringPolicy policy = ClusterClassifier.readPolicy(finalClustersPath(conf, clusters));
  ClusterClassifier clusterClassifier = new ClusterClassifier(clusterModels, policy);
  selectCluster(input, clusterModels, clusterClassifier, output,
      clusterClassificationThreshold, emitMostLikely);
}

The implementation first deserializes the List<Cluster> and the ClusteringPolicy from their files and builds a ClusterClassifier from them; the actual classification happens in selectCluster.

private static void selectCluster(Path input, List<Cluster> clusterModels,
    ClusterClassifier clusterClassifier, Path output,
    Double clusterClassificationThreshold, boolean emitMostLikely)
    throws IOException {
  Configuration conf = new Configuration();
  SequenceFile.Writer writer = new SequenceFile.Writer(
      input.getFileSystem(conf), conf,
      new Path(output, "part-m-" + 0), IntWritable.class,
      WeightedVectorWritable.class);
  for (VectorWritable vw : new SequenceFileDirValueIterable<VectorWritable>(
      input, PathType.LIST, PathFilters.logsCRCFilter(), conf)) {
    // classify the vector against every cluster
    Vector pdfPerCluster = clusterClassifier.classify(vw.get());
    if (shouldClassify(pdfPerCluster, clusterClassificationThreshold)) {
      classifyAndWrite(clusterModels, clusterClassificationThreshold,
          emitMostLikely, writer, vw, pdfPerCluster);
    }
  }
  writer.close();
}

This function first uses the ClusterClassifier to compute, for each sample, the vector of probabilities that the sample belongs to each cluster. When the maximum entry of this vector exceeds the configured threshold, the previously set emitMostLikely parameter decides whether to emit only the single most probable cluster or every cluster whose probability exceeds the threshold; this is handled by classifyAndWrite, sketched below.

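classifyAndWrite itself is not reproduced in the post; based on its call site above and the mapper logic shown below, the following is a hedged sketch of what it does, not a verbatim copy (in particular the WeightedVectorWritable construction and the >= comparison are my assumptions):

private static void classifyAndWrite(List<Cluster> clusterModels,
    Double clusterClassificationThreshold, boolean emitMostLikely,
    SequenceFile.Writer writer, VectorWritable vw, Vector pdfPerCluster)
    throws IOException {
  if (emitMostLikely) {
    // emit a single record for the most probable cluster, keyed by its cluster id
    int maxValueIndex = pdfPerCluster.maxValueIndex();
    writer.append(new IntWritable(clusterModels.get(maxValueIndex).getId()),
        new WeightedVectorWritable(pdfPerCluster.maxValue(), vw.get()));
  } else {
    // emit one record per cluster whose pdf exceeds the threshold
    for (int i = 0; i < pdfPerCluster.size(); i++) {
      double pdf = pdfPerCluster.get(i);
      if (pdf >= clusterClassificationThreshold) {
        writer.append(new IntWritable(clusterModels.get(i).getId()),
            new WeightedVectorWritable(pdf, vw.get()));
      }
    }
  }
}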

In the Map-Reduce version of the algorithm, the map class is ClusterClassificationMapper and the number of reduce tasks is 0.

private static void classifyClusterMR(Configuration conf, Path input,
    Path clustersIn, Path output,
    Double clusterClassificationThreshold, boolean emitMostLikely)
    throws IOException, InterruptedException, ClassNotFoundException {
  // hand the threshold, the emitMostLikely flag and the clusters location to the mappers
  conf.setFloat(OUTLIER_REMOVAL_THRESHOLD,
      clusterClassificationThreshold.floatValue());
  conf.setBoolean(EMIT_MOST_LIKELY, emitMostLikely);
  conf.set(CLUSTERS_IN, clustersIn.toUri().toString());
  Job job = new Job(conf,
      "Cluster Classification Driver running over input: " + input);
  job.setJarByClass(ClusterClassificationDriver.class);
  job.setInputFormatClass(SequenceFileInputFormat.class);
  job.setOutputFormatClass(SequenceFileOutputFormat.class);
  job.setMapperClass(ClusterClassificationMapper.class);
  job.setNumReduceTasks(0); // map-only job
  job.setOutputKeyClass(IntWritable.class);
  job.setOutputValueClass(WeightedVectorWritable.class);
  FileInputFormat.addInputPath(job, input);
  FileOutputFormat.setOutputPath(job, output);
  if (!job.waitForCompletion(true)) {
    throw new InterruptedException(
        "Cluster Classification Driver Job failed processing " + input);
  }
}
The mapper works almost exactly like the sequential version: it too deserializes the List<Cluster> and ClusteringPolicy from files, builds a ClusterClassifier from them, and then uses it to classify the samples. For the details, compare the sequential version above; the deserialization step is sketched after the map function below.
@Override
protected void map(WritableComparable<?> key, VectorWritable vw,
    Context context) throws IOException, InterruptedException {
  if (!clusterModels.isEmpty()) {
    // pdf of the sample under every cluster model
    Vector pdfPerCluster = clusterClassifier.classify(vw.get());
    if (shouldClassify(pdfPerCluster)) {
      if (emitMostLikely) {
        // emit only the single most probable cluster
        int maxValueIndex = pdfPerCluster.maxValueIndex();
        write(vw, context, maxValueIndex);
      } else {
        // emit every cluster whose pdf exceeds the threshold
        writeAllAboveThreshold(vw, context, pdfPerCluster);
      }
    }
  }
}
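
The deserialization itself happens in the mapper's setup method, which reads back the three configuration keys set in classifyClusterMR and rebuilds the classifier the same way the sequential version does. The body below is my reconstruction of that step, not a verbatim copy:

@Override
protected void setup(Context context) throws IOException, InterruptedException {
  super.setup(context);
  Configuration conf = context.getConfiguration();
  // read back the values the driver stored in the Configuration
  emitMostLikely = conf.getBoolean(EMIT_MOST_LIKELY, false);
  threshold = conf.getFloat(OUTLIER_REMOVAL_THRESHOLD, 0.0f);
  Path clustersIn = new Path(conf.get(CLUSTERS_IN));
  // rebuild the classifier exactly as classifyClusterSeq does
  clusterModels = populateClusterModels(clustersIn, conf);
  ClusteringPolicy policy = ClusterClassifier.readPolicy(finalClustersPath(conf, clustersIn));
  clusterClassifier = new ClusterClassifier(clusterModels, policy);
}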

3. WeightedVectorWritable

WeightedVectorWritable is a VectorWritable carrying a weight; it exists because a sample sometimes needs to be given a weight of its own, although in most cases this is not needed. WeightedPropertyVectorWritable extends WeightedVectorWritable and, on top of the parent class, adds a properties field of type Map<Text, Text>.
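
To make the structure concrete, here is a hedged sketch of the essential shape of the two classes (the field names and the serialization details are my assumptions; getters, setters, and the rest of the Writable plumbing are omitted):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.Map;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.Writable;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.VectorWritable;

public class WeightedVectorWritable implements Writable {
  private double weight; // the weight attached to the sample
  private Vector vector; // the sample itself

  @Override
  public void write(DataOutput out) throws IOException {
    out.writeDouble(weight);               // weight first ...
    new VectorWritable(vector).write(out); // ... then the vector
  }

  @Override
  public void readFields(DataInput in) throws IOException {
    weight = in.readDouble();
    VectorWritable vw = new VectorWritable();
    vw.readFields(in);
    vector = vw.get();
  }
}

// WeightedPropertyVectorWritable adds a property map on top of the parent
class WeightedPropertyVectorWritable extends WeightedVectorWritable {
  // extra key/value metadata; its write()/readFields() would serialize this
  // map after the parent's weight and vector (omitted here)
  private Map<Text, Text> properties;
}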