Mahout Distributed Programming: K-means Clustering


Preface
Mahout is a framework for developing machine-learning programs on top of Hadoop. It packages three major families of machine-learning algorithms, clustering among them. K-means is one of the clustering algorithms we reach for most often: faced with an unfamiliar dataset, we typically cluster it first to see what structure it contains.
This article walks through implementing distributed K-means with Mahout.


Contents

  1. The K-means clustering algorithm
  2. The Mahout development environment
  3. Implementing K-means with Mahout
  4. Visualizing the results with R
  5. Template project uploaded to GitHub

1. The K-means clustering algorithm

Cluster analysis is one of the central problems in data mining and machine learning, with wide applications in data mining, pattern recognition, decision support, machine learning, and image segmentation; it is among the most important data-analysis methods. Clustering searches a given dataset for subsets of similar data: each subset forms a cluster, and data within the same cluster are more similar to each other than to data in other clusters. Clustering algorithms broadly fall into partition-based, hierarchical, density-based, grid-based, and model-based methods.
K-means is the most widely used partition-based clustering algorithm. It divides n objects into k clusters so that similarity within each cluster is high; similarity is measured against the mean of the objects in a cluster. It resembles the expectation-maximization algorithm for mixtures of normal distributions, in that both try to find the centers of the natural clusters in the data.
The algorithm first picks k objects at random, each initially representing the mean (center) of one cluster. Each remaining object is assigned to the nearest cluster according to its distance from the cluster centers, and each cluster's mean is then recomputed. This repeats until the criterion function converges.
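The loop described above is easy to state in plain Java. The following is a minimal single-machine sketch of the assignment and update steps, written only to illustrate the idea; it is not the Mahout implementation used later in this article:

```java
import java.util.Arrays;

// Minimal single-machine k-means sketch (illustrative only).
public class KMeansSketch {

    // Assign each point to the nearest centroid (squared Euclidean distance).
    static int[] assign(double[][] points, double[][] centroids) {
        int[] labels = new int[points.length];
        for (int i = 0; i < points.length; i++) {
            double best = Double.MAX_VALUE;
            for (int c = 0; c < centroids.length; c++) {
                double d = 0;
                for (int j = 0; j < points[i].length; j++) {
                    double diff = points[i][j] - centroids[c][j];
                    d += diff * diff;
                }
                if (d < best) { best = d; labels[i] = c; }
            }
        }
        return labels;
    }

    // Recompute each centroid as the mean of the points assigned to it.
    static double[][] update(double[][] points, int[] labels, int k) {
        int dim = points[0].length;
        double[][] centroids = new double[k][dim];
        int[] counts = new int[k];
        for (int i = 0; i < points.length; i++) {
            counts[labels[i]]++;
            for (int j = 0; j < dim; j++) centroids[labels[i]][j] += points[i][j];
        }
        for (int c = 0; c < k; c++)
            for (int j = 0; j < dim; j++)
                if (counts[c] > 0) centroids[c][j] /= counts[c];
        return centroids;
    }

    public static void main(String[] args) {
        // Two obvious clusters around (0,0) and (10,10); seeds chosen arbitrarily.
        double[][] points = {{0, 0}, {1, 1}, {0, 1}, {10, 10}, {11, 11}, {10, 11}};
        double[][] centroids = {{0, 0}, {10, 10}};
        int[] labels = null;
        for (int iter = 0; iter < 10; iter++) {  // fixed iteration cap for simplicity
            labels = assign(points, centroids);
            centroids = update(points, labels, 2);
        }
        System.out.println(Arrays.toString(labels));  // -> [0, 0, 0, 1, 1, 1]
    }
}
```

Mahout distributes exactly these two steps over MapReduce: mappers do the assignment, reducers recompute the centers.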


2. The Mahout development environment

This continues the previous article: Mahout Distributed Programming: Item-based Collaborative Filtering (ItemCF).
All environment variables and system configuration are the same as in that article.


3. Implementing K-means with Mahout

Steps:

  1. Prepare the data file: randomData.csv
  2. Java program: KmeansHadoop.java
  3. Run the program
  4. Interpret the clustering results
  5. Directories produced on HDFS

1). Prepare the data file: randomData.csv
The data file randomData.csv was generated with R's random normal-distribution functions. For the single-machine, in-memory experiment, see the article "Building a Mahout Project with Maven".
Raw data file (only an excerpt is shown here):

~ vi datafile/randomData.csv

-0.883033363823402 -3.31967192630249
-2.39312626419456 3.34726861118871
2.66976353341256 1.85144276077058
-1.09922906899594 -6.06261735207489
-4.36361936997216 1.90509905380532
-0.00351835125495037 -0.610105996559153
-2.9962958796338 -3.60959839525735
-3.27529418132066 0.0230099799641799
2.17665594420569 6.77290756817957
-2.47862038335637 2.53431833167278
5.53654901906814 2.65089785582474
5.66257474538338 6.86783609641077
-0.558946883114376 1.22332819416237
5.11728525486132 3.74663871584768
1.91240516693351 2.95874731384062
-2.49747101306535 2.05006504756875
3.98781883213459 1.00780938946366
5.47470532716682 5.35084411045171
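The R script itself is not reproduced here. For readers without R, comparable two-dimensional test data can be drawn from a few normal distributions in plain Java. The centers, point counts, and seed below are arbitrary illustrative choices, not the values used to produce randomData.csv:

```java
import java.util.Random;

// Illustrative stand-in for the article's R script: emit two-dimensional
// points around a few centers, one point per line, space-separated as
// Mahout's InputDriver expects.
public class GenRandomData {
    public static void main(String[] args) {
        Random r = new Random(42);                        // fixed seed for repeatability
        double[][] centers = {{0, 0}, {5, 5}, {-3, 2}};   // assumed cluster centers
        StringBuilder sb = new StringBuilder();
        for (double[] c : centers) {
            for (int i = 0; i < 5; i++) {                 // a few points per center
                double x = c[0] + r.nextGaussian();
                double y = c[1] + r.nextGaussian();
                sb.append(x).append(' ').append(y).append('\n');
            }
        }
        System.out.print(sb);                             // redirect to randomData.csv
    }
}
```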

Note: Mahout's k-means implementation uses whitespace as the default field separator, so I converted the comma-separated data file to a space-separated one.
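That conversion is a one-character text substitution. A throwaway Java helper (hypothetical, not part of the article's project) could look like this:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// One-off helper: rewrite a comma-separated file as the space-separated
// format Mahout's InputDriver expects. (Illustrative; a sed one-liner
// would do the same job.)
public class CsvToSpaces {
    public static void main(String[] args) throws IOException {
        Path in = Paths.get("datafile/randomData.csv");   // assumed location
        String converted = new String(Files.readAllBytes(in), StandardCharsets.UTF_8)
                .replace(',', ' ');                       // commas become spaces
        Files.write(in, converted.getBytes(StandardCharsets.UTF_8));
    }
}
```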
2). Java program: KmeansHadoop.java
For the algorithmic details of k-means, see Mahout in Action.

package org.conan.mymahout.cluster08;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;
import org.apache.mahout.clustering.conversion.InputDriver;
import org.apache.mahout.clustering.kmeans.KMeansDriver;
import org.apache.mahout.clustering.kmeans.RandomSeedGenerator;
import org.apache.mahout.common.distance.DistanceMeasure;
import org.apache.mahout.common.distance.EuclideanDistanceMeasure;
import org.apache.mahout.utils.clustering.ClusterDumper;
import org.conan.mymahout.hdfs.HdfsDAO;

public class KmeansHadoop {

    private static final String HDFS = "hdfs://192.168.1.210:9000";

    public static void main(String[] args) throws Exception {
        String localFile = "datafile/randomData.csv";
        String inPath = HDFS + "/user/hdfs/mix_data";
        String seqFile = inPath + "/seqfile";
        String seeds = inPath + "/seeds";
        String outPath = inPath + "/result/";
        String clusteredPoints = outPath + "/clusteredPoints";

        JobConf conf = config();

        // Upload the data file to HDFS, replacing any output from a previous run.
        HdfsDAO hdfs = new HdfsDAO(HDFS, conf);
        hdfs.rmr(inPath);
        hdfs.mkdirs(inPath);
        hdfs.copyFile(localFile, inPath);
        hdfs.ls(inPath);

        // Convert the text input into Mahout vectors stored as a SequenceFile.
        InputDriver.runJob(new Path(inPath), new Path(seqFile),
                "org.apache.mahout.math.RandomAccessSparseVector");

        // Pick k random seed centers, then run the k-means driver:
        // convergence delta 0.01, at most 10 iterations, with clustering of points.
        int k = 3;
        Path seqFilePath = new Path(seqFile);
        Path clustersSeeds = new Path(seeds);
        DistanceMeasure measure = new EuclideanDistanceMeasure();
        clustersSeeds = RandomSeedGenerator.buildRandom(conf, seqFilePath, clustersSeeds, k, measure);
        KMeansDriver.run(conf, seqFilePath, clustersSeeds, new Path(outPath),
                measure, 0.01, 10, true, 0.01, false);

        // Dump the final clusters and the point assignments to the console.
        Path outGlobPath = new Path(outPath, "clusters-*-final");
        Path clusteredPointsPath = new Path(clusteredPoints);
        System.out.printf("Dumping out clusters from clusters: %s and clusteredPoints: %s\n",
                outGlobPath, clusteredPointsPath);
        ClusterDumper clusterDumper = new ClusterDumper(outGlobPath, clusteredPointsPath);
        clusterDumper.printClusters(null);
    }

    public static JobConf config() {
        JobConf conf = new JobConf(KmeansHadoop.class);
        conf.setJobName("KmeansHadoop"); // was a copy-paste leftover ("ItemCFHadoop") in the original
        conf.addResource("classpath:/hadoop/core-site.xml");
        conf.addResource("classpath:/hadoop/hdfs-site.xml");
        conf.addResource("classpath:/hadoop/mapred-site.xml");
        return conf;
    }
}

3). Run the program
Console output:

Delete: hdfs://192.168.1.210:9000/user/hdfs/mix_data
Create: hdfs://192.168.1.210:9000/user/hdfs/mix_data
copy from: datafile/randomData.csv to hdfs://192.168.1.210:9000/user/hdfs/mix_data
ls: hdfs://192.168.1.210:9000/user/hdfs/mix_data
==========================================================
name: hdfs://192.168.1.210:9000/user/hdfs/mix_data/randomData.csv, folder: false, size: 36655
==========================================================
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
2013-10-14 15:39:31 org.apache.hadoop.util.NativeCodeLoader
WARN: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2013-10-14 15:39:31 org.apache.hadoop.mapred.JobClient copyAndConfigureFiles
WARN: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
2013-10-14 15:39:31 org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
2013-10-14 15:39:31 org.apache.hadoop.io.compress.snappy.LoadSnappy
WARN: Snappy native library not loaded
2013-10-14 15:39:31 org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Running job: job_local_0001
2013-10-14 15:39:31 org.apache.hadoop.mapred.Task initialize
INFO: Using ResourceCalculatorPlugin : null
2013-10-14 15:39:31 org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
2013-10-14 15:39:31 org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0001_m_000000_0 is allowed to commit now
2013-10-14 15:39:31 org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0001_m_000000_0' to hdfs://192.168.1.210:9000/user/hdfs/mix_data/seqfile
2013-10-14 15:39:31 org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0001_m_000000_0' done.
2013-10-14 15:39:32 org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: map 100% reduce 0%
2013-10-14 15:39:32 org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Job complete: job_local_0001
2013-10-14 15:39:32 org.apache.hadoop.mapred.Counters log
INFO: Counters: 11
  File Output Format Counters
    Bytes Written=31390
  File Input Format Counters
    Bytes Read=36655
  FileSystemCounters
    FILE_BYTES_READ=475910
    HDFS_BYTES_READ=36655
    FILE_BYTES_WRITTEN=506350
    HDFS_BYTES_WRITTEN=68045
  Map-Reduce Framework
    Map input records=1000
    Spilled Records=0
    Total committed heap usage (bytes)=188284928
    SPLIT_RAW_BYTES=124
    Map output records=1000
2013-10-14 15:39:32 org.apache.hadoop.io.compress.CodecPool getCompressor
INFO: Got brand-new compressor
2013-10-14 15:39:32 org.apache.hadoop.io.compress.CodecPool getDecompressor
INFO: Got brand-new decompressor
2013-10-14 15:39:32 org.apache.hadoop.mapred.JobClient copyAndConfigureFiles
WARN: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
2013-10-14 15:39:32 org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process : 1
2013-10-14 15:39:32 org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Running job: job_local_0002
2013-10-14 15:39:32 org.apache.hadoop.mapred.MapTask$MapOutputBuffer
INFO: io.sort.mb = 100
INFO: data buffer = 79691776/99614720
INFO: record buffer = 262144/327680
2013-10-14 15:39:33 org.apache.hadoop.mapred.MapTask$MapOutputBuffer flush
INFO: Starting flush of map output
2013-10-14 15:39:33 org.apache.hadoop.mapred.MapTask$MapOutputBuffer sortAndSpill
INFO: Finished spill 0
2013-10-14 15:39:33 org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0002_m_000000_0 is done. And is in the process of commiting
2013-10-14 15:39:33 org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0002_m_000000_0' done.
2013-10-14 15:39:33 org.apache.hadoop.mapred.Merger$MergeQueue merge
INFO: Merging 1 sorted segments
INFO: Down to the last merge-pass, with 1 segments left of total size: 623 bytes
2013-10-14 15:39:33 org.apache.hadoop.mapred.Task done
INFO: Task:attempt_local_0002_r_000000_0 is done. And is in the process of commiting
2013-10-14 15:39:33 org.apache.hadoop.mapred.Task commit
INFO: Task attempt_local_0002_r_000000_0 is allowed to commit now
2013-10-14 15:39:33 org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask
INFO: Saved output of task 'attempt_local_0002_r_000000_0' to hdfs://192.168.1.210:9000/user/hdfs/mix_data/result/clusters-1
2013-10-14 15:39:33 org.apache.hadoop.mapred.Task sendDone
INFO: Task 'attempt_local_0002_r_000000_0' done.
2013-10-14 15:39:33 org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: map 100% reduce 100%
2013-10-14 15:39:33 org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Job complete: job_local_0002
2013-10-14 15:39:33 org.apache.hadoop.mapred.Counters log
INFO: Counters: 19
  File Output Format Counters
    Bytes Written=695
  FileSystemCounters
    FILE_BYTES_READ=4239303
    HDFS_BYTES_READ=203963
    FILE_BYTES_WRITTEN=4457168
    HDFS_BYTES_WRITTEN=140321
  File Input Format Counters
    Bytes Read=31390
  Map-Reduce Framework
    Map output materialized bytes=627
    Map input records=1000
    Reduce shuffle bytes=0
    Spilled Records=6
    Map output bytes=612
    Total committed heap usage (bytes)=376569856
    SPLIT_RAW_BYTES=130
    Combine input records=0
    Reduce input records=3
    Reduce input groups=3
    Combine output records=0
    Reduce output records=3
    Map output records=3

The remaining jobs, job_local_0003 through job_local_0007, are k-means iterations 2 through 6. Each repeats the same map/reduce pattern with near-identical counters, saving its output to clusters-2 through clusters-6 respectively; that repetitive (and truncated) portion of the log is elided here.
org.apache.hadoop.mapred.Counters log信息:     Reduce shuffle bytes=02013-10-14 15:39:39 org.apache.hadoop.mapred.Counters log信息:     Spilled Records=62013-10-14 15:39:39 org.apache.hadoop.mapred.Counters log信息:     Map output bytes=6662013-10-14 15:39:39 org.apache.hadoop.mapred.Counters log信息:     Total committed heap usage (bytes)=13733724162013-10-14 15:39:39 org.apache.hadoop.mapred.Counters log信息:     SPLIT_RAW_BYTES=1302013-10-14 15:39:39 org.apache.hadoop.mapred.Counters log信息:     Combine input records=02013-10-14 15:39:39 org.apache.hadoop.mapred.Counters log信息:     Reduce input records=32013-10-14 15:39:39 org.apache.hadoop.mapred.Counters log信息:     Reduce input groups=32013-10-14 15:39:39 org.apache.hadoop.mapred.Counters log信息:     Combine output records=02013-10-14 15:39:39 org.apache.hadoop.mapred.Counters log信息:     Reduce output records=32013-10-14 15:39:39 org.apache.hadoop.mapred.Counters log信息:     Map output records=32013-10-14 15:39:39 org.apache.hadoop.mapred.JobClient copyAndConfigureFiles警告: Use GenericOptionsParser for parsing the arguments. 
Applications should implement Tool for the same.2013-10-14 15:39:39 org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus信息: Total input paths to process : 12013-10-14 15:39:39 org.apache.hadoop.mapred.JobClient monitorAndPrintJob信息: Running job: job_local_00082013-10-14 15:39:39 org.apache.hadoop.mapred.Task initialize信息:  Using ResourceCalculatorPlugin : null2013-10-14 15:39:39 org.apache.hadoop.mapred.MapTask$MapOutputBuffer 信息: io.sort.mb = 1002013-10-14 15:39:39 org.apache.hadoop.mapred.MapTask$MapOutputBuffer 信息: data buffer = 79691776/996147202013-10-14 15:39:39 org.apache.hadoop.mapred.MapTask$MapOutputBuffer 信息: record buffer = 262144/3276802013-10-14 15:39:39 org.apache.hadoop.mapred.MapTask$MapOutputBuffer flush信息: Starting flush of map output2013-10-14 15:39:40 org.apache.hadoop.mapred.MapTask$MapOutputBuffer sortAndSpill信息: Finished spill 02013-10-14 15:39:40 org.apache.hadoop.mapred.Task done信息: Task:attempt_local_0008_m_000000_0 is done. And is in the process of commiting2013-10-14 15:39:40 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: 2013-10-14 15:39:40 org.apache.hadoop.mapred.Task sendDone信息: Task 'attempt_local_0008_m_000000_0' done.2013-10-14 15:39:40 org.apache.hadoop.mapred.Task initialize信息:  Using ResourceCalculatorPlugin : null2013-10-14 15:39:40 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: 2013-10-14 15:39:40 org.apache.hadoop.mapred.Merger$MergeQueue merge信息: Merging 1 sorted segments2013-10-14 15:39:40 org.apache.hadoop.mapred.Merger$MergeQueue merge信息: Down to the last merge-pass, with 1 segments left of total size: 677 bytes2013-10-14 15:39:40 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: 2013-10-14 15:39:40 org.apache.hadoop.mapred.Task done信息: Task:attempt_local_0008_r_000000_0 is done. 
And is in the process of commiting2013-10-14 15:39:40 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: 2013-10-14 15:39:40 org.apache.hadoop.mapred.Task commit信息: Task attempt_local_0008_r_000000_0 is allowed to commit now2013-10-14 15:39:40 org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask信息: Saved output of task 'attempt_local_0008_r_000000_0' to hdfs://192.168.1.210:9000/user/hdfs/mix_data/result/clusters-72013-10-14 15:39:40 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: reduce > reduce2013-10-14 15:39:40 org.apache.hadoop.mapred.Task sendDone信息: Task 'attempt_local_0008_r_000000_0' done.2013-10-14 15:39:40 org.apache.hadoop.mapred.JobClient monitorAndPrintJob信息:  map 100% reduce 100%2013-10-14 15:39:40 org.apache.hadoop.mapred.JobClient monitorAndPrintJob信息: Job complete: job_local_00082013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息: Counters: 192013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:   File Output Format Counters 2013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:     Bytes Written=6952013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:   FileSystemCounters2013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:     FILE_BYTES_READ=239685572013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:     HDFS_BYTES_READ=6059432013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:     FILE_BYTES_WRITTEN=251246242013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:     HDFS_BYTES_WRITTEN=1509892013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:   File Input Format Counters 2013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:     Bytes Read=313902013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:   Map-Reduce Framework2013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:     Map output materialized bytes=6812013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:     Map input records=10002013-10-14 15:39:40 
org.apache.hadoop.mapred.Counters log信息:     Reduce shuffle bytes=02013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:     Spilled Records=62013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:     Map output bytes=6662013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:     Total committed heap usage (bytes)=15727329282013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:     SPLIT_RAW_BYTES=1302013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:     Combine input records=02013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:     Reduce input records=32013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:     Reduce input groups=32013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:     Combine output records=02013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:     Reduce output records=32013-10-14 15:39:40 org.apache.hadoop.mapred.Counters log信息:     Map output records=32013-10-14 15:39:41 org.apache.hadoop.mapred.JobClient copyAndConfigureFiles警告: Use GenericOptionsParser for parsing the arguments. 
Applications should implement Tool for the same.2013-10-14 15:39:41 org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus信息: Total input paths to process : 12013-10-14 15:39:41 org.apache.hadoop.mapred.JobClient monitorAndPrintJob信息: Running job: job_local_00092013-10-14 15:39:41 org.apache.hadoop.mapred.Task initialize信息:  Using ResourceCalculatorPlugin : null2013-10-14 15:39:41 org.apache.hadoop.mapred.MapTask$MapOutputBuffer 信息: io.sort.mb = 1002013-10-14 15:39:41 org.apache.hadoop.mapred.MapTask$MapOutputBuffer 信息: data buffer = 79691776/996147202013-10-14 15:39:41 org.apache.hadoop.mapred.MapTask$MapOutputBuffer 信息: record buffer = 262144/3276802013-10-14 15:39:41 org.apache.hadoop.mapred.MapTask$MapOutputBuffer flush信息: Starting flush of map output2013-10-14 15:39:41 org.apache.hadoop.mapred.MapTask$MapOutputBuffer sortAndSpill信息: Finished spill 02013-10-14 15:39:41 org.apache.hadoop.mapred.Task done信息: Task:attempt_local_0009_m_000000_0 is done. And is in the process of commiting2013-10-14 15:39:41 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: 2013-10-14 15:39:41 org.apache.hadoop.mapred.Task sendDone信息: Task 'attempt_local_0009_m_000000_0' done.2013-10-14 15:39:41 org.apache.hadoop.mapred.Task initialize信息:  Using ResourceCalculatorPlugin : null2013-10-14 15:39:41 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: 2013-10-14 15:39:41 org.apache.hadoop.mapred.Merger$MergeQueue merge信息: Merging 1 sorted segments2013-10-14 15:39:41 org.apache.hadoop.mapred.Merger$MergeQueue merge信息: Down to the last merge-pass, with 1 segments left of total size: 677 bytes2013-10-14 15:39:41 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: 2013-10-14 15:39:41 org.apache.hadoop.mapred.Task done信息: Task:attempt_local_0009_r_000000_0 is done. 
And is in the process of commiting2013-10-14 15:39:41 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: 2013-10-14 15:39:41 org.apache.hadoop.mapred.Task commit信息: Task attempt_local_0009_r_000000_0 is allowed to commit now2013-10-14 15:39:41 org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask信息: Saved output of task 'attempt_local_0009_r_000000_0' to hdfs://192.168.1.210:9000/user/hdfs/mix_data/result/clusters-82013-10-14 15:39:41 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: reduce > reduce2013-10-14 15:39:41 org.apache.hadoop.mapred.Task sendDone信息: Task 'attempt_local_0009_r_000000_0' done.2013-10-14 15:39:42 org.apache.hadoop.mapred.JobClient monitorAndPrintJob信息:  map 100% reduce 100%2013-10-14 15:39:42 org.apache.hadoop.mapred.JobClient monitorAndPrintJob信息: Job complete: job_local_00092013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息: Counters: 192013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:   File Output Format Counters 2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:     Bytes Written=6952013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:   FileSystemCounters2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:     FILE_BYTES_READ=272567752013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:     HDFS_BYTES_READ=6736692013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:     FILE_BYTES_WRITTEN=285691922013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:     HDFS_BYTES_WRITTEN=1527672013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:   File Input Format Counters 2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:     Bytes Read=313902013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:   Map-Reduce Framework2013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:     Map output materialized bytes=6812013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:     Map input records=10002013-10-14 15:39:42 
org.apache.hadoop.mapred.Counters log信息:     Reduce shuffle bytes=02013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:     Spilled Records=62013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:     Map output bytes=6662013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:     Total committed heap usage (bytes)=17720934402013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:     SPLIT_RAW_BYTES=1302013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:     Combine input records=02013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:     Reduce input records=32013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:     Reduce input groups=32013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:     Combine output records=02013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:     Reduce output records=32013-10-14 15:39:42 org.apache.hadoop.mapred.Counters log信息:     Map output records=32013-10-14 15:39:42 org.apache.hadoop.mapred.JobClient copyAndConfigureFiles警告: Use GenericOptionsParser for parsing the arguments. 
Applications should implement Tool for the same.2013-10-14 15:39:42 org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus信息: Total input paths to process : 12013-10-14 15:39:42 org.apache.hadoop.mapred.JobClient monitorAndPrintJob信息: Running job: job_local_00102013-10-14 15:39:42 org.apache.hadoop.mapred.Task initialize信息:  Using ResourceCalculatorPlugin : null2013-10-14 15:39:42 org.apache.hadoop.mapred.MapTask$MapOutputBuffer 信息: io.sort.mb = 1002013-10-14 15:39:42 org.apache.hadoop.mapred.MapTask$MapOutputBuffer 信息: data buffer = 79691776/996147202013-10-14 15:39:42 org.apache.hadoop.mapred.MapTask$MapOutputBuffer 信息: record buffer = 262144/3276802013-10-14 15:39:42 org.apache.hadoop.mapred.MapTask$MapOutputBuffer flush信息: Starting flush of map output2013-10-14 15:39:42 org.apache.hadoop.mapred.MapTask$MapOutputBuffer sortAndSpill信息: Finished spill 02013-10-14 15:39:42 org.apache.hadoop.mapred.Task done信息: Task:attempt_local_0010_m_000000_0 is done. And is in the process of commiting2013-10-14 15:39:42 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: 2013-10-14 15:39:42 org.apache.hadoop.mapred.Task sendDone信息: Task 'attempt_local_0010_m_000000_0' done.2013-10-14 15:39:42 org.apache.hadoop.mapred.Task initialize信息:  Using ResourceCalculatorPlugin : null2013-10-14 15:39:42 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: 2013-10-14 15:39:42 org.apache.hadoop.mapred.Merger$MergeQueue merge信息: Merging 1 sorted segments2013-10-14 15:39:42 org.apache.hadoop.mapred.Merger$MergeQueue merge信息: Down to the last merge-pass, with 1 segments left of total size: 677 bytes2013-10-14 15:39:42 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: 2013-10-14 15:39:42 org.apache.hadoop.mapred.Task done信息: Task:attempt_local_0010_r_000000_0 is done. 
And is in the process of commiting2013-10-14 15:39:42 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: 2013-10-14 15:39:42 org.apache.hadoop.mapred.Task commit信息: Task attempt_local_0010_r_000000_0 is allowed to commit now2013-10-14 15:39:42 org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask信息: Saved output of task 'attempt_local_0010_r_000000_0' to hdfs://192.168.1.210:9000/user/hdfs/mix_data/result/clusters-92013-10-14 15:39:42 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: reduce > reduce2013-10-14 15:39:42 org.apache.hadoop.mapred.Task sendDone信息: Task 'attempt_local_0010_r_000000_0' done.2013-10-14 15:39:43 org.apache.hadoop.mapred.JobClient monitorAndPrintJob信息:  map 100% reduce 100%2013-10-14 15:39:43 org.apache.hadoop.mapred.JobClient monitorAndPrintJob信息: Job complete: job_local_00102013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息: Counters: 192013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:   File Output Format Counters 2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:     Bytes Written=6952013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:   FileSystemCounters2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:     FILE_BYTES_READ=305449932013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:     HDFS_BYTES_READ=7410072013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:     FILE_BYTES_WRITTEN=320137602013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:     HDFS_BYTES_WRITTEN=1545452013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:   File Input Format Counters 2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:     Bytes Read=313902013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:   Map-Reduce Framework2013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:     Map output materialized bytes=6812013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:     Map input records=10002013-10-14 15:39:43 
org.apache.hadoop.mapred.Counters log信息:     Reduce shuffle bytes=02013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:     Spilled Records=62013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:     Map output bytes=6662013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:     Total committed heap usage (bytes)=19667353602013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:     SPLIT_RAW_BYTES=1302013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:     Combine input records=02013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:     Reduce input records=32013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:     Reduce input groups=32013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:     Combine output records=02013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:     Reduce output records=32013-10-14 15:39:43 org.apache.hadoop.mapred.Counters log信息:     Map output records=32013-10-14 15:39:43 org.apache.hadoop.mapred.JobClient copyAndConfigureFiles警告: Use GenericOptionsParser for parsing the arguments. 
Applications should implement Tool for the same.2013-10-14 15:39:43 org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus信息: Total input paths to process : 12013-10-14 15:39:43 org.apache.hadoop.mapred.JobClient monitorAndPrintJob信息: Running job: job_local_00112013-10-14 15:39:43 org.apache.hadoop.mapred.Task initialize信息:  Using ResourceCalculatorPlugin : null2013-10-14 15:39:43 org.apache.hadoop.mapred.MapTask$MapOutputBuffer 信息: io.sort.mb = 1002013-10-14 15:39:43 org.apache.hadoop.mapred.MapTask$MapOutputBuffer 信息: data buffer = 79691776/996147202013-10-14 15:39:43 org.apache.hadoop.mapred.MapTask$MapOutputBuffer 信息: record buffer = 262144/3276802013-10-14 15:39:43 org.apache.hadoop.mapred.MapTask$MapOutputBuffer flush信息: Starting flush of map output2013-10-14 15:39:43 org.apache.hadoop.mapred.MapTask$MapOutputBuffer sortAndSpill信息: Finished spill 02013-10-14 15:39:43 org.apache.hadoop.mapred.Task done信息: Task:attempt_local_0011_m_000000_0 is done. And is in the process of commiting2013-10-14 15:39:43 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: 2013-10-14 15:39:43 org.apache.hadoop.mapred.Task sendDone信息: Task 'attempt_local_0011_m_000000_0' done.2013-10-14 15:39:43 org.apache.hadoop.mapred.Task initialize信息:  Using ResourceCalculatorPlugin : null2013-10-14 15:39:43 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: 2013-10-14 15:39:43 org.apache.hadoop.mapred.Merger$MergeQueue merge信息: Merging 1 sorted segments2013-10-14 15:39:43 org.apache.hadoop.mapred.Merger$MergeQueue merge信息: Down to the last merge-pass, with 1 segments left of total size: 677 bytes2013-10-14 15:39:43 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: 2013-10-14 15:39:43 org.apache.hadoop.mapred.Task done信息: Task:attempt_local_0011_r_000000_0 is done. 
And is in the process of commiting2013-10-14 15:39:43 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: 2013-10-14 15:39:43 org.apache.hadoop.mapred.Task commit信息: Task attempt_local_0011_r_000000_0 is allowed to commit now2013-10-14 15:39:43 org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask信息: Saved output of task 'attempt_local_0011_r_000000_0' to hdfs://192.168.1.210:9000/user/hdfs/mix_data/result/clusters-102013-10-14 15:39:43 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: reduce > reduce2013-10-14 15:39:43 org.apache.hadoop.mapred.Task sendDone信息: Task 'attempt_local_0011_r_000000_0' done.2013-10-14 15:39:44 org.apache.hadoop.mapred.JobClient monitorAndPrintJob信息:  map 100% reduce 100%2013-10-14 15:39:44 org.apache.hadoop.mapred.JobClient monitorAndPrintJob信息: Job complete: job_local_00112013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息: Counters: 192013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:   File Output Format Counters 2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:     Bytes Written=6952013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:   FileSystemCounters2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:     FILE_BYTES_READ=338332112013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:     HDFS_BYTES_READ=8083452013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:     FILE_BYTES_WRITTEN=354583202013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:     HDFS_BYTES_WRITTEN=1563232013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:   File Input Format Counters 2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:     Bytes Read=313902013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:   Map-Reduce Framework2013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:     Map output materialized bytes=6812013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:     Map input records=10002013-10-14 15:39:44 
org.apache.hadoop.mapred.Counters log信息:     Reduce shuffle bytes=02013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:     Spilled Records=62013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:     Map output bytes=6662013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:     Total committed heap usage (bytes)=21660958722013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:     SPLIT_RAW_BYTES=1302013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:     Combine input records=02013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:     Reduce input records=32013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:     Reduce input groups=32013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:     Combine output records=02013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:     Reduce output records=32013-10-14 15:39:44 org.apache.hadoop.mapred.Counters log信息:     Map output records=32013-10-14 15:39:44 org.apache.hadoop.mapred.JobClient copyAndConfigureFiles警告: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.2013-10-14 15:39:44 org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus信息: Total input paths to process : 12013-10-14 15:39:44 org.apache.hadoop.mapred.JobClient monitorAndPrintJob信息: Running job: job_local_00122013-10-14 15:39:44 org.apache.hadoop.mapred.Task initialize信息:  Using ResourceCalculatorPlugin : null2013-10-14 15:39:44 org.apache.hadoop.mapred.Task done信息: Task:attempt_local_0012_m_000000_0 is done. 
And is in the process of commiting2013-10-14 15:39:44 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: 2013-10-14 15:39:44 org.apache.hadoop.mapred.Task commit信息: Task attempt_local_0012_m_000000_0 is allowed to commit now2013-10-14 15:39:44 org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter commitTask信息: Saved output of task 'attempt_local_0012_m_000000_0' to hdfs://192.168.1.210:9000/user/hdfs/mix_data/result/clusteredPoints2013-10-14 15:39:44 org.apache.hadoop.mapred.LocalJobRunner$Job statusUpdate信息: 2013-10-14 15:39:44 org.apache.hadoop.mapred.Task sendDone信息: Task 'attempt_local_0012_m_000000_0' done.2013-10-14 15:39:45 org.apache.hadoop.mapred.JobClient monitorAndPrintJob信息:  map 100% reduce 0%2013-10-14 15:39:45 org.apache.hadoop.mapred.JobClient monitorAndPrintJob信息: Job complete: job_local_00122013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log信息: Counters: 112013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log信息:   File Output Format Counters 2013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log信息:     Bytes Written=415202013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log信息:   File Input Format Counters 2013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log信息:     Bytes Read=313902013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log信息:   FileSystemCounters2013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log信息:     FILE_BYTES_READ=185603742013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log信息:     HDFS_BYTES_READ=4372032013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log信息:     FILE_BYTES_WRITTEN=194503252013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log信息:     HDFS_BYTES_WRITTEN=1204172013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log信息:   Map-Reduce Framework2013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log信息:     Map input records=10002013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log信息:     Spilled Records=02013-10-14 15:39:45 org.apache.hadoop.mapred.Counters 
log信息:     Total committed heap usage (bytes)=10830479362013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log信息:     SPLIT_RAW_BYTES=1302013-10-14 15:39:45 org.apache.hadoop.mapred.Counters log信息:     Map output records=1000Dumping out clusters from clusters: hdfs://192.168.1.210:9000/user/hdfs/mix_data/result/clusters-*-final and clusteredPoints: hdfs://192.168.1.210:9000/user/hdfs/mix_data/result/clusteredPointsCL-552{n=443 c=[1.631, -0.412] r=[1.563, 1.407]}    Weight : [props - optional]:  Point:    1.0: [-2.393, 3.347]    1.0: [-4.364, 1.905]    1.0: [-3.275, 0.023]    1.0: [-2.479, 2.534]    1.0: [-0.559, 1.223]    ...CL-847{n=77 c=[-2.953, -0.971] r=[1.767, 2.189]}    Weight : [props - optional]:  Point:    1.0: [-0.883, -3.320]    1.0: [-1.099, -6.063]    1.0: [-0.004, -0.610]    1.0: [-2.996, -3.610]    1.0: [3.988, 1.008]    ...CL-823{n=480 c=[0.219, 2.600] r=[1.479, 1.385]}    Weight : [props - optional]:  Point:    1.0: [2.670, 1.851]    1.0: [2.177, 6.773]    1.0: [5.537, 2.651]    1.0: [5.663, 6.868]    1.0: [5.117, 3.747]    1.0: [1.912, 2.959]    ...

4). Interpreting the clustering results
The log above can be read in three parts:
a. Environment initialization
b. Algorithm execution
c. Printing the clustering results


a. Environment initialization
Initialize the HDFS data and working directories and upload the data file.

Delete: hdfs://192.168.1.210:9000/user/hdfs/mix_data
Create: hdfs://192.168.1.210:9000/user/hdfs/mix_data
copy from: datafile/randomData.csv to hdfs://192.168.1.210:9000/user/hdfs/mix_data
ls: hdfs://192.168.1.210:9000/user/hdfs/mix_data
==========================================================
name: hdfs://192.168.1.210:9000/user/hdfs/mix_data/randomData.csv, folder: false, size: 36655

b. Algorithm execution
The algorithm runs in three steps:
1): Convert the raw data randomData.csv into Mahout sequence files of VectorWritable.
2): Randomly select 3 kmeans centers as the initial clusters.
3): Run the MapReduce jobs for the configured number of iterations.


1): Convert the raw data randomData.csv into Mahout sequence files of VectorWritable.
Source code:

 InputDriver.runJob(new Path(inPath), new Path(seqFile), "org.apache.mahout.math.RandomAccessSparseVector");

Log output:

Job complete: job_local_0001
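What InputDriver.runJob does, conceptually, is read each whitespace-separated text line and write it back out as a vector (wrapped in a VectorWritable) in a sequence file. As an illustration only, a plain-Java stand-in for that parsing step (the class and method names here are hypothetical, not Mahout's code) might look like:

```java
// Hypothetical stand-in: turn one whitespace-separated text line into a
// numeric vector, the core of what InputDriver does per input record.
public class LineToVector {
    public static double[] parse(String line) {
        String[] tokens = line.trim().split("\\s+");
        double[] v = new double[tokens.length];
        for (int i = 0; i < tokens.length; i++) {
            v[i] = Double.parseDouble(tokens[i]);
        }
        return v;
    }
}
```

This is also why the comma-separated file had to be changed to space-separated first: the default tokenizer splits on whitespace.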

2): Randomly select 3 kmeans centers as the initial clusters.
Source code:

int k = 3;
Path seqFilePath = new Path(seqFile);
Path clustersSeeds = new Path(seeds);
DistanceMeasure measure = new EuclideanDistanceMeasure();
clustersSeeds = RandomSeedGenerator.buildRandom(conf, seqFilePath, clustersSeeds, k, measure);

Log output:

Job complete: job_local_0002
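The idea behind RandomSeedGenerator.buildRandom is to pick k input vectors uniformly at random to serve as the initial centers. A minimal single-pass sketch of that idea (reservoir sampling; this class is an illustration, not Mahout's implementation):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Illustrative sketch: choose k points uniformly at random in one pass,
// without knowing the input size up front (reservoir sampling).
public class RandomSeeds {
    public static List<double[]> pick(List<double[]> points, int k, long seed) {
        Random rnd = new Random(seed);
        List<double[]> reservoir = new ArrayList<>();
        for (int i = 0; i < points.size(); i++) {
            if (i < k) {
                reservoir.add(points.get(i));          // fill the reservoir
            } else {
                int j = rnd.nextInt(i + 1);            // keep point i with prob k/(i+1)
                if (j < k) reservoir.set(j, points.get(i));
            }
        }
        return reservoir;
    }
}
```

Because the seeds are random, every run of the job can start from different centers, which is why repeated runs can converge to different clusterings.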

3): Run the MapReduce jobs for the configured number of iterations.
Source code:

KMeansDriver.run(conf, seqFilePath, clustersSeeds, new Path(outPath), measure, 0.01, 10, true, 0.01, false);

Log output:

Job complete: job_local_0003
Job complete: job_local_0004
Job complete: job_local_0005
Job complete: job_local_0006
Job complete: job_local_0007
Job complete: job_local_0008
Job complete: job_local_0009
Job complete: job_local_0010
Job complete: job_local_0011
Job complete: job_local_0012
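Each of these jobs is one Lloyd iteration: the map phase assigns every point to its nearest center, the reduce phase recomputes each center as the mean of its assigned points, and the driver stops when no center moves more than the convergence delta (0.01 above) or the iteration limit (10) is reached. A small in-memory sketch of that loop (an illustrative stand-in for the MapReduce jobs, not Mahout's implementation):

```java
// Illustrative sketch of one KMeansDriver run: repeated assign/update
// steps with Euclidean distance, a convergence delta, and a max iteration
// count, all in memory instead of MapReduce.
public class KMeansSketch {
    static double dist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return Math.sqrt(s);                           // EuclideanDistanceMeasure
    }

    public static double[][] run(double[][] points, double[][] centers,
                                 double delta, int maxIter) {
        int k = centers.length, d = centers[0].length;
        for (int iter = 0; iter < maxIter; iter++) {
            double[][] sum = new double[k][d];
            int[] count = new int[k];
            for (double[] p : points) {                // "map": nearest center
                int best = 0;
                for (int c = 1; c < k; c++)
                    if (dist(p, centers[c]) < dist(p, centers[best])) best = c;
                count[best]++;
                for (int i = 0; i < d; i++) sum[best][i] += p[i];
            }
            double moved = 0;
            for (int c = 0; c < k; c++) {              // "reduce": new mean
                if (count[c] == 0) continue;
                double[] mean = new double[d];
                for (int i = 0; i < d; i++) mean[i] = sum[c][i] / count[c];
                moved = Math.max(moved, dist(mean, centers[c]));
                centers[c] = mean;
            }
            if (moved <= delta) break;                 // converged early
        }
        return centers;
    }
}
```

In the log above, convergence happened only at the iteration limit, so all 10 iteration jobs (job_local_0003 through job_local_0011 for the iterations, plus job_local_0012 for the final point classification) were executed.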

c. Printing the clustering results

Dumping out clusters from clusters: hdfs://192.168.1.210:9000/user/hdfs/mix_data/result/clusters-*-final and clusteredPoints: hdfs://192.168.1.210:9000/user/hdfs/mix_data/result/clusteredPoints
CL-552{n=443 c=[1.631, -0.412] r=[1.563, 1.407]}
CL-847{n=77 c=[-2.953, -0.971] r=[1.767, 2.189]}
CL-823{n=480 c=[0.219, 2.600] r=[1.479, 1.385]}

Result: there are 3 centers.

Cluster 1: 443 points, center at [1.631, -0.412]
Cluster 2: 77 points, center at [-2.953, -0.971]
Cluster 3: 480 points, center at [0.219, 2.600]
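Each header line in the dump encodes the cluster id (e.g. CL-552), the point count n, the center c, and the radius r. If you need these values programmatically rather than by eye, they can be pulled out with a small parser (an illustrative helper, not part of Mahout):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative helper: extract n and c from a ClusterDumper header line
// such as "CL-552{n=443 c=[1.631, -0.412] r=[1.563, 1.407]}".
public class ClusterLine {
    private static final Pattern P =
        Pattern.compile("CL-(\\d+)\\{n=(\\d+) c=\\[([^\\]]+)\\] r=\\[([^\\]]+)\\]\\}");

    public static int size(String line) {
        Matcher m = P.matcher(line);
        if (!m.find()) throw new IllegalArgumentException(line);
        return Integer.parseInt(m.group(2));
    }

    public static double[] center(String line) {
        Matcher m = P.matcher(line);
        if (!m.find()) throw new IllegalArgumentException(line);
        String[] parts = m.group(3).split(",\\s*");
        double[] c = new double[parts.length];
        for (int i = 0; i < parts.length; i++) c[i] = Double.parseDouble(parts[i]);
        return c;
    }
}
```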


5). Directories created on HDFS

# root directory
~ hadoop fs -ls /user/hdfs/mix_data
Found 4 items
-rw-r--r--   3 Administrator supergroup      36655 2013-10-04 15:31 /user/hdfs/mix_data/randomData.csv
drwxr-xr-x   - Administrator supergroup          0 2013-10-04 15:31 /user/hdfs/mix_data/result
drwxr-xr-x   - Administrator supergroup          0 2013-10-04 15:31 /user/hdfs/mix_data/seeds
drwxr-xr-x   - Administrator supergroup          0 2013-10-04 15:31 /user/hdfs/mix_data/seqfile

# output directory
~ hadoop fs -ls /user/hdfs/mix_data/result
Found 13 items
-rw-r--r--   3 Administrator supergroup        194 2013-10-04 15:31 /user/hdfs/mix_data/result/_policy
drwxr-xr-x   - Administrator supergroup          0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusteredPoints
drwxr-xr-x   - Administrator supergroup          0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-0
drwxr-xr-x   - Administrator supergroup          0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-1
drwxr-xr-x   - Administrator supergroup          0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-10-final
drwxr-xr-x   - Administrator supergroup          0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-2
drwxr-xr-x   - Administrator supergroup          0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-3
drwxr-xr-x   - Administrator supergroup          0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-4
drwxr-xr-x   - Administrator supergroup          0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-5
drwxr-xr-x   - Administrator supergroup          0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-6
drwxr-xr-x   - Administrator supergroup          0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-7
drwxr-xr-x   - Administrator supergroup          0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-8
drwxr-xr-x   - Administrator supergroup          0 2013-10-04 15:31 /user/hdfs/mix_data/result/clusters-9

# directory of the generated random center seeds
~ hadoop fs -ls /user/hdfs/mix_data/seeds
Found 1 items
-rw-r--r--   3 Administrator supergroup        599 2013-10-04 15:31 /user/hdfs/mix_data/seeds/part-randomSeed

# directory of the input converted to Mahout-format files
~ hadoop fs -ls /user/hdfs/mix_data/seqfile
Found 2 items
-rw-r--r--   3 Administrator supergroup          0 2013-10-04 15:31 /user/hdfs/mix_data/seqfile/_SUCCESS
-rw-r--r--   3 Administrator supergroup      31390 2013-10-04 15:31 /user/hdfs/mix_data/seqfile/part-m-00000

4. Visualizing the results with R

Save the clustered points into separate cluster*.csv files, then plot them with R:

c1<-read.csv(file="cluster1.csv",sep=",",header=FALSE)
c2<-read.csv(file="cluster2.csv",sep=",",header=FALSE)
c3<-read.csv(file="cluster3.csv",sep=",",header=FALSE)
y<-rbind(c1,c2,c3)
cols<-c(rep(1,nrow(c1)),rep(2,nrow(c2)),rep(3,nrow(c3)))
plot(y, col=c("black","blue","green")[cols])
center<-matrix(c(1.631, -0.412,-2.953, -0.971,0.219, 2.600),ncol=2,byrow=TRUE)
points(center, col="violetred", pch = 19)

[Figure: scatter plot of the three clusters (black, blue, green points) with the 3 kmeans centers drawn as solid violet points]
In the figure, the black, blue, and green hollow points are the original data.
The 3 solid violet points are the 3 centers produced by Mahout's kmeans.
Comparing with the kmeans classification and centers computed in R in the article 用Maven构建Mahout项目, the results are not quite the same.
To summarize: kmeans produces different results depending on the distance measure, the convergence threshold, the initial centers, and the number of iterations. So in practice kmeans gives us only a rough classification standard. That standard is very helpful for getting to know an unfamiliar dataset, but it cannot serve as a precise measure of the data.


5. Template project uploaded to github

You can clone this project as a starting point for your own development:

~ git clone https://github.com/bsspirit/maven_mahout_template
~ git checkout mahout-0.8