Cleaning server access logs with Hadoop 2.7.3: learning and applying Partitioner (Part 6)


Server access logs need cleaning for many reasons: differences in log format, log rotation cycles, the source of user visits, and so on. For example, because our server is accessed from several platforms, I needed to separate the access logs into three categories — APP, web, and H5 — and write each category to its own log file.

This is exactly what a Partitioner is for.

By definition, a Partitioner routes keys to different reducers according to custom conditions, so that each group of keys ends up in its own output file. The default is HashPartitioner, defined as follows:

/** Partition keys by their {@link Object#hashCode()}. */
public class HashPartitioner<K, V> extends Partitioner<K, V> {

  /** Use {@link Object#hashCode()} to partition. */
  public int getPartition(K key, V value, int numReduceTasks) {
    return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
  }
}
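To see what this formula does, here is a minimal standalone sketch (class and method names are illustrative, not from the original): it applies the same arithmetic as HashPartitioner — mask off the sign bit, then take the remainder modulo the number of reducers — so any given key always lands on the same reducer, but *which* reducer is effectively arbitrary.

```java
public class HashPartitionDemo {

    // Same arithmetic as HashPartitioner.getPartition: clear the sign bit
    // so the result is non-negative, then mod by the reducer count.
    static int partitionFor(String key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        String[] keys = {
            "/mobile/api/handle.html",
            "/pc/api/handle.html",
            "/h5/js/poposlides.js"
        };
        for (String key : keys) {
            // Deterministic per key, but unrelated to the platform prefix.
            System.out.println(key + " -> reducer " + partitionFor(key, 3));
        }
    }
}
```

Because the partition depends only on the hash, keys from the same platform can scatter across all three reducers, which is why the default cannot satisfy the "one file per platform" requirement.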

The default HashPartitioner cannot split the combined access log into the three files we need, so we supply our own implementation of this class.

Suppose the log file contains the following:

183.136.190.40 - - [18/Mar/2017:03:56:58 +0800] "GET /mobile/api/handle.html HTTP/1.1" 200 574 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.153 Safari/537.36"
183.39.91.88 - - [18/Mar/2017:11:06:25 +0800] "GET /pc/api/handle.html HTTP/1.1" 200 964 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36"
183.39.91.88 - - [18/Mar/2017:11:06:25 +0800] "GET /mobile/css/poposlides.css HTTP/1.1" 200 1855 "http://misbike.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36"
183.39.91.88 - - [18/Mar/2017:11:06:25 +0800] "GET /pc/js/jquery-1.8.3.min.js HTTP/1.1" 200 37522 "http://misbike.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36"
183.39.91.88 - - [18/Mar/2017:11:06:25 +0800] "GET /h5/js/poposlides.js HTTP/1.1" 200 1544 "http://misbike.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36"
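Because the fields of each log line are space-separated, splitting on spaces puts the request path at index 6, which is how the Mapper later pulls it out. A quick sketch (class and method names are illustrative, not from the original code):

```java
public class LogFieldDemo {

    // Split one access-log line on spaces; index 6 is the request path
    // (0=ip, 1..2=dashes, 3..4=timestamp, 5=method, 6=path, ...).
    static String requestPath(String logLine) {
        String[] fields = logLine.split(" ");
        return fields.length > 6 ? fields[6] : "";
    }

    public static void main(String[] args) {
        String line = "183.136.190.40 - - [18/Mar/2017:03:56:58 +0800] "
                + "\"GET /mobile/api/handle.html HTTP/1.1\" 200 574";
        System.out.println(requestPath(line)); // prints /mobile/api/handle.html
    }
}
```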

In the logs above, each platform's endpoints are distinguished by a URL prefix: APP requests use /mobile/, H5 uses /h5/, and PC uses /pc/.
The code is as follows:
public class Kpi_partitioner {

    /**
     * @ClassName: RequestUrlMapper
     * @author zhouyangzyi@163.com
     * @date 2017-07-12
     */
    public static class RequestUrlMapper extends Mapper<Object, Text, Text, KpiBean> {

        private KpiBean bean = new KpiBean();
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            if (value.toString().indexOf("\\") == -1) { // filter out malformed requests
                bean = StringHandleUtils.filterLog(value.toString());
                if (bean.isValid()) {
                    String[] fields = value.toString().split(" ");
                    String request = fields[6];
                    if (request != null && !"".equals(request)) {
                        word.set(request);
                        Integer requestcount = 1; // each occurrence counts as 1
                        bean.setRequestCount(request, requestcount);
                        context.write(word, bean);
                    }
                }
            }
        }
    }

    /**
     * @ClassName: RequestUrlReducer
     * @author zhouyangzyi@163.com
     * @date 2017-07-12
     */
    public static class RequestUrlReducer extends Reducer<Text, KpiBean, Text, KpiBean> {

        private KpiBean bean = new KpiBean();

        public void reduce(Text key, Iterable<KpiBean> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (KpiBean val : values) {
                sum += val.getRequestcount();
            }
            bean.setRequestCount("", sum);
            context.write(key, bean);
        }
    }

    /**
     * @ClassName: KpiPartitioner
     * @Description: custom partitioner
     * @author zhouyangzyi@163.com
     * @date 2017-07-12
     */
    public static class KpiPartitioner extends Partitioner<Text, KpiBean> {

        @Override
        public int getPartition(Text key, KpiBean value, int numPartitions) {
            String str = key.toString();
            if (str.indexOf("/mobile/") > -1)
                return 0 % numPartitions;
            if (str.indexOf("/pc/") > -1)
                return 1 % numPartitions;
            if (str.indexOf("/h5/") > -1)
                return 2 % numPartitions;
            return 1;
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "request url partitioner");
        job.setJarByClass(Kpi_partitioner.class);
        job.setMapperClass(RequestUrlMapper.class);
        job.setCombinerClass(RequestUrlReducer.class);
        job.setReducerClass(RequestUrlReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(KpiBean.class);
        // replace the default Partitioner for the reduce phase
        job.setPartitionerClass(KpiPartitioner.class);
        // the number of reduce tasks must be set to match the partition count
        job.setNumReduceTasks(3);
        FileInputFormat.addInputPath(job,
                new Path("hdfs://139.199.224.239:9000/user/hadoop/miqilog5Input"));
        FileOutputFormat.setOutputPath(job,
                new Path("hdfs://139.199.224.239:9000/user/hadoop/miqilog5Output"));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
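The routing rule inside KpiPartitioner can also be checked outside a MapReduce job. The sketch below (class and method names are illustrative) reproduces the same logic, including the original's fallback of returning partition 1 for any path with no recognized prefix:

```java
public class PlatformPartitionDemo {

    // Same routing rule as KpiPartitioner.getPartition above.
    static int platformPartition(String requestPath, int numPartitions) {
        if (requestPath.contains("/mobile/")) return 0 % numPartitions; // APP logs
        if (requestPath.contains("/pc/"))     return 1 % numPartitions; // PC logs
        if (requestPath.contains("/h5/"))     return 2 % numPartitions; // H5 logs
        return 1; // fallback, mirroring the original's default
    }

    public static void main(String[] args) {
        System.out.println(platformPartition("/mobile/api/handle.html", 3));   // prints 0
        System.out.println(platformPartition("/pc/js/jquery-1.8.3.min.js", 3)); // prints 1
        System.out.println(platformPartition("/h5/js/poposlides.js", 3));       // prints 2
    }
}
```

With setNumReduceTasks(3), partitions 0, 1, and 2 each get a dedicated reducer, so the job writes three output files: part-r-00000 for APP, part-r-00001 for PC, and part-r-00002 for H5.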


Result:

