HBase Access via MapReduce
Overview:
HBase extends the MapReduce API so that MapReduce jobs can conveniently read from and write to HTable data.
A simple example:
Goal: from an access-log table, count the total number of times each IP accessed each site path.
package man.ludq.hbase;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.mapreduce.TableReducer;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;

public class ExampleTotalMapReduce {

    public static void main(String[] args) {
        try {
            Configuration config = HBaseConfiguration.create();
            Job job = new Job(config, "ExampleSummary");
            job.setJarByClass(ExampleTotalMapReduce.class); // class that contains mapper and reducer

            Scan scan = new Scan();
            scan.setCaching(500);       // 1 is the default in Scan, which will be bad for MapReduce jobs
            scan.setCacheBlocks(false); // don't set to true for MR jobs
            // set other scan attrs here, e.g.:
            // scan.addColumn(family, qualifier);

            TableMapReduceUtil.initTableMapperJob(
                    "access-log",      // input table
                    scan,              // Scan instance to control CF and attribute selection
                    MyMapper.class,    // mapper class
                    Text.class,        // mapper output key
                    IntWritable.class, // mapper output value
                    job);
            TableMapReduceUtil.initTableReducerJob(
                    "total-access",       // output table
                    MyTableReducer.class, // reducer class
                    job);
            job.setNumReduceTasks(1); // at least one, adjust as required

            boolean b = job.waitForCompletion(true);
            if (!b) {
                throw new IOException("error with job!");
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // Mapper: rowkeys of the access-log table look like "ip-...", and the
    // "info:url" cell holds the path that was hit. Emit ("ip&url", 1).
    public static class MyMapper extends TableMapper<Text, IntWritable> {

        private final IntWritable ONE = new IntWritable(1);
        private Text text = new Text();

        @Override
        public void map(ImmutableBytesWritable row, Result value, Context context)
                throws IOException, InterruptedException {
            String ip = Bytes.toString(row.get()).split("-")[0];
            String url = Bytes.toString(value.getValue(Bytes.toBytes("info"), Bytes.toBytes("url")));
            text.set(ip + "&" + url);
            context.write(text, ONE);
        }
    }

    // Reducer: sum the counts for each "ip&url" key and write the total to
    // the "info:count" column of the total-access table.
    public static class MyTableReducer extends TableReducer<Text, IntWritable, ImmutableBytesWritable> {

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            // Use Bytes.toBytes(key.toString()) rather than key.getBytes():
            // Text.getBytes() returns the backing array, which may be longer
            // than the actual key.
            Put put = new Put(Bytes.toBytes(key.toString()));
            put.add(Bytes.toBytes("info"), Bytes.toBytes("count"), Bytes.toBytes(String.valueOf(sum)));
            context.write(null, put);
        }
    }
}
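The map/reduce core of the job (split the IP off the rowkey, build an "ip&url" composite key, sum the ones per key) can be exercised without a cluster. The sketch below is a hypothetical standalone simulation of that logic, not part of the HBase API; the rowkey layout "ip-timestamp" is an assumption carried over from the example above.

```java
import java.util.HashMap;
import java.util.Map;

// Standalone sketch (no HBase dependencies) of the example job's counting logic.
public class AccessCountSketch {

    // Mimics MyMapper: derive the composite key from a rowkey and a url cell value.
    static String mapKey(String rowKey, String url) {
        String ip = rowKey.split("-")[0]; // same split the mapper applies to the rowkey
        return ip + "&" + url;
    }

    // Mimics the shuffle plus MyTableReducer: count occurrences of each composite key.
    static Map<String, Integer> countAccesses(String[][] rows) {
        Map<String, Integer> totals = new HashMap<>();
        for (String[] row : rows) {               // row[0] = rowkey, row[1] = info:url value
            String key = mapKey(row[0], row[1]);
            totals.merge(key, 1, Integer::sum);   // reducer's "sum += val.get()"
        }
        return totals;
    }

    public static void main(String[] args) {
        String[][] rows = {
            {"10.0.0.1-1650000000", "/index"},
            {"10.0.0.1-1650000001", "/index"},
            {"10.0.0.2-1650000002", "/login"},
        };
        // "10.0.0.1&/index" should total 2, "10.0.0.2&/login" should total 1.
        System.out.println(countAccesses(rows));
    }
}
```

In the real job the grouping by key is done by the MapReduce shuffle; the `HashMap` here only stands in for that step.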
References:
1. Reading from and writing to HBase with MapReduce (reads from table A, writes the summary to table B; very detailed, with code and a walkthrough of the flow)
http://sujee.net/tech/articles/hadoop/hbase-map-reduce-freq-counter/
2. MapReduce over HBase (official documentation, covering read, read/write, multi-table output, output to files, output to an RDBMS, and accessing other HBase tables from within a job)
http://abloz.com/hbase/book.html#mapreduce.example