The HBase Coprocessors Mechanism
Source: Internet · 程序博客网 · 2024/05/02 00:27
Official Apache HBase introduction: Coprocessor Introduction
Introduction
HBase has provided the coprocessor mechanism since version 0.92. In the official words: "What we have built is a framework that provides a library and runtime environment for executing user code within the HBase region server and master processes." An HBase coprocessor deployment is loosely analogous to a MapReduce job running across the RegionServers and the Master: each RegionServer plays the mapper role, while the reduce step of merging the per-region results is performed by the caller.
The coprocessor framework provides RPC-based parallel computation (versions 0.95 and later use Google Protobuf for the RPC layer, rather than the mechanism described in older official documentation). Version 0.96 offers three kinds of coprocessor:
- Coprocessor: provides region lifecycle management hooks, e.g., region open/close/split/flush/compact operations.
- RegionObserver: provides hooks for monitoring table operations from the client side, such as table get/put/scan/delete, etc.
- Endpoint: provides on-demand triggers for any arbitrary function executed at a region. One use case is column aggregation at the region server.
Note that coprocessor APIs differ between HBase versions; consult the API doc for the version you are using.
Benefits
As is well known, HBase derives from Google BigTable and is built to store MASSIVE data. In contrast to the many small, strictly structured tables of a relational database, an HBase table is typically one loosely structured big table, split into regions and stored in distributed fashion on Hadoop HDFS. Distribution implies network transfer: for an aggregation query, the application would normally have to pull large amounts of data from every RegionServer and aggregate it on the client. With a coprocessor, each RegionServer instead aggregates the rows matching the query locally and returns only its partial result to the client. Compared with the naive approach, the coprocessor mechanism has these advantages:
- The RegionServers aggregate in parallel, improving speed;
- Each RegionServer returns only its aggregate result to the client, greatly reducing network transfer;
- The client only has to process the partial aggregates returned by the RegionServers, greatly reducing the load on the client machine.
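The savings can be seen in a toy simulation (plain Java, no HBase involved; the three-region split below is invented for illustration): each "region" returns only its partial sum, and the client merges the partials instead of pulling every row over the network.

```java
import java.util.Arrays;
import java.util.List;

public class PartialAggregation {
    // Simulates the work one RegionServer does for a sum endpoint:
    // aggregate locally, return one number instead of every matching row.
    static long regionPartialSum(long[] regionRows) {
        long sum = 0;
        for (long v : regionRows) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        // Three hypothetical regions of one table.
        List<long[]> regions = Arrays.asList(
            new long[] {1, 2, 3},
            new long[] {4, 5},
            new long[] {6, 7, 8, 9});

        // Client side: merge the per-region partials (the "reducer" step).
        long total = 0;
        for (long[] region : regions) {
            total += regionPartialSum(region); // only one value crosses the "network"
        }
        System.out.println(total); // 45
    }
}
```

With N regions, the client receives N partial results rather than every matching cell, which is where the bandwidth and client-CPU savings in the list above come from.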
Implementation workflow:
- Define a protocol buffer Service and supporting Message types for the RPC methods. See the protocol buffer guide for more details on defining services. (In other words, write a .proto file defining the request/response message structures used in the RPC exchange and the service being provided.)
- Generate the Service and Message code using the protoc compiler (run protoc filename.proto --java_out=OUT_DIR to produce the corresponding Java files).
- Implement the generated Service interface in your coprocessor class and implement the CoprocessorService interface. The CoprocessorService.getService() method should return a reference to the Endpoint's protocol buffer Service instance. (Implement the service defined in the protocol: the server side is what each RegionServer executes in parallel, i.e. the mapper; the client side collects the per-region results and aggregates them once more into the final result, i.e. the reducer.)
The walk-through below uses the Aggregation feature from the HBase source as an example. The files involved are Aggregate.proto, AggregateProtos.java, AggregateImplementation.java, and AggregationClient.java. (hbase-0.96.1.1-cdh5.0.2 does not ship this feature directly, so its source tree lacks the last two files, but they can be found in other versions.)
Aggregate.proto
```proto
message AggregateRequest {
  required string interpreter_class_name = 1;
  required Scan scan = 2;
  optional bytes interpreter_specific_bytes = 3;
}

message AggregateResponse {
  repeated bytes first_part = 1;
  optional bytes second_part = 2;
}

service AggregateService {
  rpc GetMax (AggregateRequest) returns (AggregateResponse);
  rpc GetMin (AggregateRequest) returns (AggregateResponse);
  rpc GetSum (AggregateRequest) returns (AggregateResponse);
  rpc GetRowNum (AggregateRequest) returns (AggregateResponse);
  rpc GetAvg (AggregateRequest) returns (AggregateResponse);
  rpc GetStd (AggregateRequest) returns (AggregateResponse);
  rpc GetMedian (AggregateRequest) returns (AggregateResponse);
}
```
Reading: this file defines the protocol through which client and server communicate. It contains the request structure AggregateRequest, the response structure AggregateResponse, and the set of services provided, AggregateService.
In AggregateRequest, interpreter_class_name is the class name of the column interpreter, whose job is to convert data from its stored representation into the required type. HBase stores everything as byte arrays, so raw data must be serialized to bytes on insert, and the same value serialized from different types produces different byte arrays: the string "1" encodes (UTF-8) to a single byte, while converting "1" to a Long first and then to bytes yields an 8-byte array, because Java allocates different amounts of memory to different types. For a concrete example, see LongColumnInterpreter.java in the HBase source, which converts between the storage type and the target type and performs the basic arithmetic on the target type.
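The encoding difference can be demonstrated with the JDK alone (no HBase dependency; HBase's Bytes.toBytes(String) uses UTF-8 and Bytes.toBytes(long) a fixed 8-byte big-endian encoding, which the standard-library calls below mirror):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class EncodingLengths {
    public static void main(String[] args) {
        // The string "1" serialized as UTF-8 is a single byte.
        byte[] asString = "1".getBytes(StandardCharsets.UTF_8);
        // The long value 1L serialized as a fixed-width value is 8 bytes.
        byte[] asLong = ByteBuffer.allocate(Long.BYTES).putLong(1L).array();
        System.out.println(asString.length); // 1
        System.out.println(asLong.length);   // 8
    }
}
```

This is why the interpreter class name travels in the request: the RegionServer must deserialize stored cells with exactly the same convention the writer used, or the sums will be computed over garbage.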
AggregateProtos.java
The generated code is too long to include here. This file is produced automatically by the protobuf compiler via "protoc filename.proto --java_out=OUT_DIR". It turns the message structures and service defined in the .proto file into an RPC implementation in the target language, including how request and response messages are built and how their contents are handled; AggregateService is wrapped as an abstract class whose methods expose the individual services.
AggregateImplementation.java
```java
@InterfaceAudience.Private
public class AggregateImplementation<T, S, P extends Message, Q extends Message, R extends Message>
    extends AggregateService implements CoprocessorService, Coprocessor {
  protected static final Log log = LogFactory.getLog(AggregateImplementation.class);
  private RegionCoprocessorEnvironment env;

  @Override
  public void getSum(RpcController controller, AggregateRequest request,
      RpcCallback<AggregateResponse> done) {
    AggregateResponse response = null;
    InternalScanner scanner = null;
    long sum = 0l;
    try {
      ColumnInterpreter<T, S, P, Q, R> ci = constructColumnInterpreterFromRequest(request);
      S sumVal = null;
      T temp;
      Scan scan = ProtobufUtil.toScan(request.getScan());
      scanner = env.getRegion().getScanner(scan);
      byte[] colFamily = scan.getFamilies()[0];
      NavigableSet<byte[]> qualifiers = scan.getFamilyMap().get(colFamily);
      byte[] qualifier = null;
      if (qualifiers != null && !qualifiers.isEmpty()) {
        qualifier = qualifiers.pollFirst();
      }
      List<Cell> results = new ArrayList<Cell>();
      boolean hasMoreRows = false;
      do {
        hasMoreRows = scanner.next(results);
        for (Cell kv : results) {
          temp = ci.getValue(colFamily, qualifier, kv);
          if (temp != null)
            sumVal = ci.add(sumVal, ci.castToReturnType(temp));
        }
        results.clear();
      } while (hasMoreRows);
      if (sumVal != null) {
        response = AggregateResponse.newBuilder()
            .addFirstPart(ci.getProtoForPromotedType(sumVal).toByteString())
            .build();
      }
    } catch (IOException e) {
      ResponseConverter.setControllerException(controller, e);
    } finally {
      if (scanner != null) {
        try {
          scanner.close();
        } catch (IOException ignored) {
        }
      }
    }
    log.debug("Sum from this region is "
        + env.getRegion().getRegionNameAsString() + ": " + sum);
    done.run(response);
  }

  @SuppressWarnings("unchecked")
  ColumnInterpreter<T, S, P, Q, R> constructColumnInterpreterFromRequest(
      AggregateRequest request) throws IOException {
    String className = request.getInterpreterClassName();
    Class<?> cls;
    try {
      cls = Class.forName(className);
      ColumnInterpreter<T, S, P, Q, R> ci = (ColumnInterpreter<T, S, P, Q, R>) cls.newInstance();
      if (request.hasInterpreterSpecificBytes()) {
        ByteString b = request.getInterpreterSpecificBytes();
        P initMsg = ProtobufUtil.getParsedGenericInstance(ci.getClass(), 2, b);
        ci.initialize(initMsg);
      }
      return ci;
    } catch (ClassNotFoundException e) {
      throw new IOException(e);
    } catch (InstantiationException e) {
      throw new IOException(e);
    } catch (IllegalAccessException e) {
      throw new IOException(e);
    }
  }

  @Override
  public Service getService() {
    return this;
  }

  @Override
  public void start(CoprocessorEnvironment env) throws IOException {
    if (env instanceof RegionCoprocessorEnvironment) {
      this.env = (RegionCoprocessorEnvironment) env;
    } else {
      throw new CoprocessorException("Must be loaded on a table region!");
    }
  }

  @Override
  public void stop(CoprocessorEnvironment env) throws IOException {
    // nothing to do
  }
}
```

Reading: since the code is long, only the overall structure and the getSum() implementation are shown; the other services such as getMax() follow the same pattern and are omitted. The class extends the AggregateService abstract class from AggregateProtos.java and implements the CoprocessorService and Coprocessor interfaces, thereby realizing, through the coprocessor framework, each service defined in the protocol. Its execution environment is a RegionCoprocessorEnvironment, i.e. the aggregation runs on each RegionServer. Note that getSum() aggregates over an InternalScanner obtained via env.getRegion().getScanner(scan): the Scan carried in the request is global, but each RegionServer aggregates only over its own region's InternalScanner. This is the heart of the coprocessor mechanism: the value in the returned response is the aggregate over that InternalScanner alone.
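Before the client can call this endpoint, the class has to be loaded on the RegionServers. One documented way (a cluster-wide sketch; since AggregateImplementation ships inside the HBase server jar, no extra jar path is needed) is the region coprocessor property in hbase-site.xml:

```xml
<!-- hbase-site.xml: load the aggregation endpoint on every region of every table -->
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.coprocessor.AggregateImplementation</value>
</property>
```

Alternatively, a coprocessor can be attached to a single table as a table attribute through the HBase shell's alter command; either way, start() runs as each region opens and captures the RegionCoprocessorEnvironment used by getSum().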
AggregationClient.java
```java
@InterfaceAudience.Private
public class AggregationClient {
  private static final Log log = LogFactory.getLog(AggregationClient.class);
  Configuration conf;

  public AggregationClient(Configuration cfg) {
    this.conf = cfg;
  }

  /**
   * It sums up the value returned from various regions. In case qualifier
   * is null, summation of all the column qualifiers in the given family
   * is done.
   *
   * @param tableName
   * @param ci
   * @param scan
   * @return sum <S>
   * @throws Throwable
   */
  public <R, S, P extends Message, Q extends Message, T extends Message> S sum(
      final TableName tableName, final ColumnInterpreter<R, S, P, Q, T> ci,
      final Scan scan) throws Throwable {
    HTable table = null;
    try {
      table = new HTable(conf, tableName);
      return sum(table, ci, scan);
    } finally {
      if (table != null) {
        table.close();
      }
    }
  }

  /**
   * It sums up the value returned from various regions. In case qualifier
   * is null, summation of all the column qualifiers in the given family
   * is done.
   *
   * @param table
   * @param ci
   * @param scan
   * @return sum <S>
   * @throws Throwable
   */
  public <R, S, P extends Message, Q extends Message, T extends Message> S sum(
      final HTableInterface table, final ColumnInterpreter<R, S, P, Q, T> ci,
      final Scan scan) throws Throwable {
    final AggregateRequest requestArg = validateArgAndGetPB(scan, ci, false);

    class SumCallBack implements Batch.Callback<S> {
      S sumVal = null;

      public S getSumResult() {
        return sumVal;
      }

      @Override
      public synchronized void update(byte[] region, byte[] row, S result) {
        sumVal = ci.add(sumVal, result);
      }
    }

    SumCallBack sumCallBack = new SumCallBack();
    table.coprocessorService(AggregateService.class, scan.getStartRow(),
        scan.getStopRow(), new Batch.Call<AggregateService, S>() {
          @Override
          public S call(AggregateService instance) throws IOException {
            ServerRpcController controller = new ServerRpcController();
            BlockingRpcCallback<AggregateResponse> rpcCallback =
                new BlockingRpcCallback<AggregateResponse>();
            instance.getSum(controller, requestArg, rpcCallback);
            AggregateResponse response = rpcCallback.get();
            if (controller.failedOnException()) {
              throw controller.getFailedOn();
            }
            if (response.getFirstPartCount() == 0) {
              return null;
            }
            ByteString b = response.getFirstPart(0);
            T t = ProtobufUtil.getParsedGenericInstance(ci.getClass(), 4, b);
            S s = ci.getPromotedValueFromProto(t);
            return s;
          }
        }, sumCallBack);
    return sumCallBack.getSumResult();
  }

  <R, S, P extends Message, Q extends Message, T extends Message> AggregateRequest validateArgAndGetPB(
      Scan scan, ColumnInterpreter<R, S, P, Q, T> ci, boolean canFamilyBeAbsent)
      throws IOException {
    validateParameters(scan, canFamilyBeAbsent);
    final AggregateRequest.Builder requestBuilder = AggregateRequest.newBuilder();
    requestBuilder.setInterpreterClassName(ci.getClass().getCanonicalName());
    P columnInterpreterSpecificData = null;
    if ((columnInterpreterSpecificData = ci.getRequestData()) != null) {
      requestBuilder.setInterpreterSpecificBytes(columnInterpreterSpecificData.toByteString());
    }
    requestBuilder.setScan(ProtobufUtil.toScan(scan));
    return requestBuilder.build();
  }

  byte[] getBytesFromResponse(ByteString response) {
    ByteBuffer bb = response.asReadOnlyByteBuffer();
    bb.rewind();
    byte[] bytes;
    if (bb.hasArray()) {
      bytes = bb.array();
    } else {
      bytes = response.toByteArray();
    }
    return bytes;
  }
}
```

Reading: again, only the overall structure and the sum() example are shown. This class packs the ColumnInterpreter, the Scan, and the other arguments into an RPC request via validateArgAndGetPB(), then obtains the aggregate through the coprocessorService() call provided by HBase. The per-region partial results arrive in SumCallBack.update(), where they are merged into the final sum, which is the reducer step of the mechanism.
How coprocessorService() actually dispatches the per-region calls will be covered in a later post.