Phoenix query throws InsufficientMemoryException


A Phoenix data query on a test cluster throws the following exception:

org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: C_PICRECORD,,1463484729784.fccf0ba342b4fe6660edb4081ac498ad.: Requested memory of 142000000 bytes could not be allocated from remaining memory of 710007810 bytes from global pool of 781752729 bytes after waiting for 10000ms.
    at org.apache.phoenix.util.ServerUtil.createIOException(ServerUtil.java:84)
    at org.apache.phoenix.util.ServerUtil.throwIOException(ServerUtil.java:52)
    at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:205)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$52.call(RegionCoprocessorHost.java:1308)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1663)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1738)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1702)
    at org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.postScannerOpen(RegionCoprocessorHost.java:1303)
    at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2119)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31443)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2035)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.phoenix.memory.InsufficientMemoryException: Requested memory of 142000000 bytes could not be allocated from remaining memory of 710007810 bytes from global pool of 781752729 bytes after waiting for 10000ms.
    at org.apache.phoenix.memory.GlobalMemoryManager.allocateBytes(GlobalMemoryManager.java:78)
    at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:97)
    at org.apache.phoenix.memory.GlobalMemoryManager.allocate(GlobalMemoryManager.java:103)
    at org.apache.phoenix.coprocessor.ScanRegionObserver.getTopNScanner(ScanRegionObserver.java:233)
    at org.apache.phoenix.coprocessor.ScanRegionObserver.doPostScannerOpen(ScanRegionObserver.java:220)
    at org.apache.phoenix.coprocessor.BaseScannerRegionObserver.postScannerOpen(BaseScannerRegionObserver.java:178)
    ... 12 more

The table holds tens of millions of rows (44 columns). A secondary index table was built, and the query is expected to use it. The query is:

SELECT
    ID,
    WAY_NUM,
    CAR_NUM,
    ORG_NAME,
    ORG_ID,
    CAR_SPEED,
    CAR_NUM_TYPE,
    CAR_NUM_COLOR,
    CAR_TYPE,
    CAR_COLOR,
    CAR_LOGO_TYPE,
    LONGITUDE,
    LATITUDE,
    CAR_IMG_URL,
    CAP_DATE,
    GROUPID,
    DEV_CHN_NAME
FROM
    C_PICRECORD
WHERE
    CAP_DATE >= '2016-05-05 11:14:40'
AND CAP_DATE <= '2016-05-12 11:14:40'
AND CAR_NUM = '粤VFX350'
AND CAP_TYPE = 0
ORDER BY CAP_DATE ASC
LIMIT 100000
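Before touching memory settings, it is worth double-checking that the query really does hit the secondary index. A quick way (assuming access to sqlline.py or any JDBC client) is to prefix the statement with EXPLAIN:

EXPLAIN
SELECT ID, WAY_NUM, CAR_NUM, ORG_NAME, ORG_ID, CAR_SPEED, CAR_NUM_TYPE,
       CAR_NUM_COLOR, CAR_TYPE, CAR_COLOR, CAR_LOGO_TYPE, LONGITUDE, LATITUDE,
       CAR_IMG_URL, CAP_DATE, GROUPID, DEV_CHN_NAME
FROM C_PICRECORD
WHERE CAP_DATE >= '2016-05-05 11:14:40'
  AND CAP_DATE <= '2016-05-12 11:14:40'
  AND CAR_NUM = '粤VFX350'
  AND CAP_TYPE = 0
ORDER BY CAP_DATE ASC
LIMIT 100000;

If the plan reports a RANGE SCAN over the index table rather than a FULL SCAN over C_PICRECORD, the index is being used; the exact plan text varies by Phoenix version.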

The error is triggered by the ORDER BY clause: for an ORDER BY ... LIMIT query, Phoenix performs a TopN sort inside the RegionServer coprocessor (ScanRegionObserver.getTopNScanner in the stack trace above), and the memory for that sort is reserved from the Phoenix global memory pool. When the reservation cannot be satisfied within the 10000ms wait, the query fails with InsufficientMemoryException.

The following thread on the Phoenix user Google group suggests a fix:
https://groups.google.com/forum/#!topic/phoenix-hbase-user/KEiDCLVXf6k

Raise phoenix.query.maxGlobalMemoryPercentage to 40 (i.e., 40% of the RegionServer heap):

<property>
    <name>phoenix.query.maxGlobalMemoryPercentage</name>
    <value>40</value>
</property>
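As a rough back-of-the-envelope check (my own estimate, not from the thread): the pool is sized as a percentage of the RegionServer heap, and the error above reports a global pool of 781752729 bytes (~0.73 GB). Assuming that pool came from the default value of 15, the heap is roughly 5.2 GB, so raising the setting to 40 grows the pool to about 2.1 GB, well above the 142000000-byte (~135 MB) allocation that failed:

global pool (from the error message)     781752729 bytes   ≈ 0.73 GB
implied RegionServer heap (at 15%)       781752729 / 0.15  ≈ 5.2 GB
pool after raising the setting to 40%    5.2 GB * 0.40     ≈ 2.1 GB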

[Note] The property must be set in hbase-site.xml on the RegionServers of the HBase cluster, and the cluster must be restarted for the change to take effect.

After the change, the exception has not recurred so far.
