Installing Phoenix on CDH 5.8 HBase (deployed on CDH 5.9)



1. Introduction to Phoenix
The native HBase Java API and shell syntax are awkward to work with. Phoenix can be thought of as a middleware layer that provides an alternative, SQL-like way of accessing HBase. This document describes how to install Phoenix in a CDH environment.

2. Download the CDH build of Phoenix
https://github.com/chiastic-security/phoenix-for-cloudera/tree/4.8-HBase-1.2-cdh5.8
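
If you prefer to fetch the source with git instead of downloading an archive, a minimal sketch (the branch name is taken from the URL above; the local directory name is simply what git creates):

git clone https://github.com/chiastic-security/phoenix-for-cloudera.git
cd phoenix-for-cloudera
git checkout 4.8-HBase-1.2-cdh5.8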

3. Build the source
mvn clean package -DskipTests -Dcdh.flume.version=1.6.0
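
Run this from the root of the source tree checked out above. After a successful build, the assembly tarball used in the next step should appear under phoenix-assembly/target; a quick check (the exact file name depends on the version built):

ls phoenix-assembly/target/phoenix-4.8.0-cdh5.8.0.tar.gz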

4. After the build completes, extract the assembly tarball IdeaWorkspace\phoenix-for-cloudera-4.8-HBase-1.2-cdh5.8\phoenix-assembly\target\phoenix-4.8.0-cdh5.8.0.tar.gz to obtain the phoenix-4.8.0-cdh5.8.0 directory.
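
On a Linux node the tarball can be unpacked with tar, for example (the target directory /opt is just an example):

tar -zxvf phoenix-4.8.0-cdh5.8.0.tar.gz -C /opt
ls /opt/phoenix-4.8.0-cdh5.8.0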


5. Copy phoenix-4.8.0-cdh5.8.0-server.jar from the phoenix-4.8.0-cdh5.8.0 directory to /opt/cloudera/parcels/CDH/lib/hbase/lib on every RegionServer.
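
A sketch of distributing the jar from the node where the tarball was unpacked, assuming passwordless SSH and placeholder RegionServer hostnames rs1, rs2, rs3:

for host in rs1 rs2 rs3; do
  scp /opt/phoenix-4.8.0-cdh5.8.0/phoenix-4.8.0-cdh5.8.0-server.jar \
      $host:/opt/cloudera/parcels/CDH/lib/hbase/lib/
done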

6. Restart the HBase cluster.
7. Copy the phoenix-4.8.0-cdh5.8.0 directory to a directory on one of the cluster nodes, change into phoenix-4.8.0-cdh5.8.0/bin, and use sqlline.py to connect to HBase; the connection succeeds.

./sqlline.py k1:2181/hbase
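
Here k1 is the ZooKeeper host and /hbase the znode parent; adjust both for your cluster. Once connected, a simple smoke test can be run by passing a SQL file as the second argument to sqlline.py (the table name here is arbitrary):

cat > /tmp/phoenix_smoke_test.sql <<'EOF'
CREATE TABLE IF NOT EXISTS smoke_test (id BIGINT NOT NULL PRIMARY KEY, name VARCHAR);
UPSERT INTO smoke_test VALUES (1, 'phoenix');
SELECT * FROM smoke_test;
EOF
./sqlline.py k1:2181/hbase /tmp/phoenix_smoke_test.sql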



8. An error occurred when running the step above:

./sqlline.py 10.17.131.114:2181
Setting property: [incremental, false]
Setting property: [isolation, TRANSACTION_READ_COMMITTED]
issuing: !connect jdbc:phoenix:10.17.131.114:2181 none none org.apache.phoenix.jdbc.PhoenixDriver
Connecting to jdbc:phoenix:10.17.131.114:2181
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:tx/run/phoenix-4.8.0-cdh5.8.0/phoenix-4.8.0-cdh5.8.0-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/jars/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/12/06 10:19:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Error: org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataRegionObserver cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1728)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1589)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1518)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:469)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165) (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: org.apache.hadoop.hbase.DoNotRetryIOException: Class org.apache.phoenix.coprocessor.MetaDataRegionObserver cannot be loaded Set hbase.table.sanity.checks to false at conf or table descriptor if you want to bypass sanity checks
        at org.apache.hadoop.hbase.master.HMaster.warnOrThrowExceptionForFailure(HMaster.java:1728)
        at org.apache.hadoop.hbase.master.HMaster.sanityCheckTableDescriptor(HMaster.java:1589)
        at org.apache.hadoop.hbase.master.HMaster.createTable(HMaster.java:1518)
        at org.apache.hadoop.hbase.master.MasterRpcServices.createTable(MasterRpcServices.java:469)
        at org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:55682)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2170)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:109)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:185)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:165)

        at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:111)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.ensureTableCreated(ConnectionQueryServicesImpl.java:1104)
        at org.apache.phoenix.query.ConnectionQueryServicesImpl.createTable(ConnectionQueryServicesImpl.java:1396)
        at org.apache.phoenix.schema.MetaDataClient.createTableInternal(MetaDataClient.java:2302)
        at org.apache.phoenix.schema.MetaDataClient.createTable(MetaDataClient.java:922)
        at org.apache.phoenix.compile.CreateTableCompiler$2.execute(CreateTableCompiler.java:194)
        at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:343)
        at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:331)
        at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
        at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:330)
        at org.apache.phoenix.jdbc.PhoenixStatement.executeUpdate(PhoenixStatement.java:1421)
     

The client hits the hbase.table.sanity.checks error shown above; after setting this parameter to false on the client, the client connects normally.
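
A sketch of the client-side setting, written to an hbase-site.xml in the Phoenix bin directory (assuming sqlline.py picks that directory up on its classpath; the install path is a placeholder):

cat > /opt/phoenix-4.8.0-cdh5.8.0/bin/hbase-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>hbase.table.sanity.checks</name>
    <value>false</value>
  </property>
</configuration>
EOF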

