Developing HBase Applications with MyEclipse


When a third-party client accesses HBase, it must first talk to ZooKeeper, because HBase keeps its critical metadata there. The ZooKeeper ensemble information is specified in the $HBASE_HOME/conf/hbase-site.xml file, so the location of the HBase configuration directory, $HBASE_HOME/conf, must be put on the classpath.
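If putting the configuration directory on the classpath is inconvenient, the same settings can also be supplied in code. A minimal sketch, assuming a pseudo-distributed setup with ZooKeeper listening on localhost:2181 (adjust the values to your own cluster):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

// HBaseConfiguration.create() reads hbase-site.xml from the classpath if present;
// otherwise the ZooKeeper connection details can be set by hand:
Configuration cfg = HBaseConfiguration.create();
cfg.set("hbase.zookeeper.quorum", "localhost");          // ZooKeeper host(s)
cfg.set("hbase.zookeeper.property.clientPort", "2181");  // ZooKeeper client port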
When programming against the HBase client API, the JAR packages identified below are required by the program. Beyond these, JARs such as commons-configuration and slf4j are also frequently needed. The following are the JARs required for hbase-1.2.4:

(Figures: the required JAR packages. Judging from the project classpath recorded in the run log, they include zookeeper-3.4.6.jar, log4j-1.2.17.jar, commons-logging-1.2.jar, commons-lang-2.6.jar, hadoop-common-2.7.1.jar, hbase-common-1.2.4.jar, hbase-client-1.2.4.jar, hbase-protocol-1.2.4.jar, hbase-server-1.2.4.jar, guava-11.0.2.jar, protobuf-java-2.5.0.jar, commons-collections-3.2.2.jar, commons-configuration-1.6.jar, slf4j-api-1.7.10.jar, slf4j-log4j12-1.7.10.jar, hadoop-auth-2.7.1.jar, htrace-core-3.1.0-incubating.jar, and netty-all-4.0.23.Final.jar.)

These JARs come mainly from two places:
1. the ×/hadoop-2.7.1/share/hadoop/common and ×/hadoop-2.7.1/share/hadoop/common/lib directories;
2. the ×/hbase-1.2.4/lib directory.
If the program reports a missing class, identify the JAR that contains it from the error message (one way to search the JARs is sketched below) and add that JAR to the IDE through the build path.
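If you are unsure which JAR provides a missing class, one way to search is the following sketch, assuming a Unix shell; HBaseConfiguration.class is just an example class name:

for j in $HBASE_HOME/lib/*.jar; do
  # list each archive's contents and print the JARs containing the class
  unzip -l "$j" 2>/dev/null | grep -q "HBaseConfiguration.class" && echo "$j"
done

The same loop works over the Hadoop common and common/lib directories listed above.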
The following walks through the concrete configuration using an example Java project, HBaseTestCase.
(1) Add the JAR packages
There are two ways to add the JARs. The simpler one: right-click the HBase project, choose Build Path -> Configure Build Path, click the Libraries tab in the resulting dialog, click the Add External JARs button, navigate to the $HBASE_HOME/lib directory, and select the JARs listed above, as shown in the figure:
(Figure: the Add External JARs dialog on the Libraries tab)
(2) Add the hbase-site.xml configuration file
Create a Conf folder under the project directory and copy the hbase-site.xml file from the $HBASE_HOME/conf/ directory into it. Then right-click the project, choose Properties -> Java Build Path -> Libraries -> Add Class Folder, and check the Conf folder to add it.
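A quick way to verify that the Conf folder really ended up on the classpath is to print a property your hbase-site.xml sets; a minimal sketch, using hbase.rootdir, which pseudo-distributed setups normally override:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

Configuration cfg = HBaseConfiguration.create();
// Should print the value from your hbase-site.xml (e.g. an hdfs:// URL),
// not the built-in default inherited from hbase-default.xml:
System.out.println(cfg.get("hbase.rootdir"));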
From here on, you can call the HBase API just as you would in any ordinary Java program. You can also use the familiar HBase shell to interact with what the program does.
The preliminary setup is covered in an earlier post of mine, so I will not repeat it here. First start HDFS, then start the HBase service. Blog post: Installing HBase in pseudo-distributed mode.
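For reference, in a typical pseudo-distributed installation the two services are started with the stock scripts, assuming HADOOP_HOME and HBASE_HOME are set:

$HADOOP_HOME/sbin/start-dfs.sh    # start HDFS first
$HBASE_HOME/bin/start-hbase.sh    # then start HBase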
The following is a simple use case for the HBaseTestCase Java project. It performs the same basic table operations as the HBase shell, except that here the functionality is implemented by calling the HBase API from Java code.
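For comparison, the same sequence of table operations typed into the HBase shell would look like this (table, row, and column names match the program below):

create 'hbase_tb', 'cf'
put 'hbase_tb', 'row1', 'cf:cl1', 'data'
get 'hbase_tb', 'row1'
scan 'hbase_tb'
disable 'hbase_tb'
drop 'hbase_tb'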
First, the system environment:
OS: Ubuntu 16.04
Hadoop: hadoop-2.7.1
HBase: hbase-1.2.4
MyEclipse:

MyEclipse Enterprise Workbench, Version: 2015 Stable 2.0, Build id: 13.0.0-20150518

Next, the structure of the project, as shown in the figure:
(Figure: the HBaseTestCase project structure)
Below is the complete source code of the program:

package cn.edn.ruc.clodcomputing.book.chapter12;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.MasterNotRunningException;
import org.apache.hadoop.hbase.ZooKeeperConnectionException;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseTestCase {
    // Shared configuration; reads hbase-site.xml from the classpath
    static Configuration cfg = HBaseConfiguration.create();

    // Create a table with a single column family, unless it already exists
    public static void create(String tablename, String columnFamily)
            throws MasterNotRunningException, ZooKeeperConnectionException, IOException {
        HBaseAdmin admin = new HBaseAdmin(cfg);
        if (admin.tableExists(tablename)) {
            System.out.println("table Exists");
            System.exit(0);
        } else {
            HTableDescriptor tableDesc = new HTableDescriptor(tablename);
            tableDesc.addFamily(new HColumnDescriptor(columnFamily));
            admin.createTable(tableDesc);
            System.out.println("create table success");
        }
    }

    // Put a single cell: (row, columnFamily:column) = data
    public static void put(String tablename, String row, String columnFamily,
            String column, String data) throws IOException {
        HTable table = new HTable(cfg, tablename);
        Put p1 = new Put(Bytes.toBytes(row));
        p1.add(Bytes.toBytes(columnFamily), Bytes.toBytes(column), Bytes.toBytes(data));
        table.put(p1);
        System.out.println("put '" + row + "','" + columnFamily + ":" + column + "','" + data + "'");
    }

    // Get one row by its row key and print the result
    public static void get(String tablename, String row) throws IOException {
        HTable table = new HTable(cfg, tablename);
        Get g = new Get(Bytes.toBytes(row));
        Result result = table.get(g);
        System.out.println("Get: " + result);
    }

    // Scan the whole table and print each row
    public static void scan(String tablename) throws IOException {
        HTable table = new HTable(cfg, tablename);
        Scan s = new Scan();
        ResultScanner rs = table.getScanner(s);
        for (Result r : rs) {
            System.out.println("Scan: " + r);
        }
    }

    // Disable and drop the table; returns false if deletion fails
    public static boolean delete(String tablename)
            throws MasterNotRunningException, ZooKeeperConnectionException, IOException {
        HBaseAdmin admin = new HBaseAdmin(cfg);
        if (admin.tableExists(tablename)) {
            try {
                admin.disableTable(tablename); // a table must be disabled before it can be deleted
                admin.deleteTable(tablename);
            } catch (Exception ex) {
                ex.printStackTrace();
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        String tablename = "hbase_tb";
        String columnFamily = "cf";
        try {
            HBaseTestCase.create(tablename, columnFamily);
            HBaseTestCase.put(tablename, "row1", columnFamily, "cl1", "data");
            HBaseTestCase.get(tablename, "row1");
            HBaseTestCase.scan(tablename);
            if (true == HBaseTestCase.delete(tablename)) {
                System.out.println("Delete table:" + tablename + "success!");
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

The class consists of five static methods (create(), put(), get(), scan(), and delete()) plus a main() method that exercises each of them in turn.
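A side note: the HBaseAdmin(Configuration) and HTable(Configuration, String) constructors used above still work in hbase-1.2.4 but are already deprecated there. A minimal sketch of the newer connection-oriented style for the put step (same table, family, and column as the example above; this is an alternative, not how the original program is written):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class PutExample {
    public static void main(String[] args) throws IOException {
        Configuration cfg = HBaseConfiguration.create();
        // try-with-resources closes the connection and table automatically
        try (Connection conn = ConnectionFactory.createConnection(cfg);
             Table table = conn.getTable(TableName.valueOf("hbase_tb"))) {
            Put p = new Put(Bytes.toBytes("row1"));
            // addColumn replaces the deprecated Put.add(byte[], byte[], byte[])
            p.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("cl1"), Bytes.toBytes("data"));
            table.put(p);
        }
    }
}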
The output of a run is shown below (condensed here: the ZooKeeper client-environment dump with its long java.class.path listing, and runs of repeated DEBUG lines, are elided):

2016-11-30 05:29:53,315 [org.apache.hadoop.util.NativeCodeLoader]-[WARN] Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-11-30 05:29:54,418 [org.apache.hadoop.security.UserGroupInformation]-[DEBUG] UGI loginUser:dtw (auth:SIMPLE)
2016-11-30 05:29:54,581 [org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper]-[INFO] Process identifier=hconnection-0x6a871b75 connecting to ZooKeeper ensemble=localhost:2181
2016-11-30 05:29:54,595 [org.apache.zookeeper.Environment]-[INFO] Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT
[... remaining Client environment lines, including the full java.class.path, elided ...]
2016-11-30 05:29:54,603 [org.apache.zookeeper.ZooKeeper]-[INFO] Initiating client connection, connectString=localhost:2181 sessionTimeout=90000 watcher=hconnection-0x6a871b750x0, quorum=localhost:2181, baseZNode=/hbase
2016-11-30 05:29:54,700 [org.apache.zookeeper.ClientCnxn$SendThread]-[INFO] Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x158b1fe53f10006, negotiated timeout = 90000
[... DEBUG lines elided, including repeated polling of the /hbase/meta-region-server znode until it appears ...]
2016-11-30 05:29:58,946 [org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection]-[DEBUG] Connecting to dtw/127.0.1.1:16201
2016-11-30 05:29:59,352 [org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection]-[DEBUG] Connecting to dtw/127.0.1.1:16000
2016-11-30 05:30:03,985 [org.apache.hadoop.hbase.client.HBaseAdmin$CreateTableFuture]-[INFO] Created hbase_tb
create table success
put 'row1','cf:cl1','data'
Get: keyvalues={row1/cf:cl1/1480455004219/Put/vlen=4/seqid=0}
Scan: keyvalues={row1/cf:cl1/1480455004219/Put/vlen=4/seqid=0}
2016-11-30 05:30:04,341 [org.apache.hadoop.hbase.client.HBaseAdmin$9]-[INFO] Started disable of hbase_tb
2016-11-30 05:30:06,664 [org.apache.hadoop.hbase.client.HBaseAdmin$DisableTableFuture]-[INFO] Disabled hbase_tb
2016-11-30 05:30:08,983 [org.apache.hadoop.hbase.client.HBaseAdmin$DeleteTableFuture]-[INFO] Deleted hbase_tb
Delete table:hbase_tbsuccess!
2016-11-30 05:30:08,994 [org.apache.hadoop.ipc.Client]-[DEBUG] Stopping client

That concludes everything on developing HBase applications with MyEclipse. I hope it helps beginners!
