Getting Started with HBase on Hadoop


Prerequisite: this post assumes Hadoop has already been installed successfully, as covered in the previous post. It describes how to set up HBase and integrate it with ZooKeeper.

1: First download the HBase and ZooKeeper packages. Extract the HBase archive, enter its conf directory, and make the following changes to hbase-env.sh:

export JAVA_HOME=/home/jdk1.6.0_13

==== the following lines are newly added:

export HBASE_HOME=/home/yf/hbase-0.94.3
export PATH=$PATH:/home/yf/hbase-0.94.3/bin
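This step can also be scripted. The sketch below uses a scratch directory in place of the real installation so it is safe to run anywhere; substitute your actual paths (e.g. /home/yf/hbase-0.94.3):

```shell
# A minimal sketch of step 1. A scratch directory stands in for the real
# installation; point the commands at your actual HBase directory instead.
HBASE_HOME="$(mktemp -d)/hbase-0.94.3"
mkdir -p "$HBASE_HOME/conf"
printf 'export JAVA_HOME=/home/jdk1.6.0_13\n' > "$HBASE_HOME/conf/hbase-env.sh"

# Append the two lines added in this tutorial:
cat >> "$HBASE_HOME/conf/hbase-env.sh" <<EOF
export HBASE_HOME=$HBASE_HOME
export PATH=\$PATH:$HBASE_HOME/bin
EOF

grep -c '^export' "$HBASE_HOME/conf/hbase-env.sh"    # prints 3
```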


2: Modify hbase-site.xml and add the following:

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://bida:9100/hbase</value>
</property>
<property>
  <name>hbase.tmp.dir</name>
  <value>hdfs://bida:9100/tmp</value>
  <description>Temporary directory.</description>
</property>
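Note that these properties must sit inside a single `<configuration>` root element (the snippet above shows only the properties themselves). A sketch of the complete file, written to a scratch path and parsed to confirm it is well-formed (python3 is assumed to be available for the check; the real file belongs in HBase's conf directory):

```shell
# Write a complete hbase-site.xml (values from this tutorial) to a scratch
# location; in a real setup this is $HBASE_HOME/conf/hbase-site.xml.
site=$(mktemp)
cat > "$site" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://bida:9100/hbase</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>hdfs://bida:9100/tmp</value>
  </property>
</configuration>
EOF

# Parse it to confirm the XML is well-formed; prints the root tag name:
python3 -c 'import sys, xml.etree.ElementTree as ET; print(ET.parse(sys.argv[1]).getroot().tag)' "$site"
```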

Save the file, then start HBase from its bin directory and verify that everything is installed correctly:

1. Confirm that Hadoop is already running.
2. If it is not, run "bin/start-all.sh" from the Hadoop installation directory to start Hadoop.
3. From the HBase installation directory, run "bin/start-hbase.sh" to start HBase.
4. From the HBase installation directory, run "bin/hbase shell" to enter the shell.
5. In the shell, enter "create 'test', 'data'", then verify the result with the "list" command.

The session looks like this:
[root@bida bin]# ./hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.3, r1408904, Wed Nov 14 19:55:11 UTC 2012

hbase(main):001:0> list
TABLE                                                                                                                               
0 row(s) in 0.6520 seconds

hbase(main):002:0> create 'test', 'data'
0 row(s) in 1.2060 seconds

hbase(main):003:0> list
TABLE                                                                                                                               
test                                                                                                                                
1 row(s) in 0.0350 seconds

hbase(main):004:0> put 'test', 'row1', 'data:1', 'value1'
0 row(s) in 0.0850 seconds

hbase(main):005:0> put 'test', 'row2', 'data:2', 'value2'
0 row(s) in 0.0150 seconds

hbase(main):006:0> put 'test', 'row3', 'data:3', 'value3'
0 row(s) in 0.0030 seconds

hbase(main):007:0> scan 'test'
ROW                                COLUMN+CELL                                                                                      
 row1                              column=data:1, timestamp=1358484999214, value=value1                                             
 row2                              column=data:2, timestamp=1358485004710, value=value2                                             
 row3                              column=data:3, timestamp=1358485005165, value=value3                                             
3 row(s) in 0.0680 seconds


Once HBase is integrated with Hadoop, you can also see HBase's directory in HDFS from the Hadoop side:

[root@bida conf]# cd ../../hadoop-1.0.3/bin
[root@bida bin]# ./hadoop fs -ls /
Warning: $HADOOP_HOME is deprecated.

Found 4 items
drwxr-xr-x   - root supergroup          0 2013-01-18 12:56 /hbase
drwxr-xr-x   - root supergroup          0 2013-01-16 18:53 /home
drwxr-xr-x   - root supergroup          0 2013-01-17 14:29 /tmp
drwxr-xr-x   - root supergroup          0 2013-01-17 14:47 /user
[root@bida bin]# ./hadoop fs -ls /hbase
Warning: $HADOOP_HOME is deprecated.

Found 7 items
drwxr-xr-x   - root supergroup          0 2013-01-18 12:35 /hbase/-ROOT-
drwxr-xr-x   - root supergroup          0 2013-01-18 12:35 /hbase/.META.
drwxr-xr-x   - root supergroup          0 2013-01-18 12:35 /hbase/.logs
drwxr-xr-x   - root supergroup          0 2013-01-18 12:35 /hbase/.oldlogs
-rw-r--r--   1 root supergroup         38 2013-01-18 12:35 /hbase/hbase.id
-rw-r--r--   1 root supergroup          3 2013-01-18 12:35 /hbase/hbase.version
drwxr-xr-x   - root supergroup          0 2013-01-18 12:56 /hbase/test


=================================

If hbase-site.xml is not written as well-formed XML, you will get an exception like the following:

[Fatal Error] hbase-site.xml:35:2: The markup in the document following the root element must be well-formed.
13/01/18 12:24:28 FATAL conf.Configuration: error parsing conf file: org.xml.sax.SAXParseException: The markup in the document following the root element must be well-formed.
Exception in thread "main" java.lang.RuntimeException: org.xml.sax.SAXParseException: The markup in the document following the root element must be well-formed.
        at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:1263)
        at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:1129)
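This error means the XML parser found content after (or outside) the single root element, typically an extra or unclosed tag. You can catch this before starting HBase by parsing the file yourself; a sketch using Python's standard-library parser (assumes python3 is installed; the demo files are illustrative):

```shell
# Report whether a Hadoop/HBase XML config file is well-formed.
check_xml() {
  if python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' "$1" 2>/dev/null; then
    echo "OK: $1"
  else
    echo "MALFORMED: $1"
  fi
}

# Demonstration with a deliberately broken file (unclosed <property> tag):
printf '<configuration></configuration>\n' > /tmp/good-site.xml
printf '<configuration><property></configuration>\n' > /tmp/bad-site.xml
check_xml /tmp/good-site.xml    # OK: /tmp/good-site.xml
check_xml /tmp/bad-site.xml     # MALFORMED: /tmp/bad-site.xml
```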


The following exceptions in HBase's logs are caused by the ZooKeeper settings in hbase-site.xml; until ZooKeeper is actually installed, it is best to comment those settings out:

2013-01-18 12:27:20,962 INFO org.apache.zookeeper.ClientCnxn: Opening socket connection to server bida/127.0.0.1:2181
2013-01-18 12:27:20,963 WARN org.apache.zookeeper.client.ZooKeeperSaslClient: SecurityException: java.lang.SecurityException: Unable to locate a login configuration occurred when trying to find JAAS configuration.
2013-01-18 12:27:20,963 INFO org.apache.zookeeper.client.ZooKeeperSaslClient: Client will not SASL-authenticate because the default JAAS configuration section 'Client' could not be found. If you are not using SASL, you may ignore this. On the other hand, if you expected SASL to work, please fix your JAAS configuration.
2013-01-18 12:27:20,963 WARN org.apache.zookeeper.ClientCnxn: Session 0x13c4be8b1cd0004 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:574)
        at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:286)
        at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1035)


13/01/18 12:29:41 ERROR zookeeper.ZooKeeperWatcher: hconnection Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss for /hbase/root-region-server
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
        at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1021)
        at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:166)
        at org.apache.hadoop.hbase.zookeeper.ZKUtil.watchAndCheckExists(ZKUtil.java:230)
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperNodeTracker.start(ZooKeeperNodeTracker.java:82)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.ensureZookeeperTrackers(HConnectionManager.java:595)
        at org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.getMaster(HConnectionManager.java:650)
        at org.apache.hadoop.hbase.client.HBaseAdmin.<init>(HBaseAdmin.java:110)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.jruby.javasupport.JavaConstructor.newInstanceDirect(JavaConstructor.java:275)
        at org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:91)
        at org.jruby.java.invokers.ConstructorInvoker.call(ConstructorInvoker.java:178)
        at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:322)
        at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:178)
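Before retrying, it can help to confirm whether anything is actually listening on the ZooKeeper client port (2181 by default). A sketch using bash's /dev/tcp redirection (so bash is assumed; the host and port are the ones from the logs above):

```shell
# Return success if HOST:PORT accepts a TCP connection.
zk_reachable() {  # usage: zk_reachable HOST PORT
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

if zk_reachable 127.0.0.1 2181; then
  echo "a server is listening on 2181"
else
  echo "nothing is listening on 2181 -- start ZooKeeper, or comment out the ZooKeeper settings in hbase-site.xml"
fi
```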


The following problem occurs because hbase.rootdir and hbase.tmp.dir in hbase-site.xml do not match Hadoop's HDFS configuration. You cannot write an IP address in one place and a hostname in the other; they must be identical. For example, my Hadoop core-site.xml contains:

<property>
  <name>fs.default.name</name>
  <value>hdfs://bida:9100</value>
</property>
so the corresponding entry in HBase's hbase-site.xml must be written as:

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://bida:9100/hbase</value>
</property>


2013-01-18 12:32:26,124 FATAL org.apache.hadoop.hbase.master.HMaster: Unhandled exception. Starting shutdown.
java.net.ConnectException: Call to 192.168.9.228/192.168.9.228:9100 failed on connection exception: java.net.ConnectException: Connection refused
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1099)
        at org.apache.hadoop.ipc.Client.call(Client.java:1075)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
        at $Proxy11.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:396)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:379)
        at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:119)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:238)
        at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:203)
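The rule is simply that hbase.rootdir's scheme://host:port prefix must equal fs.default.name string-for-string, so "bida" versus "192.168.9.228" counts as a mismatch even when both resolve to the same machine. A sketch of the comparison, using the values from this tutorial:

```shell
# Values as they appear in core-site.xml and hbase-site.xml:
fs_default_name="hdfs://bida:9100"
hbase_rootdir="hdfs://bida:9100/hbase"

# hbase.rootdir must start with fs.default.name followed by a path:
case "$hbase_rootdir" in
  "$fs_default_name"/*) echo "match" ;;
  *)                    echo "MISMATCH: $hbase_rootdir does not start with $fs_default_name" ;;
esac
```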


You can also view HBase's status in a browser at http://192.168.9.228:60010/master-status.

