Installing ZooKeeper and HBase

Source: 程序博客网 · 2024/05/21 17:03

Prerequisite: a Hadoop cluster is already installed.

Install zookeeper-3.4.5-cdh5.5.2:

hadoop@h21:~$ tar -zxvf zookeeper-3.4.5-cdh5.5.2.tar.gz

Rename zoo_sample.cfg under the zookeeper-3.4.5-cdh5.5.2/conf directory to zoo.cfg:
hadoop@h21:~$ cd zookeeper-3.4.5-cdh5.5.2/conf/
hadoop@h21:~/zookeeper-3.4.5-cdh5.5.2/conf$ mv zoo_sample.cfg zoo.cfg
hadoop@h21:~/zookeeper-3.4.5-cdh5.5.2/conf$ vi zoo.cfg 

Add:

dataDir=/home/hadoop/zookeeper-3.4.5-cdh5.5.2/data
dataLogDir=/home/hadoop/zookeeper-3.4.5-cdh5.5.2/log
server.1=192.168.8.21:2888:3888
server.2=192.168.8.22:2888:3888
server.3=192.168.8.23:2888:3888

Note: the sample configuration file already contains:

tickTime=2000
initLimit=10
syncLimit=5

clientPort=2181


*** Port 2888 is used by followers to connect to the leader (peer-to-peer communication between the ZooKeeper servers), and 3888 is used for leader election among the servers. Clients connect on clientPort 2181.

Create the data and log directories:
hadoop@h21:~/zookeeper-3.4.5-cdh5.5.2$ mkdir -pv data log

Copy the installation to the other nodes:
hadoop@h21:~/zookeeper-3.4.5-cdh5.5.2$ scp -r /home/hadoop/zookeeper-3.4.5-cdh5.5.2/ h22:/home/hadoop
hadoop@h21:~/zookeeper-3.4.5-cdh5.5.2$ scp -r /home/hadoop/zookeeper-3.4.5-cdh5.5.2/ h23:/home/hadoop

Set myid to 1 on node 1, 2 on node 2, and 3 on node 3:
hadoop@h21:~/zookeeper-3.4.5-cdh5.5.2$ vi data/myid
1
hadoop@h22:~/zookeeper-3.4.5-cdh5.5.2$ vi data/myid
2
hadoop@h23:~/zookeeper-3.4.5-cdh5.5.2$ vi data/myid
3
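The three myid files above can also be written non-interactively. A minimal sketch, using the install path from this guide (adjust ZK_HOME if yours differs); the id must match the server.N entry for that host in zoo.cfg:

```shell
# Write this node's id into data/myid; run on each node with its own number.
ZK_HOME=${ZK_HOME:-$HOME/zookeeper-3.4.5-cdh5.5.2}
mkdir -p "$ZK_HOME/data"
echo 1 > "$ZK_HOME/data/myid"    # use 2 on h22 and 3 on h23
```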

Switch to root on every node and give other users write permission on /var:
root@h21:~# chmod 777 /var
root@h22:~# chmod 777 /var
root@h23:~# chmod 777 /var

hadoop@h21:~/zookeeper-3.4.5-cdh5.5.2/bin$ ./zkServer.sh start

JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.5/bin/../conf/zoo.cfg
Starting zookeeper ... ./zkServer.sh: line 103: [: /tmp/zookeeper: binary operator expected
(a normal startup should not produce the "binary operator expected" line)
STARTED
hadoop@h22:~/zookeeper-3.4.5-cdh5.5.2/bin$ ./zkServer.sh start
hadoop@h23:~/zookeeper-3.4.5-cdh5.5.2/bin$ ./zkServer.sh start

Check the status on each of the three nodes:
hadoop@h21:~/zookeeper-3.4.5-cdh5.5.2/bin$ ./zkServer.sh status
hadoop@h22:~/zookeeper-3.4.5-cdh5.5.2/bin$ ./zkServer.sh status
hadoop@h23:~/zookeeper-3.4.5-cdh5.5.2/bin$ ./zkServer.sh status
(You will find that h21 and h23 are followers and h22 is the leader. Why isn't h23 the leader, since it has the largest myid? Because a leader is elected as soon as a majority of servers is running: once h21 and h22 are both up they already form a quorum of 2 out of 3, and h22 wins that election because its myid is the higher of the two. By the time h23 starts, a leader already exists, so h23 simply joins as a follower.)
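Besides zkServer.sh status, each server can be asked for its role directly with ZooKeeper's four-letter command srvr (available in 3.4.x). A sketch that assumes nc (netcat) is installed on h21; it prints "unreachable" for any host it cannot contact:

```shell
# Query every server's role over the client port.
for host in h21 h22 h23; do
  printf '%s: ' "$host"
  echo srvr | nc -w 2 "$host" 2181 2>/dev/null | grep Mode || echo unreachable
done
```

On a healthy ensemble each line ends in "Mode: leader" or "Mode: follower".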

Note: on CentOS and RedHat you must turn off the firewall and SELinux, otherwise checking the status reports this error:

JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.5-cdh5.5.2/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
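The firewall and SELinux can be turned off with the following CentOS/RHEL 6-style commands, run as root on every node. This is a sketch; it is guarded so it only acts when actually run as root on a system that has SELinux configured:

```shell
# Disable the firewall and SELinux (root only, SELinux-enabled systems only).
if [ "$(id -u)" -eq 0 ] && [ -f /etc/selinux/config ]; then
  service iptables stop                  # stop the firewall now
  chkconfig iptables off                 # keep it off after a reboot
  setenforce 0                           # put SELinux in permissive mode immediately
  sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config  # persist across reboots
fi
```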


Install hbase-1.0.0-cdh5.5.2:
hadoop@h21:~$ tar -zxvf hbase-1.0.0-cdh5.5.2.tar.gz
hadoop@h21:~$ cd hbase-1.0.0-cdh5.5.2/
hadoop@h21:~/hbase-1.0.0-cdh5.5.2$ vi conf/hbase-env.sh
Add:
export JAVA_HOME="/usr/jdk1.7.0_25"
export HBASE_MANAGES_ZK=false   # do not use HBase's bundled ZooKeeper

hadoop@h21:~/hbase-1.0.0-cdh5.5.2$ vi  conf/hbase-site.xml
Add:

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://h21:9000/hbase</value>
</property>
<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>
<property>
  <name>hbase.zookeeper.quorum</name>
  <value>h21,h22,h23</value>
</property>
<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/home/hadoop/hbase-1.0.0-cdh5.5.2/data</value>
</property>
<property>
  <name>hbase.tmp.dir</name>
  <value>/home/hadoop/hbase-1.0.0-cdh5.5.2/tmp</value>
</property>

hadoop@h21:~/hbase-1.0.0-cdh5.5.2$ mkdir data
hadoop@h21:~/hbase-1.0.0-cdh5.5.2$ mkdir tmp

hadoop@h21:~/hbase-1.0.0-cdh5.5.2$ vi conf/regionservers
h22
h23

Copy the installation to the other two nodes:
hadoop@h21:~$ scp -r hbase-1.0.0-cdh5.5.2/ h22:/home/hadoop
hadoop@h21:~$ scp -r hbase-1.0.0-cdh5.5.2/ h23:/home/hadoop

hadoop@h21:~/hbase-1.0.0-cdh5.5.2$ bin/start-hbase.sh
(At first, jps on h22 and h23 showed no HRegionServer process; only after I synchronized the time on all three virtual machines and rebooted did the process appear.)
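Clock skew is worth checking before blaming anything else: the HMaster rejects regionservers whose clocks drift beyond hbase.master.maxclockskew (30000 ms by default). A quick sketch, run from h21, assuming passwordless ssh to the other nodes (already required for the scp steps above); it falls back to the local clock if a host is unreachable:

```shell
# Print each node's clock as a Unix timestamp; the values should be within
# a few seconds of each other.
for host in h21 h22 h23; do
  printf '%s: ' "$host"
  ssh -o BatchMode=yes -o ConnectTimeout=2 "$host" date +%s 2>/dev/null || date +%s
done
```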

hadoop@h21:~/hbase-1.0.0-cdh5.5.2$ jps
2810 Jps
2625 QuorumPeerMain
2000 SecondaryNameNode
2696 HMaster
1834 NameNode
2138 ResourceManager

root@h22:~# jps
1408 QuorumPeerMain
1148 DataNode
1562 Jps
1218 NodeManager
1485 HRegionServer

root@h23:~# jps
1460 HRegionServer
1143 DataNode
1589 Jps
1213 NodeManager
1394 QuorumPeerMain
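Once all the expected processes show up in jps, a short HBase shell session can verify the cluster end to end. A sketch only: the table name 'smoke' and column family 'cf' are made-up examples, and the snippet is guarded so it only runs where the hbase binary from this guide actually exists:

```shell
# Smoke test: create a throwaway table, write and read one cell, drop it.
HBASE_HOME=${HBASE_HOME:-/home/hadoop/hbase-1.0.0-cdh5.5.2}
if [ -x "$HBASE_HOME/bin/hbase" ]; then
  "$HBASE_HOME/bin/hbase" shell <<'EOF'
create 'smoke', 'cf'
put 'smoke', 'r1', 'cf:q', 'v1'
get 'smoke', 'r1'
disable 'smoke'
drop 'smoke'
EOF
fi
```

If the get returns the value v1, the master, the regionservers, HDFS, and ZooKeeper are all cooperating.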
