HBase cluster setup


This is mainly a hands-on write-up, the record of one server deployment; corrections for anything wrong are welcome!
Preparation:
4 machines; here we use 151 (master), 100, 101, and 102.
OS:
CentOS release 6.5 (Final)
Check the version with:
# cat /etc/issue
Software:
jdk1.7.0_79
hadoop-2.6.0
hbase-1.0.1.1
zookeeper-3.4.6
For the related software downloads, contact call_wss@qq.com.

Important notes:
1. Because this is a cluster, every machine gets the same configuration. We configure everything on master, then scp (copy) it to the slave machines; special cases are called out where they occur.
2. All operations are performed as root.
3. The config files are finicky; text copied from elsewhere can bring along stray spaces or odd byte sequences.

Steps:
1. Unpack the software
2. Machine configuration
3. Environment variables
4. Configure hadoop
5. Configure zookeeper
6. Configure hbase
7. Done

I. Extract the downloaded archives into /home/hadoop/software/ (any directory you choose will do):
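A minimal sketch, assuming the standard .tar.gz distribution archives (adjust the file names to whatever you downloaded):

    # mkdir -p /home/hadoop/software
    # tar -zxvf jdk-7u79-linux-x64.tar.gz -C /home/hadoop/software/
    # tar -zxvf hadoop-2.6.0.tar.gz -C /home/hadoop/software/
    # tar -zxvf hbase-1.0.1.1-bin.tar.gz -C /home/hadoop/software/
    # tar -zxvf zookeeper-3.4.6.tar.gz -C /home/hadoop/software/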

II. Machine configuration
1. IP configuration (do this on every machine)
Set the NIC to a static IP (NAT mode):
# ifconfig                  # check the NIC's current IP
# vi /etc/sysconfig/network-scripts/ifcfg-eth0
ONBOOT=yes                  # bring the NIC up at boot
BOOTPROTO=static            # change DHCP to static
IPADDR=192.168.17.129
NETMASK=255.255.255.0
GATEWAY=192.168.17.2
DNS1=192.168.17.2           # first DNS server, same as the gateway
DNS2=202.96.209.5
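Restart networking so the static address takes effect, then confirm it (CentOS 6):

    # service network restart
    # ifconfig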

2. Hostnames (do this on every machine)
Suggested names:
    first machine:  master
    second machine: master-slave01
    third machine:  master-slave02
    fourth machine: master-slave03
# vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=master
3. hosts mapping (do this on every machine)
# vim /etc/hosts
192.168.xx.99   master
192.168.xx.100  master-slave01
192.168.xx.101  master-slave02
192.168.xx.102  master-slave03
4. Disable iptables and selinux [root user] [every machine]
# service iptables stop     -- stop the firewall service
# chkconfig iptables off    -- keep iptables from starting at boot
# vi /etc/sysconfig/selinux
SELINUX=disabled
If security is a concern, open the needed ports yourself instead.
5. Passwordless ssh login [every machine]
# ssh-keygen -t rsa         -- press Enter at every prompt to generate a key pair
    ** /root/.ssh/
    id_rsa  id_rsa.pub
Copy your public key to the other machines:
# ssh-copy-id master-slave01
# ssh-copy-id master-slave02
# ssh-copy-id master-slave03
6. Time synchronization
[master]
1) Sync this server's time:
# ntpdate cn.pool.ntp.org
25 Aug 14:47:41 ntpdate[10105]: step time server 202.112.29.82 offset -9.341897 sec
2) Check the package:
# rpm -qa | grep ntp        -- is the ntp package installed?
ntp-4.2.4p8-3.el6.centos.x86_64
# yum -y install ntp        -- install ntp if it is missing
3) Edit the ntp config:
# vi /etc/ntp.conf
#### uncomment the following line and change the network to your own
restrict 192.168.17.0 mask 255.255.255.0 nomodify notrap
#### comment out these lines
#server 0.centos.pool.ntp.org
#server 1.centos.pool.ntp.org
#server 2.centos.pool.ntp.org
#### uncomment these two lines; add them if they are missing
server  127.127.1.0     # local clock
fudge   127.127.1.0 stratum 10
4) Restart the ntp service:
# service ntpd start
# chkconfig ntpd on
[master-slave01, master-slave02, master-slave03]
** sync against the master time server
# service ntpd stop
# chkconfig ntpd off
# ntpdate master            -- sync time from the first server
25 Aug 15:16:47 ntpdate[2092]: adjust time server 192.168.17.129 offset 0.311666 sec
Schedule a cron job to sync periodically:
# crontab -e
*/10 * * * * /usr/sbin/ntpdate master
[minute hour day month weekday]
# service crond restart
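A quick check, from master, that passwordless ssh and the time sync both work (hostnames as configured above):

    # for h in master-slave01 master-slave02 master-slave03; do ssh $h 'hostname; date'; done

Each host should answer without a password prompt, and the clocks should agree to within a few seconds.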

III. Environment variables
For unified management, all environment variables go in /root/.bashrc.
# vim /root/.bashrc
Add the following:
export SOFT_HOME=/home/hadoop/software/

export JAVA_HOME=$SOFT_HOME/jdk1.7.0_79/
export JAVA_JRE=$JAVA_HOME/jre
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_HOME=$SOFT_HOME/hadoop-2.6.0/
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HBASE_HOME=$SOFT_HOME/hbase-1.0.1.1/
export PATH=$PATH:$HBASE_HOME/bin
export ZOOKEEPER_HOME=$SOFT_HOME/zookeeper-3.4.6/
export PATH=$PATH:$ZOOKEEPER_HOME/bin
Apply the changes:
# source /root/.bashrc
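A quick way to confirm the variables took effect (versions per the software list above):

    # java -version         -- should report 1.7.0_79
    # hadoop version        -- should report 2.6.0
    # echo $HBASE_HOME $ZOOKEEPER_HOME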

IV. Configure hadoop
The hadoop configuration lives under hadoop-2.6.0/etc/hadoop/.
Files involved:
core-site.xml
hdfs-site.xml
mapred-site.xml (does not exist by default; copy it from mapred-site.xml.template)
yarn-site.xml
slaves
yarn-env.sh
hadoop-env.sh

1. core-site.xml
<property>
    <name>fs.trash.interval</name>
    <value>10080</value>
</property>
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/temp/hadoop/</value>
    <description>A base for other temporary directories.</description>
</property>
<property>
    <name>ha.zookeeper.quorum</name>
    <value>master-slave03,master-slave02,master-slave01</value>
</property>
<property>
    <name>io.compression.codecs</name>
    <value>org.apache.hadoop.io.compress.BZip2Codec</value>
</property>
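A quick sanity check that Hadoop picks the file up (hdfs getconf reads the effective configuration and works even before the daemons start):

    # hdfs getconf -confKey fs.defaultFS    -- should print hdfs://master:9000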
2. hdfs-site.xml
<property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/software/hadoop-2.6.0/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/data1,/data2,/data3</value>
</property>
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:50090</value>
</property>
<!--
<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://master-slave03:8485;master-slave02:8485;master-slave01:8485/cluster1</value>
</property>
-->
<property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/hadoop/temp/journal</value>
</property>
<property>
    <name>dfs.client.failover.proxy.provider.cluster1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
</property>
<property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/root/.ssh/id_rsa</value>
</property>
<property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>30000</value>
</property>
<property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
</property>
<property>
    <name>dfs.datanode.max.transfer.threads</name>
    <value>8192</value>
</property>
<property>
    <name>dfs.block.size</name>
    <value>256m</value>
    <description>Block size. Suffixes k, m, g, t, p, e may be used (e.g. 1g; the default is 128MB).</description>
</property>
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.namenode.replication.min</name>
    <value>1</value>
</property>
<property>
    <name>dfs.replication.max</name>
    <value>30</value>
    <description>Maximum block replication; do not set it higher than the total number of nodes.</description>
</property>
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
<property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>3000000</value>
</property>
<property>
    <name>dfs.socket.timeout</name>
    <value>3000000</value>
</property>
<property>
    <name>dfs.datanode.handler.count</name>
    <value>20</value>
</property>
<property>
    <name>dfs.namenode.handler.count</name>
    <value>60</value>
</property>
3. mapred-site.xml (does not exist by default; copy it from mapred-site.xml.template)
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
</property>
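The two jobhistory addresses only take effect once the history server is running; it is not launched by the other start scripts, so start it separately when needed:

    [root@master hadoop-2.6.0]# sbin/mr-jobhistory-daemon.sh start historyserver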
4. yarn-site.xml
<property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
    <description>Shuffle service for data transfer between nodemanager nodes.</description>
</property>
<property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/home/hadoop/software/hadoop-2.6.0/hadoop/yarn/local</value>
    <description>Where intermediate results are stored.</description>
</property>
<property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/home/hadoop/software/hadoop-2.6.0/hadoop/yarn/logs</value>
    <description>Where logs are stored.</description>
</property>
<property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>98304</value>
    <description>Memory available per node, in MB: containers * RAM-per-container.</description>
</property>
<property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>2.1</value>
    <description>Maximum virtual memory per 1MB of physical memory used by a task; the default is 2.1.</description>
</property>
<property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>6144</value>
    <description>2 * RAM-per-container.</description>
</property>
<property>
    <name>yarn.app.mapreduce.am.command-opts</name>
    <value>-Xmx4906m</value>
    <description>0.8 * 2 * RAM-per-container.</description>
</property>
<property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>3072</value>
    <description>Minimum memory a single task may request; the default is 1024MB. RAM-per-container.</description>
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>40960</value>
    <description>Maximum memory a single task may request; the default is 8192MB. containers * RAM-per-container.</description>
</property>
<property>
    <name>mapreduce.map.memory.mb</name>
    <value>3072</value>
    <description>Physical memory limit for each map task.</description>
</property>
<property>
    <name>mapreduce.map.java.opts</name>
    <value>-Xmx2450m</value>
</property>
<property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>6144</value>
    <description>Physical memory limit for each reduce task.</description>
</property>
<property>
    <name>mapreduce.reduce.java.opts</name>
    <value>-Xmx4906m</value>
</property>
<property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>600</value>
    <description>Buffer size for in-task sorting.</description>
</property>
<property>
    <name>yarn.scheduler.minimum-allocation-vcores</name>
    <value>1</value>
    <description>Minimum number of virtual CPUs a single task may request; the default is 1.</description>
</property>
<property>
    <name>yarn.scheduler.maximum-allocation-vcores</name>
    <value>20</value>
    <description>Maximum number of virtual CPUs a single task may request; the default is 32.</description>
</property>
<property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>24</value>
    <description>Number of virtual CPUs YARN may use; the default is 8. It is currently recommended to set this equal to the number of physical cores.</description>
</property>
<property>
    <name>yarn.nodemanager.resource.percentage-physical-cpu-limit</name>
    <value>80</value>
    <description>Percentage of CPU that can be allocated for containers. This setting allows users to limit the amount of CPU that YARN containers use. Currently functional only on Linux using cgroups. The default is to use 100% of CPU.</description>
</property>
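As a sanity check on the memory arithmetic in the descriptions above, assuming RAM-per-container = 3072MB and 32 containers per node: 32 * 3072 = 98304MB (yarn.nodemanager.resource.memory-mb), 2 * 3072 = 6144MB for the AM container, and 0.8 * 6144 = 4915MB for its heap, which the -Xmx4906m values above approximate. Note also that the mapreduce.* properties are per-job settings normally placed in mapred-site.xml; they are kept here as deployed.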
5. slaves
master-slave01
master-slave02
master-slave03
6. yarn-env.sh
Set JAVA_HOME:
    export JAVA_HOME=/home/hadoop/software/jdk1.7.0_79/
7. hadoop-env.sh
Set JAVA_HOME:
    export JAVA_HOME=/home/hadoop/software/jdk1.7.0_79/
8. Wrap up
Copy the configured directory to the other machines (note -r, since it is a directory):
    scp -r /home/hadoop/software/hadoop-2.6.0 master-slave01:/home/hadoop/software/
    scp -r /home/hadoop/software/hadoop-2.6.0 master-slave02:/home/hadoop/software/
    scp -r /home/hadoop/software/hadoop-2.6.0 master-slave03:/home/hadoop/software/
9. Start the cluster
Format the namenode:
    [root@master hadoop-2.6.0]# bin/hdfs namenode -format
Start hdfs:
    [root@master hadoop-2.6.0]# sbin/start-dfs.sh
Check that everything came up:
    [root@master hadoop-2.6.0]# jps
    Jps
    NameNode
    SecondaryNameNode
    [root@master-slave01 hadoop-2.6.0]# jps
    Jps
    DataNode
    [root@master-slave02 hadoop-2.6.0]# jps
    Jps
    DataNode
    [root@master-slave03 hadoop-2.6.0]# jps
    Jps
    DataNode
hadoop is now up. In a browser, open:
    http://192.168.xx.99:50070/
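As a quick smoke test of the new filesystem (the paths here are arbitrary):

    [root@master hadoop-2.6.0]# bin/hdfs dfs -mkdir -p /tmp/test
    [root@master hadoop-2.6.0]# bin/hdfs dfs -put /etc/hosts /tmp/test/
    [root@master hadoop-2.6.0]# bin/hdfs dfs -ls /tmp/test

If the file lists with replication factor 3, the datanodes are accepting writes.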

V. Configure zookeeper
/home/hadoop/software/zookeeper-3.4.6/conf
$ cp -a zoo_sample.cfg zoo.cfg
Edit the config file (zoo.cfg):
dataDir=/home/hadoop/software/zkData

server.1=master-slave01:2888:3888
server.2=master-slave02:2888:3888
server.3=master-slave03:2888:3888
(each server entry needs the peer and election ports, 2888:3888)
$ mkdir /home/hadoop/software/zkData
Create the myid file:
    ** be sure to create it on Linux itself
    $ vi zkData/myid
    1
The value matches the machine's entry: the host configured as server.1 gets 1, server.2 gets 2, server.3 gets 3.
Copy the configured directory to the other machines (then set each machine's own myid):
    scp -r /home/hadoop/software/zookeeper-3.4.6 master-slave01:/home/hadoop/software/
    scp -r /home/hadoop/software/zookeeper-3.4.6 master-slave02:/home/hadoop/software/
    scp -r /home/hadoop/software/zookeeper-3.4.6 master-slave03:/home/hadoop/software/
Start zookeeper on each of the three machines (server.1 = master-slave01, server.2 = master-slave02, server.3 = master-slave03):
    [root@master-slave01 zookeeper-3.4.6]# bin/zkServer.sh start
    [root@master-slave02 zookeeper-3.4.6]# bin/zkServer.sh start
    [root@master-slave03 zookeeper-3.4.6]# bin/zkServer.sh start
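Then check the quorum on each node; one should report Mode: leader and the other two Mode: follower:

    [root@master-slave01 zookeeper-3.4.6]# bin/zkServer.sh status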

VI. Configure hbase
Two HBase files need configuring: conf/hbase-site.xml and conf/regionservers. (Since zookeeper runs as a separate cluster here, you will likely also want export HBASE_MANAGES_ZK=false in conf/hbase-env.sh so HBase does not try to manage its own zookeeper.)
[root@master hbase-1.0.1.1]# vim conf/hbase-site.xml

<!-- master server port -->
<property>
    <name>hbase.master.port</name>
    <value>60000</value>
</property>
<!-- max allowed clock skew -->
<property>
    <name>hbase.master.maxclockskew</name>
    <value>180000</value>
</property>
<!-- hdfs path for hbase data -->
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
</property>
<!-- run mode is distributed -->
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>
<!-- zookeeper quorum server list; the number of servers must be odd -->
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>master-slave03,master-slave02,master-slave01</value>
</property>
<!-- must match dataDir in zoo.cfg -->
<property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/software/zkData</value>
</property>
<!-- data replication factor -->
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
<!-- self-defined hbase coprocessor -->
<!--
<property>
    <name>hbase.coprocessor.region.classes</name>
    <value>org.apache.hbase.kora.coprocessor.RegionObserverExample</value>
</property>
-->
<property>
    <name>hbase.master.info.port</name>
    <value>60800</value>
    <description>The port for the HBase Master web UI.
        Set to -1 if you do not want a UI instance run.
    </description>
</property>
<property>
    <name>hbase.wal.provider</name>
    <value>multiwal</value>
</property>
<property>
    <name>hbase.master.distributed.log.splitting</name>
    <value>false</value>
</property>
<property>
    <name>hbase.hlog.split.skip.errors</name>
    <value>true</value>
</property>
<property>
    <name>hbase.regionserver.handler.count</name>
    <value>200</value>
</property>
<property>
    <name>hfile.block.cache.size</name>
    <value>0.25</value>
</property>
<property>
    <name>hbase.bucketcache.ioengine</name>
    <value>offheap</value>
</property>
<property>
    <name>hbase.bucketcache.size</name>
    <value>3072</value>
</property>
<property>
    <name>hbase.hregion.max.filesize</name>
    <value>107374182400</value>
</property>
<property>
    <name>hbase.hregion.memstore.flush.size</name>
    <value>268435456</value>
</property>
<property>
    <name>dfs.datanode.socket.write.timeout</name>
    <value>3000000</value>
</property>
<property>
    <name>dfs.socket.timeout</name>
    <value>180000</value>
</property>
<property>
    <name>hbase.lease.recovery.timeout</name>
    <value>1800000</value>
</property>
<property>
    <name>hbase.lease.recovery.dfs.timeout</name>
    <value>128000</value>
</property>
<property>
    <name>fail.fast.expired.active.master</name>
    <value>true</value>
    <description>Whether to abort immediately for an expired master without trying to recover its zk session.</description>
</property>
<property>
    <name>hbase.master.wait.on.regionservers.mintostart</name>
    <value>3</value>
</property>
<property>
    <name>hbase.hstore.compactionThreshold</name>
    <value>6</value>
</property>
<property>
    <name>hbase.regionserver.thread.compaction.small</name>
    <value>5</value>
</property>
<property>
    <name>hbase.regionserver.thread.compaction.large</name>
    <value>5</value>
</property>
<property>
    <name>hbase.hstore.blockingStoreFiles</name>
    <value>100</value>
</property>
<property>
    <name>hbase.hregion.majorcompaction</name>
    <value>0</value>
</property>
<!--
<property>
    <name>hbase.regionserver.codecs</name>
    <value>snappy</value>
</property>
-->
<property>
    <name>hbase.block.data.cachecompressed</name>
    <value>true</value>
</property>
[root@master hbase-1.0.1.1]#  vim conf/regionservers
master-slave01
master-slave02
master-slave03
Start hbase:
    [root@master hbase-1.0.1.1]# bin/start-hbase.sh
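To verify: jps on master should now show an HMaster process, and each machine in regionservers an HRegionServer. You can also query the cluster with the shell's status command:

    [root@master hbase-1.0.1.1]# echo "status" | bin/hbase shell    -- should report 3 live servers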

VII. Done
The HBase Master web UI should now be reachable at http://master:60800 (the hbase.master.info.port set above).