Setting Up a Multi-Node Hadoop 2.7.3 Environment (YARN + HA)


Part 1: Environment

  1. Parallels Desktop
  2. CentOS-6.5-x86_64-bin-DVD1.iso
  3. jdk-7u79-linux-x64.tar.gz
  4. hadoop-2.7.3.tar.gz
  5. A four-node cluster with the hostnames hadoopA, hadoopB, hadoopC, and hadoopD. hadoopA acts as the active NameNode (and, per the YARN configuration below, also hosts the ResourceManager); hadoopB acts as the standby NameNode, a DataNode, and a JournalNode; hadoopC acts as a DataNode and a JournalNode; hadoopD acts as a DataNode and a JournalNode.

Part 2: Operating System Configuration

  1. Grant the hadoop user sudo privileges
[root@hadoopa hadoop]# visudo

## Allow root to run any commands anywhere
root    ALL=(ALL)       ALL
hadoop  ALL=(ALL)       ALL
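A quick, optional check that the grant works: log in as the hadoop user and list its sudo privileges, which should include the rule added above.

[hadoop@hadoopa ~]$ sudo -l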
  2. Map the hostnames to IP addresses in /etc/hosts
[hadoop@hadoopa hadoop-2.7.3]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.201 hadoopA
192.168.1.202 hadoopB
192.168.1.203 hadoopC
192.168.1.204 hadoopD
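Before going further, it is worth confirming that each node can reach the others by hostname. A minimal check from hadoopA (repeat from the remaining nodes as needed):

[hadoop@hadoopa ~]$ ping -c 1 hadoopB
[hadoop@hadoopa ~]$ ping -c 1 hadoopC
[hadoop@hadoopa ~]$ ping -c 1 hadoopD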

Part 3: Install and Configure the JDK

Install the JDK on each of the four nodes: hadoopA, hadoopB, hadoopC, and hadoopD.

[hadoop@hadoopb ~]$ tar -zxvf jdk-7u79-linux-x64.tar.gz

Rename the JDK directory:

[hadoop@hadoopb ~]$ mv jdk1.7.0_79/  jdk1.7
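A quick check that the unpacked JDK actually runs (the path assumes the layout created by the two commands above):

[hadoop@hadoopb ~]$ /home/hadoop/jdk1.7/bin/java -version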

Part 4: Install and Configure Hadoop

  1. Unpack Hadoop on all four nodes (hadoopA, hadoopB, hadoopC, hadoopD)
[hadoop@hadoopb ~]$ tar -zxvf hadoop-2.7.3.tar.gz
  2. On hadoopA, configure hadoop-env.sh
# The java implementation to use.
export JAVA_HOME=/home/hadoop/jdk1.7
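With JAVA_HOME set, Hadoop's command-line tools should start; a simple smoke test from the installation directory:

[hadoop@hadoopa hadoop-2.7.3]$ bin/hadoop version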
  3. On hadoopA, configure core-site.xml
<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://hadoopA:8020</value>
        </property>
</configuration>
  4. On hadoopA, configure hdfs-site.xml
<configuration>
<property>
  <name>dfs.nameservices</name>
  <value>hadoop-test</value>
  <description>
    Comma-separated list of nameservices.
  </description>
</property>
<property>
  <name>dfs.ha.namenodes.hadoop-test</name>
  <value>nn1,nn2</value>
  <description>
    The prefix for a given nameservice, contains a comma-separated
    list of namenodes for a given nameservice (eg EXAMPLENAMESERVICE).
  </description>
</property>
<property>
  <name>dfs.namenode.rpc-address.hadoop-test.nn1</name>
  <value>hadoopA:8020</value>
  <description>
    RPC address for namenode1 of hadoop-test
  </description>
</property>
<property>
  <name>dfs.namenode.rpc-address.hadoop-test.nn2</name>
  <value>hadoopB:8020</value>
  <description>
    RPC address for namenode2 of hadoop-test
  </description>
</property>
<property>
  <name>dfs.namenode.http-address.hadoop-test.nn1</name>
  <value>hadoopA:50070</value>
  <description>
    The address and the base port where the dfs namenode1 web ui will listen on.
  </description>
</property>
<property>
  <name>dfs.namenode.http-address.hadoop-test.nn2</name>
  <value>hadoopB:50070</value>
  <description>
    The address and the base port where the dfs namenode2 web ui will listen on.
  </description>
</property>
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///home/hadoop/hdfs/name</value>
  <description>Determines where on the local filesystem the DFS name node
      should store the name table(fsimage).  If this is a comma-delimited list
      of directories then the name table is replicated in all of the
      directories, for redundancy. </description>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://hadoopB:8485;hadoopC:8485;hadoopD:8485/hadoop-test</value>
  <description>A directory on shared storage between the multiple namenodes
  in an HA cluster. This directory will be written by the active and read
  by the standby in order to keep the namespaces synchronized. This directory
  does not need to be listed in dfs.namenode.edits.dir above. It should be
  left empty in a non-HA cluster.
  </description>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///home/hadoop/hdfs/data</value>
  <description>Determines where on the local filesystem an DFS data node
  should store its blocks.  If this is a comma-delimited
  list of directories, then data will be stored in all named
  directories, typically on different devices.
  Directories that do not exist are ignored.
  </description>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>false</value>
  <description>
    Whether automatic failover is enabled. See the HDFS High
    Availability documentation for details on automatic HA
    configuration.
  </description>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/home/hadoop/hdfs/journal/</value>
</property>
</configuration>
  5. On hadoopA, configure mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoopB:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoopB:19888</value>
  </property>
</configuration>
  6. On hadoopA, configure yarn-site.xml
<configuration>
  <!-- Resource Manager Configs -->
  <property>
    <description>The hostname of the RM.</description>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoopA</value>
  </property>
  <property>
    <description>The address of the applications manager interface in the RM.</description>
    <name>yarn.resourcemanager.address</name>
    <value>${yarn.resourcemanager.hostname}:8032</value>
  </property>
  <property>
    <description>The address of the scheduler interface.</description>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>${yarn.resourcemanager.hostname}:8030</value>
  </property>
  <property>
    <description>The http address of the RM web application.</description>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>${yarn.resourcemanager.hostname}:8088</value>
  </property>
  <property>
    <description>The https address of the RM web application.</description>
    <name>yarn.resourcemanager.webapp.https.address</name>
    <value>${yarn.resourcemanager.hostname}:8090</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>${yarn.resourcemanager.hostname}:8031</value>
  </property>
  <property>
    <description>The address of the RM admin interface.</description>
    <name>yarn.resourcemanager.admin.address</name>
    <value>${yarn.resourcemanager.hostname}:8033</value>
  </property>
  <property>
    <description>The class to use as the resource scheduler.</description>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
  <property>
    <description>fair-scheduler conf location</description>
    <name>yarn.scheduler.fair.allocation.file</name>
    <value>/home/hadoop/hadoop-2.7.3/etc/hadoop/fairscheduler.xml</value>
  </property>
  <property>
    <description>List of directories to store localized files in. An
      application's localized file directory will be found in:
      ${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}.
      Individual containers' work directories, called container_${contid}, will
      be subdirectories of this.
    </description>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/home/hadoop/yarn/local</value>
  </property>
  <property>
    <description>Whether to enable log aggregation</description>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <description>Where to aggregate logs to.</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/tmp/logs</value>
  </property>
  <property>
    <description>Amount of physical memory, in MB, that can be allocated
    for containers.</description>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8720</value>
  </property>
  <property>
    <description>Number of CPU cores that can be allocated
    for containers.</description>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>2</value>
  </property>
  <property>
    <description>the valid service name should only contain a-zA-Z0-9_ and can not start with numbers</description>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
  7. On hadoopA, configure fairscheduler.xml
<allocations>
  <queue name="infrastructure">
    <minResources>102400 mb, 50 vcores</minResources>
    <maxResources>153600 mb, 100 vcores</maxResources>
    <maxRunningApps>200</maxRunningApps>
    <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
    <weight>1.0</weight>
    <aclSubmitApps>root,yarn,search,hdfs</aclSubmitApps>
  </queue>
  <queue name="tool">
    <minResources>102400 mb, 30 vcores</minResources>
    <maxResources>153600 mb, 50 vcores</maxResources>
  </queue>
  <queue name="sentiment">
    <minResources>102400 mb, 30 vcores</minResources>
    <maxResources>153600 mb, 50 vcores</maxResources>
  </queue>
</allocations>
  8. On hadoopA, configure the slaves file
[root@hadoopa hadoop]# cat slaves
hadoopB
hadoopC
hadoopD
  9. Copy the Hadoop configuration directory (etc/hadoop) from hadoopA to the other nodes
[hadoop@hadoopa hadoop-2.7.3]$ scp etc/hadoop/* hadoopB:/home/hadoop/hadoop-2.7.3/etc/hadoop/
[hadoop@hadoopa hadoop-2.7.3]$ scp etc/hadoop/* hadoopC:/home/hadoop/hadoop-2.7.3/etc/hadoop/
[hadoop@hadoopa hadoop-2.7.3]$ scp etc/hadoop/* hadoopD:/home/hadoop/hadoop-2.7.3/etc/hadoop/
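To verify that the configuration really reached the other nodes, a value can be read back through Hadoop itself, for example on hadoopB; the expected output is hadoop-test, the nameservice defined in hdfs-site.xml above:

[hadoop@hadoopb hadoop-2.7.3]$ bin/hdfs getconf -confKey dfs.nameservices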

Part 5: Start Hadoop

  1. On each JournalNode host, start the journalnode service with the following command
[hadoop@hadoopb hadoop-2.7.3]$ sbin/hadoop-daemon.sh start journalnode
[hadoop@hadoopc hadoop-2.7.3]$ sbin/hadoop-daemon.sh start journalnode
[hadoop@hadoopd hadoop-2.7.3]$ sbin/hadoop-daemon.sh start journalnode
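Before formatting the NameNode, it is worth confirming that each JournalNode process is actually up, for example:

[hadoop@hadoopb hadoop-2.7.3]$ /home/hadoop/jdk1.7/bin/jps | grep JournalNode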
  2. On [nn1], format the NameNode and start it:
[root@hadoopa hadoop-2.7.3]# bin/hdfs namenode -format
[root@hadoopa hadoop-2.7.3]# sbin/hadoop-daemon.sh start namenode
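If the format succeeded, the name directory configured in hdfs-site.xml should now contain an fsimage and a VERSION file:

[root@hadoopa hadoop-2.7.3]# ls /home/hadoop/hdfs/name/current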
  3. On [nn2], synchronize the metadata from nn1
[hadoop@hadoopb hadoop-2.7.3]$ bin/hdfs namenode -bootstrapStandby
  4. On [nn2], start the NameNode:
[hadoop@hadoopb hadoop-2.7.3]$ sbin/hadoop-daemon.sh start namenode
(After the four steps above, both nn1 and nn2 are in standby state.)
  5. On [nn1], switch the NameNode to active
[root@hadoopa hadoop-2.7.3]# bin/hdfs haadmin -transitionToActive nn1
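To confirm the transition took effect, query the HA state of both NameNodes (nn1 and nn2 are the IDs defined in hdfs-site.xml); nn1 should now report active and nn2 standby:

[root@hadoopa hadoop-2.7.3]# bin/hdfs haadmin -getServiceState nn1
[root@hadoopa hadoop-2.7.3]# bin/hdfs haadmin -getServiceState nn2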
  6. On [nn1], start all DataNodes
[root@hadoopa hadoop-2.7.3]# sbin/hadoop-daemons.sh start datanode
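Once the DataNodes have registered, a cluster report should list three live DataNodes (hadoopB, hadoopC, hadoopD):

[root@hadoopa hadoop-2.7.3]# bin/hdfs dfsadmin -report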
  7. Start YARN: on [nn1], run the following command
[root@hadoopa hadoop-2.7.3]# sbin/start-yarn.sh
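After YARN is up, the NodeManagers that registered with the ResourceManager can be listed; three nodes are expected here:

[root@hadoopa hadoop-2.7.3]# bin/yarn node -list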

Part 6: Verify Hadoop

  1. On hadoopA, run
[root@hadoopa jdk1.7]# /home/hadoop/jdk1.7/bin/jps
10747 -- process information unavailable
15583 Jps
16576 -- process information unavailable
("process information unavailable" usually means jps cannot read that JVM's hsperfdata, for example because it was started under a different user; given hadoopA's role, the two unnamed processes should be the NameNode and the ResourceManager.)
  2. On hadoopB, run
[hadoop@hadoopb hadoop-2.7.3]$ /home/hadoop/jdk1.7/bin/jps
15709 NodeManager
2405 JournalNode
11551 NameNode
12862 DataNode
15398 Jps
  3. On hadoopC, run
[hadoop@hadoopc ~]$ /home/hadoop/jdk1.7/bin/jps
2388 JournalNode
13091 Jps
13553 DataNode
15214 NodeManager
  4. On hadoopD, run
[hadoop@hadoopd hadoop-2.7.3]$ /home/hadoop/jdk1.7/bin/jps
13506 DataNode
12675 Jps
15334 NodeManager
2570 JournalNode

Open a browser and visit the following addresses:

http://192.168.1.201:50070/dfshealth.html#tab-overview
http://192.168.1.202:50070/dfshealth.html#tab-overview
http://192.168.1.201:8088/cluster/scheduler
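As an optional end-to-end check, the examples jar shipped with the distribution can be used to submit a small MapReduce job to YARN; if HDFS and YARN are healthy, the job should complete and print an estimate of pi:

[hadoop@hadoopa hadoop-2.7.3]$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar pi 2 10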

Part 7: Shut Down Hadoop

  1. To shut down the Hadoop cluster, run the following commands on [nn1]
[root@hadoopa hadoop-2.7.3]# sbin/stop-dfs.sh
[root@hadoopa hadoop-2.7.3]# sbin/stop-yarn.sh

Part 8: Additional Notes

Note:
Step 2 (on [nn1], format the NameNode and start it):
bin/hdfs namenode -format
Step 3 (on [nn2], synchronize the metadata from nn1):
bin/hdfs namenode -bootstrapStandby

These two steps are performed only when the cluster is set up for the first time.
They are not needed when the nodes are simply restarted later.
