Setting up a native Hadoop cluster

Source: Internet  Editor: 程序博客网  Published: 2024/05/22 07:44

This cluster setup is based on Hadoop 2.6.4.

For the CentOS environment setup, see:
Configuring the CentOS 7 environment

Virtual machines
192.168.47.141 linux11 namenode secondarynamenode datanode
192.168.47.142 linux12 datanode
192.168.47.143 linux13 datanode
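For the hostnames above to resolve, each machine's /etc/hosts typically needs matching entries; a sketch based on the IPs listed (assuming no DNS server is used):

```
192.168.47.141 linux11
192.168.47.142 linux12
192.168.47.143 linux13
```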

1、Upload and extract Hadoop

Upload hadoop-2.6.4.tar.gz to /opt and run:

tar -xzvf hadoop-2.6.4.tar.gz

Rename the extracted directory (not the tarball) to hadoop:

mv hadoop-2.6.4 hadoop

2、Modify the Hadoop configuration files

Go into the configuration directory /opt/hadoop/etc/hadoop and edit the following files.

1、core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <!-- The default filesystem Hadoop uses -->
        <name>fs.defaultFS</name>
        <value>hdfs://linux11:9000</value>
    </property>
    <property>
        <!-- Parent of the local working directories used by the Hadoop daemons on each node -->
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop_repo/tmp</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>1440</value>
    </property>
</configuration>

2、hadoop-env.sh

Remove the following line:

export JAVA_HOME=${JAVA_HOME}

Add the following (point JAVA_HOME at your local Java installation):

export JAVA_HOME=/usr/java/jdk1.8.0_131
export HADOOP_LOG_DIR=/opt/hadoop_repo/logs

3、hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///opt/hadoop_repo/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///opt/hadoop_repo/data</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file:///opt/hadoop_repo/namesecondary</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>linux11:9001</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.datanode.max.transfer.threads</name>
        <value>4096</value>
    </property>
</configuration>

4、mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>linux11:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>linux11:19888</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.staging-dir</name>
        <value>/history</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.done-dir</name>
        <value>${yarn.app.mapreduce.am.staging-dir}/history/done</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.intermediate-done-dir</name>
        <value>${yarn.app.mapreduce.am.staging-dir}/history/done_intermediate</value>
    </property>
    <property>
        <name>mapreduce.map.log.level</name>
        <value>DEBUG</value>
    </property>
    <property>
        <name>mapreduce.reduce.log.level</name>
        <value>DEBUG</value>
    </property>
</configuration>

5、yarn-env.sh

Same idea as hadoop-env.sh; add the following:

export JAVA_HOME=/usr/java/jdk1.8.0_131
export HADOOP_LOG_DIR=/opt/hadoop_repo/logs

6、yarn-site.xml

<?xml version="1.0"?>
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>linux11</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>linux11:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>linux11:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>linux11:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>linux11:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>linux11:8088</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log.server.url</name>
        <value>http://linux11:19888/jobhistory/logs</value>
    </property>
</configuration>

7、slaves

linux11
linux12
linux13
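Before distributing the edited files, it is worth checking that each *-site.xml is still well-formed XML, since a stray tag will prevent the daemons from starting. A minimal sketch of such a check (the `check_xml` helper is ours, not a Hadoop tool; it assumes python3 is on the PATH):

```shell
# check_xml: report whether each XML file given as an argument parses cleanly.
check_xml() {
  for f in "$@"; do
    if python3 -c "import xml.dom.minidom, sys; xml.dom.minidom.parse(sys.argv[1])" "$f" 2>/dev/null; then
      echo "$f OK"
    else
      echo "$f BROKEN"
    fi
  done
}

# Usage, after editing the configuration:
# check_xml /opt/hadoop/etc/hadoop/*-site.xml
```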

3、Copy Hadoop to the other machines

Passwordless SSH login was already configured earlier, so from /opt simply run:

scp -rq hadoop linux12:/opt/
scp -rq hadoop linux13:/opt/

4、On the namenode machine, go to Hadoop's bin directory and run the following command

hdfs namenode -format

Then start the HDFS cluster from the sbin directory:

start-dfs.sh

At this point HDFS is up. To stop HDFS:

stop-dfs.sh

Start YARN:

start-yarn.sh

Stop YARN:

stop-yarn.sh

The start commands above can be replaced by a single command:

start-all.sh

Then open http://192.168.47.141:50070/ in a browser.
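A quick sanity check after starting everything is to run jps (bundled with the JDK) on each node and compare the output against the role table at the top. A small sketch encoding those expectations (the `expected_daemons` helper is illustrative, not a Hadoop command):

```shell
# Expected Java daemons per host once HDFS and YARN are running,
# derived from the role table and the slaves file above;
# compare with the actual output of `jps` on each machine.
expected_daemons() {
  case "$1" in
    linux11)         echo "NameNode SecondaryNameNode DataNode ResourceManager NodeManager" ;;
    linux12|linux13) echo "DataNode NodeManager" ;;
    *)               echo "unknown host" ;;
  esac
}

expected_daemons linux12   # DataNode NodeManager
```

Note that the JobHistoryServer (ports 10020/19888 configured in mapred-site.xml) is not started by start-all.sh; it is launched separately.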