Hadoop 2 Environment Configuration

The configuration files are under the etc/hadoop directory.

core-site.xml
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/fd/tmp</value>
    </property>
</configuration>
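In Hadoop 2, fs.default.name is the deprecated alias of fs.defaultFS; either name is accepted here. It can also help to create the hadoop.tmp.dir path in advance so the user running Hadoop owns it. A minimal sketch, using the /home/fd/tmp value from the config above:

mkdir -p /home/fd/tmp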
hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/fd/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/fd/datanode</value>
    </property>
</configuration>
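The NameNode format step will initialize the name directory, but creating both local directories from the config above up front avoids permission surprises. A minimal sketch, using the paths above:

mkdir -p /home/fd/namenode /home/fd/datanode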
mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
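Hadoop 2 tarballs typically ship only a template for this file, so mapred-site.xml usually has to be created first. A sketch, assuming the command is run from the Hadoop installation directory:

cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml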
yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>hadoop1:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>hadoop1:8030</value>
    </property>
</configuration>
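The ResourceManager addresses above use the hostname hadoop1, which every node must be able to resolve. A sketch of an /etc/hosts entry; the IP address below is only a placeholder for the real ResourceManager host:

# /etc/hosts on every node (placeholder IP, substitute the real one)
192.168.1.101    hadoop1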
In yarn-env.sh, add the JAVA_HOME initialization.

Format the NameNode:

bin/hdfs namenode -format

Start everything at once (deprecated in Hadoop 2, but still works):

sbin/start-all.sh

Or start HDFS and YARN separately:

sbin/start-dfs.sh
sbin/start-yarn.sh

Or, in more detail, start each daemon individually:

hadoop-daemon.sh start namenode
hadoop-daemon.sh start datanode
yarn-daemon.sh start resourcemanager
yarn-daemon.sh start nodemanager

After startup, Hadoop can be checked through the web UIs on ports 50070 and 8088.

Hadoop download address: http://mirror.bit.edu.cn/apache/hadoop/common/
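To round out the steps above, a minimal sketch of the JAVA_HOME line and a quick post-startup check; the JDK path is only an example and must match the local installation, and the daemon list assumes the single-node setup configured above:

# etc/hadoop/yarn-env.sh (and hadoop-env.sh): point JAVA_HOME at the local JDK
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64

# After sbin/start-all.sh, jps should list the running daemons:
jps
# Expected: NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager, Jps

# Web UIs on the ports mentioned above:
#   NameNode:        http://localhost:50070
#   ResourceManager: http://localhost:8088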