Hadoop 2.8 cluster setup (single NameNode)

Source: Internet · Editor: 程序博客网 · 2024/05/17 23:05
----------------------------------------------------------------------------------------------------------------------------
Environment:
OS: CentOS 7.1
Java: JDK 1.8
Hadoop: 2.8.0
ZooKeeper: 3.4.8
IP assignments for the three nodes:
192.168.1.66 hadoop-1 namenode
192.168.1.57 hadoop-2 datanode
192.168.1.58 hadoop-3 datanode
----------------------------------------------------------------------------------------------------------------------------
Set the hostname on each node.
Set up passwordless SSH login for the hadoop user.
Set up hostname resolution on every node.
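The three prep steps above can be sketched as shell commands. This is only a sketch using the hostnames and IPs from this article; the `hostnamectl` and SSH-key steps are shown commented out because they change system state and need the other nodes reachable:

```shell
# Set this node's hostname (repeat on each node with its own name):
# hostnamectl set-hostname hadoop-1

# Generate /etc/hosts entries for all three nodes. They are written to a
# local file here so the sketch runs anywhere; append to /etc/hosts as root.
cat > hosts.entries <<'EOF'
192.168.1.66 hadoop-1
192.168.1.57 hadoop-2
192.168.1.58 hadoop-3
EOF
# cat hosts.entries >> /etc/hosts

# Passwordless SSH for the hadoop user (run on hadoop-1 as hadoop):
# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# ssh-copy-id hadoop@hadoop-2
# ssh-copy-id hadoop@hadoop-3
```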
-------------------------------------------------------node1,2,3--------------------------------------------------------
Configure the Java and Hadoop environment variables in /etc/profile:
[root@hadoop-1 ~]# vim /etc/profile
#--------------jdk---------------------
export JAVA_HOME=/jdk1.8
export JRE_HOME=/jdk1.8/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
#-------------hadoop------------------
export HADOOP_HOME=/hadoop-2.8.0
export PATH=$PATH:$HADOOP_HOME/bin
----------------------------------------------------------------------------------------------------------------------------
After sourcing the file, check the Hadoop version:

[root@hadoop-1 hadoop]# source /etc/profile
[root@hadoop-1 hadoop]# hadoop version
Hadoop 2.8.0
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 91f2b7a13d1e97be65db92ddabc627cc29ac0009
Compiled by jdu on 2017-03-17T04:12Z
Compiled with protoc 2.5.0
From source with checksum 60125541c2b3e266cbf3becc5bda666
This command was run using /hadoop-2.8.0/share/hadoop/common/hadoop-common-2.8.0.jar

----------------------------------------------------------------------------------------------------------------------------
Create the required directories; the resulting layout:
[root@hadoop-1 hadoop]# tree /opt/
/opt/
└── hadoop
    ├── hdfs
    │   ├── data
    │   └── name
    ├── tmp
    └── var

6 directories, 0 files
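The article only shows the finished tree; commands along these lines (a sketch) would create it. `BASE` defaults to a scratch directory so the sketch can be tried without root; on a real node, set `BASE=/opt` and run as root:

```shell
# Create the NameNode/DataNode storage, tmp, and var directories.
BASE="${BASE:-$(mktemp -d)}"   # on a real node: BASE=/opt
mkdir -p "$BASE/hadoop/hdfs/name" "$BASE/hadoop/hdfs/data" \
         "$BASE/hadoop/tmp" "$BASE/hadoop/var"
find "$BASE/hadoop" -type d | sort
```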
----------------------------------------------------------------------------------------------------------------------------
[root@hadoop-1 hadoop]# cat core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>file:///opt/hadoop/tmp</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop-1:9000</value>
<final>true</final>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
</configuration>
----------------------------------------------------------------------------------------------------------------------------
Note that dfs.replication is set to 3 although this cluster has only two datanodes, so blocks will report as under-replicated; a value of 2 would match the actual node count.
[root@hadoop-1 hadoop]# cat hdfs-site.xml | grep -v ^$
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///opt/hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///opt/hadoop/hdfs/data</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop-1:9001</value>
</property>
</configuration>
----------------------------------------------------------------------------------------------------------------------------
[root@hadoop-1 hadoop]# cat yarn-site.xml | grep -v ^$
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.resourcemanager.address</name>
<value>hadoop-1:18040</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>hadoop-1:18030</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>hadoop-1:18088</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>hadoop-1:18025</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>hadoop-1:18141</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
----------------------------------------------------------------------------------------------------------------------------
[root@hadoop-1 hadoop]# cat master
192.168.1.66
[root@hadoop-1 hadoop]# cat slaves
192.168.1.57
192.168.1.58
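The two node-list files above can be generated rather than edited by hand. A sketch: `CONF_DIR` defaults to a scratch directory here so it runs anywhere; on a real node it would be the Hadoop config directory, e.g. /hadoop/hadoop-2.8.0/etc/hadoop:

```shell
# Write the master and slaves files from the node list.
CONF_DIR="${CONF_DIR:-$(mktemp -d)}"   # real node: /hadoop/hadoop-2.8.0/etc/hadoop
echo 192.168.1.66 > "$CONF_DIR/master"
printf '%s\n' 192.168.1.57 192.168.1.58 > "$CONF_DIR/slaves"
cat "$CONF_DIR/slaves"
```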
----------------------------------------------------------------------------------------------------------------------------
JAVA_HOME must be set explicitly in hadoop-env.sh:
[hadoop@hadoop-3 ~]$ vim /hadoop/hadoop-2.8.0/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/hadoop/jdk1.8
----------------------------------------------------------------------------------------------------------------------------
Create the hadoop user and give it ownership of the directories:
[root@hadoop-1 hadoop]# useradd hadoop
[root@hadoop-1 hadoop]# chown -R hadoop:hadoop /hadoop/hadoop-2.8.0/
[root@hadoop-1 hadoop]# chown -R hadoop:hadoop /opt/hadoop
-----------------------------------------------------node1---------------------------------------------------------------
On "hadoop-1", run the following as the regular hadoop user. (Note: formatting is needed only once; on subsequent startups, skip it and just run start-all.sh.)
[hadoop@hadoop-1 ~]$ hdfs namenode -format
----------------------------------------------------------------------------------------------------------------------------
Start Hadoop from the master node:
[hadoop@hadoop-1 ~]$ /hadoop/hadoop-2.8.0/sbin/start-all.sh
----------------------------------------------------------------------------------------------------------------------------
Check the cluster status:
[hadoop@hadoop-1 logs]$ hdfs dfsadmin -report
----------------------------------------------------------------------------------------------------------------------------
Visit http://192.168.1.66:50070 to open the NameNode web UI.
