Hadoop Cluster Installation


Distributed Installation
Set the IP address and hostname on each node (changing the hostname is optional). The plan used here:
hadoop1.com  IP 192.168.213.146  broadcast 192.168.213.255  netmask 255.255.255.0
hadoop2.com  IP 192.168.213.147  broadcast 192.168.213.255  netmask 255.255.255.0
hadoop3.com  IP 192.168.213.148  broadcast 192.168.213.255  netmask 255.255.255.0


Configure the hostname:
vim /etc/sysconfig/network
Map hostnames to IP addresses:
vim /etc/hosts
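For example, /etc/hosts on every node could contain the mappings for the three hosts planned above:
192.168.213.146 hadoop1.com
192.168.213.147 hadoop2.com
192.168.213.148 hadoop3.com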
Disable the firewall:
service iptables status
service iptables stop
chkconfig iptables off

Disable SELinux:
vim /etc/sysconfig/selinux
SELINUX=disabled
Reboot the machine.
Passwordless SSH login
ssh-keygen -t rsa
(press Enter four times)
Copy the public key to the target node:
ssh-copy-id 192.168.213.143
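To make every node reachable without a password, the key would typically be copied to each machine in the host plan above and then tested (the node IPs here are taken from that plan):
ssh-copy-id 192.168.213.146
ssh-copy-id 192.168.213.147
ssh-copy-id 192.168.213.148
ssh 192.168.213.147
(the last command should log in without prompting for a password)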

[root@hadoop01 ~]# cd /home/
[root@hadoop01 home]# rm -rf ./*
[root@hadoop01 home]# ls
[root@hadoop01 home]# mkdir softwares
[root@hadoop01 home]# mkdir tools
[root@hadoop01 home]# mkdir datas
[root@hadoop01 home]# cd tools/
[root@hadoop01 tools]# yum install lrzsz -y
tar -zxf jdk-7u40-linux-x64.tar.gz -C ../softwares/
Enter the JDK directory and check its path:
[root@hadoop03 jdk1.8.0_101]# pwd
Edit the environment variables:
[root@hadoop03 jdk1.8.0_101]# vim /etc/profile
Add:
export JAVA_HOME=/home/softwares/jdk1.7.0_40
export PATH=$PATH:$JAVA_HOME/bin

Make the configuration take effect:
[root@hadoop03 jdk1.8.0_101]# source /etc/profile
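A quick sanity check that the JDK is on the PATH (the output will reflect whichever JDK version was actually installed on this node):
[root@hadoop03 jdk1.8.0_101]# java -version
[root@hadoop03 jdk1.8.0_101]# echo $JAVA_HOME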


tar -zxf hadoop-2.7.3-src.tar.gz -C ../softwares/

Pseudo-Distributed Installation
[root@hadoop01 tools]# tar -zxf protobuf-2.5.0.tar.gz -C ../softwares/
[root@hadoop01 tools]# tar -zxf apache-maven-3.0.5-bin.tar.gz -C ../softwares/
[root@hadoop01 tools]# tar -zxf findbugs-1.3.9.tar.gz -C ../softwares/

Configure the environment variables:
[root@hadoop01 home]# cd softwares/
[root@hadoop01 softwares]# cd apache-maven-3.0.5/
[root@hadoop01 apache-maven-3.0.5]# pwd
/home/softwares/apache-maven-3.0.5

 

vim /etc/profile
export JAVA_HOME=/home/softwares/jdk1.8.0_101
export MAVEN_HOME=/home/softwares/apache-maven-3.0.5
export PATH=$PATH:$JAVA_HOME/bin:$MAVEN_HOME/bin

 

Make the configuration take effect:
source /etc/profile

 

Configure FindBugs (append to /etc/profile):
export JAVA_HOME=/home/softwares/jdk1.8.0_101
export MAVEN_HOME=/home/softwares/apache-maven-3.0.5
export FINDBUGS_HOME=/home/softwares/findbugs-1.3.9
export PATH=$PATH:$JAVA_HOME/bin:$MAVEN_HOME/bin:$FINDBUGS_HOME/bin
Make the configuration take effect:
source /etc/profile

 

Check that the configuration works:
findbugs -version
Install protobuf-2.5.0:
[root@hadoop01 findbugs-1.3.9]# cd ../protobuf-2.5.0/
[root@hadoop01 protobuf-2.5.0]# ./configure

 

If make install fails, install the required system dependencies:
yum -y install autoconf automake libtool cmake ncurses-devel openssl-devel lzo-devel zlib-devel gcc gcc-c++
Then configure and install again:
[root@hadoop01 protobuf-2.5.0]# ./configure
[root@hadoop01 protobuf-2.5.0]# make install
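To confirm protobuf installed correctly, check its version (it should report 2.5.0 if the build above succeeded):
[root@hadoop01 protobuf-2.5.0]# protoc --version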

 

Compile Hadoop:
[root@hadoop01 hadoop-2.7.3-src]# mvn package -Pdist,native -DskipTests -Dtar
This takes roughly 30 minutes, depending on network speed.
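If the build succeeds, the distribution tarball is normally produced under hadoop-dist/target inside the source tree:
[root@hadoop01 hadoop-2.7.3-src]# ls hadoop-dist/target/hadoop-2.7.3.tar.gz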

 


Extract the Hadoop binary package, then edit the Hadoop environment file:
cd hadoop-2.7.1/etc/hadoop/
vim hadoop-env.sh
Set JAVA_HOME to the absolute JDK path.
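For example, assuming the JDK path exported in /etc/profile above, the JAVA_HOME line in hadoop-env.sh would look like this:
export JAVA_HOME=/home/softwares/jdk1.8.0_101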

 

Create the data directory:
[root@hadoop01 hadoop-2.7.1]# mkdir data
[root@hadoop01 hadoop-2.7.1]# cd data
[root@hadoop01 data]# mkdir tmp
[root@hadoop01 data]# cd tmp
[root@hadoop01 tmp]# pwd
/home/softwares/hadoop-2.7.1/data/tmp

 


cd etc/hadoop
Edit the configuration file:
vim core-site.xml

 

Add:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://192.168.213.140:8020</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/softwares/hadoop-2.7.1/data/tmp</value>
</property>
</configuration>

[root@hadoop01 hadoop]# vim hdfs-site.xml

 

<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
This sets the number of replicas; the default is 3.

 

Format the filesystem:
[root@hadoop01 hadoop-2.7.1]# bin/hdfs namenode -format
Start Hadoop:
[root@hadoop01 hadoop-2.7.1]# sbin/start-dfs.sh
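If startup succeeds, jps on this node would typically list NameNode, DataNode and SecondaryNameNode (process IDs will differ):
[root@hadoop01 hadoop-2.7.1]# jps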
If startup fails, add the following to etc/hadoop/core-site.xml:

 

<property>
<name>fs.default.name</name>
<value>hdfs://127.0.0.1:9000</value>
</property>

 


Open in a browser:
http://192.168.213.140:50070/
The page should load successfully.

 

mv mapred-site.xml.template mapred-site.xml
vim mapred-site.xml

 


<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>

[root@hadoop01 hadoop]# vim yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>

 

Start YARN:
[root@hadoop01 hadoop-2.7.1]# sbin/start-yarn.sh
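After start-yarn.sh, jps would normally show ResourceManager and NodeManager in addition to the HDFS daemons:
[root@hadoop01 hadoop-2.7.1]# jps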

 

http://192.168.213.140:8088/
Start testing.
In the /home/data directory, create a test file:
[root@hadoop01 data]# touch words
[root@hadoop01 data]# vim words
Hello a
Hello b
The job will count how many times each word appears. Upload the file to HDFS to start the test:
[root@hadoop01 hadoop-2.7.1]# bin/hadoop fs -put /home/data/words /words
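To confirm the upload, the file can be read straight back from HDFS; it should show the two lines written above:
[root@hadoop01 hadoop-2.7.1]# bin/hadoop fs -cat /words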

 

After the upload succeeds, run the word-count job:
[root@hadoop01 hadoop-2.7.1]# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /words /out
Wait for the job to finish.

 


[root@hadoop01 hadoop-2.7.1]# bin/hadoop fs -ls /
Found 3 items
drwxr-xr-x - root supergroup 0 2016-10-07 12:05 /out
drwx------ - root supergroup 0 2016-10-07 12:02 /tmp
-rw-r--r-- 1 root supergroup 16 2016-10-07 11:51 /words
[root@hadoop01 hadoop-2.7.1]# bin/hadoop fs -ls /out
Found 2 items
-rw-r--r-- 1 root supergroup 0 2016-10-07 12:05 /out/_SUCCESS
-rw-r--r-- 1 root supergroup 16 2016-10-07 12:04 /out/part-r-00000

 

[root@hadoop01 hadoop-2.7.1]# bin/hadoop fs -cat /out/part-r-00000
Hello	2
a	1
b	1
The word count is complete.

 

How it works
Raw data in HDFS:
hello a
hello b

Map phase:
Input data:
<0,"hello a">
<8,"hello b">

Output data (pseudocode for the map function):
map(key, value, context) {
    String line = value;              // e.g. "hello a"
    String[] words = line.split(" "); // split the line into words
    for (String word : words) {
        // emits hello, a, hello, b
        context.write(word, 1);
    }
}
<hello,1>
<a,1>
<hello,1>
<b,1>

 

Reduce phase (after grouping and sorting):
Input data:
<a,1>
<b,1>
<hello,{1,1}>


Output data (pseudocode for the reduce function):
reduce(key, values, context) {
    int sum = 0;
    String word = key;
    for (int i : values) {   // sum the counts for this word
        sum += i;
    }
    context.write(word, sum);
}

Fully Distributed Installation
Building on the pseudo-distributed setup, add the following to core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://192.168.213.146:8020</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/softwares/hadoop-2.7.1/data/tmp</value>
</property>

<property>
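<!-- fs.trash.interval is in minutes; 10080 minutes keeps deleted files in the trash for 7 days -->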
<name>fs.trash.interval</name>
<value>10080</value>
</property>

</configuration>
Add the following to hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.permissions.enabled</name>
<value>false</value>
</property>
<property>
<name>dfs.namenode.http-address</name>
<value>192.168.213.146:50070</value>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>192.168.213.148:50090</value>
</property>
</configuration>
Add the following to mapred-site.xml:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>192.168.213.146:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>192.168.213.146:19888</value>
</property>
<property>
<name>mapreduce.job.ubertask.enable</name>
<value>true</value>
</property>

</configuration>
Add the following to yarn-site.xml:
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>192.168.213.147</value>
</property>
<!-- Web Application Proxy (security) -->
<property>
<name>yarn.web-proxy.address</name>
<value>192.168.213.147:8888</value>
</property>
<!-- enable log aggregation -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<property>
<!-- log retention time in seconds (604800 = 7 days) -->
<name>yarn.log-aggregation.retain-seconds</name>
<value>604800</value>
</property>
<!-- memory available to the NodeManager, in MB -->
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>8192</value>
</property>
<property>
<!-- CPU vcores available to the NodeManager -->
<name>yarn.nodemanager.resource.cpu-vcores</name>
<value>8</value>
</property>
</configuration>

Add the worker nodes to the etc/hadoop/slaves file (a plain-text file, one host per line):
192.168.213.146
192.168.213.147
192.168.213.148
Configuration is complete.
Format the NameNode, then copy the configured Hadoop directory to the other machines:
[root@hadoop01 hadoop-2.7.1]# bin/hadoop namenode -format
[root@hadoop01 softwares]# scp -r hadoop-2.7.1/ 192.168.213.141:/home/softwares/
[root@hadoop01 softwares]# scp -r hadoop-2.7.1/ 192.168.213.142:/home/softwares/

Run jps on the first node; if the HDFS processes are already running there is no need to start them again, otherwise start them:
sbin/start-dfs.sh

[root@hadoop02 hadoop-2.7.1]# sbin/start-yarn.sh
[root@hadoop02 hadoop-2.7.1]# jps
4066 Jps
3491 DataNode

Nothing needs to be started on the third node; its daemons come up automatically:
[root@hadoop03 hadoop-2.7.1]# jps
3585 SecondaryNameNode
4194 Jps
3524 DataNode
4071 NodeManager

 

[root@hadoop01 hadoop-2.7.1]# jps
4737 DataNode
5157 ResourceManager
4925 SecondaryNameNode
7166 Jps
4639 NameNode
5247 NodeManager

 

[root@hadoop01 hadoop-2.7.1]# sbin/mr-jobhistory-daemon.sh start historyserver
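With the history server running, the JobHistory web UI configured in mapred-site.xml above should be reachable at:
http://192.168.213.146:19888/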


Start the Web Application Proxy daemon:
[root@hadoop02 hadoop-2.7.1]# sbin/yarn-daemon.sh start proxyserver
[root@hadoop02 hadoop-2.7.1]# jps
4112 ResourceManager
3491 DataNode
4547 WebAppProxyServer
4583 Jps
4206 NodeManager

The first node now shows one more process than before:
[root@hadoop01 hadoop-2.7.1]# jps
4737 DataNode
7201 JobHistoryServer
5157 ResourceManager
7289 Jps
4925 SecondaryNameNode
4639 NameNode
5247 NodeManager

Open a browser and visit:
http://192.168.213.140:50070/
http://192.168.213.141:8088/
Both pages should open normally.
Test Hadoop with some data:
[root@hadoop01 hadoop-2.7.1]# bin/hdfs dfs -put /etc/profile /profile
[root@hadoop01 hadoop-2.7.1]# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar wordcount /profile /out1

Here /out1 is the path where the results are saved; if the output directory already exists, the job fails with a message like:
Output directory hdfs://127.0.0.1:9000/out already exists
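To inspect the result once the job finishes, or to remove an existing output directory before rerunning, the standard HDFS commands can be used (assuming the /out1 path from the command above):
[root@hadoop01 hadoop-2.7.1]# bin/hdfs dfs -cat /out1/part-r-00000
[root@hadoop01 hadoop-2.7.1]# bin/hdfs dfs -rm -r /out1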