Learning Hadoop 2.7.0: Fully Distributed Cluster Setup

Source: Internet | Editor: 程序博客网 | Date: 2024/06/07 02:22


Reference

http://www.07net01.com/2015/07/874408.html

Create two virtual machines in VMware

Create two virtual machines in VMware:
hadoop-master
hadoop-node1
Linux version: Red Hat Enterprise Linux Server 6.4 (rhel-server-6.4-x86_64-dvd)

Setting up the master and node1 machine environments

Disable the firewall

Disable the firewall on both the master and node1 VMs.
Open a terminal and run:
service iptables stop     (stop now; reverts after a reboot)
chkconfig iptables off    (disable permanently, at boot)
service iptables status   (check the firewall status)

Disable SELinux

Run:
vim /etc/sysconfig/selinux
Press i to enter insert mode and set:
SELINUX=disabled
Press Esc to leave insert mode, then type :wq! and press Enter to save and quit (or press Shift+Z+Z).
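As an alternative to editing the file by hand, the same change can be made non-interactively. This is a sketch; the file path and key layout assume the stock RHEL 6 configuration:

```shell
# Replace the SELINUX= line in place; takes effect after the next reboot
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux

# Optionally switch the running system to permissive mode right away
setenforce 0
```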

Configure the host IPs

The IPs depend on your own environment; this article uses:
master: 192.168.20.141
node1: 192.168.20.142
Right-click the network icon in the desktop panel and set the address (screenshots omitted).
After the settings are done, restart the network:
service network restart
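If you prefer to configure the address from the command line instead of the desktop applet, the static settings live in an interface config file. This is a sketch only; the interface name eth0, the netmask, and the gateway are assumptions to adapt to your own network:

```
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.20.141
NETMASK=255.255.255.0
GATEWAY=192.168.20.1
```

On node1, use IPADDR=192.168.20.142; restart the network afterwards with service network restart.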

Set the VM network adapter to bridged mode (screenshot omitted).

Configure the hostname

Hostnames can be anything; this article uses:
master: hadoop-master
node1: hadoop-node1
Edit the file:
vi /etc/sysconfig/network
and change the hostname to hadoop-master.
Note that this file is only read at boot; running source /etc/sysconfig/network merely sets the $HOSTNAME variable in the current shell. To change the hostname of the running system immediately, run hostname hadoop-master (or reboot).
Check:
echo $HOSTNAME
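The edit can also be scripted. A sketch, assuming the stock RHEL 6 file with an existing HOSTNAME= line:

```shell
# Persist the hostname (the file is read at boot on RHEL 6)
sed -i 's/^HOSTNAME=.*/HOSTNAME=hadoop-master/' /etc/sysconfig/network

# Apply it to the running system immediately, without a reboot
hostname hadoop-master
```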
Configure the IP-to-hostname mapping

This article uses the following mapping:
master: 192.168.20.141 hadoop-master
node1: 192.168.20.142 hadoop-node1
Edit the hosts file:
vi /etc/hosts
and add:
192.168.20.141 hadoop-master
192.168.20.142 hadoop-node1
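The two entries above can be appended in one step (same content as the manual edit; run on both machines):

```shell
# Append the cluster's IP-to-hostname mappings to /etc/hosts
cat >> /etc/hosts <<'EOF'
192.168.20.141 hadoop-master
192.168.20.142 hadoop-node1
EOF
```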

Set up passwordless SSH login

Generate a key pair (note: ssh-keygen is a single word, with no space in it):
ssh-keygen -t rsa
Press Enter at every prompt (screenshot of a successful run omitted).
Authorize passwordless login to the local machine:
ssh-copy-id 192.168.20.141
Authorize master to log in to node1 without a password:
ssh-copy-id 192.168.20.142
Test the setup:
ssh 192.168.20.142
You enter the password once while running ssh-copy-id; after that, ssh no longer prompts for it.
Run ifconfig and you will see that the current IP is now 192.168.20.142.
Type exit to close the SSH session.
Note: the same passwordless-login setup is needed on the node1 VM as well.
Reboot: reboot
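For scripted setups, the key generation can also be done without interactive prompts. A sketch; it assumes the root password is known and that sshd still allows password authentication when the keys are pushed:

```shell
# Generate an RSA key pair with an empty passphrase, no prompts
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa -q

# Push the public key to both machines (asks for each password once)
for host in 192.168.20.141 192.168.20.142; do
    ssh-copy-id root@"$host"
done
```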

Set up the Java environment

Set up Java on both the master and node1 VMs.
Upload the RPM file (Java version: jdk-7u80-linux-x64) and install it:
rpm -ivh jdk-7u80-linux-x64.rpm
Java is installed under /usr/java.
Edit /etc/profile and append at the end:

export JAVA_HOME=/usr/java/jdk1.7.0_80
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

Apply the changes:
source /etc/profile

Hadoop setup on the master VM


Upload the pre-built Hadoop archive to /usr/local/bigdata/tools on the master VM; create the directory if it does not exist.
Under the bigdata directory, create the two subdirectories tools and soft.
Extract the archive:
tar -zxf hadoop-2.7.0.tar.gz -C ../soft/
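The steps above can be run as one sequence (a sketch, using the paths from this article):

```shell
# Create the working directories, then unpack the archive into soft/
mkdir -p /usr/local/bigdata/tools /usr/local/bigdata/soft
cd /usr/local/bigdata/tools
tar -zxf hadoop-2.7.0.tar.gz -C ../soft/
```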
Then edit the corresponding configuration files under hadoop-2.7.0/etc/hadoop:
core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop-master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/bigdata/soft/hadoop-2.7.0/data/tmp</value>
    </property>
</configuration>

hadoop-env.sh
Find the export JAVA_HOME line and set it to the JAVA_HOME configured earlier:

export JAVA_HOME=/usr/java/jdk1.7.0_80

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop-master:9001</value>
    </property>
</configuration>

mapred-site.xml.template
Rename this file to mapred-site.xml, then edit it:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop-master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop-master:19888</value>
    </property>
</configuration>

slaves
Replace the contents with the hostnames of the slave nodes:

hadoop-node1

yarn-site.xml

<?xml version="1.0"?>
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>hadoop-master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>hadoop-master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>hadoop-master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>hadoop-master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>hadoop-master:8088</value>
    </property>
</configuration>

Hadoop setup on the node1 VM

Simply copy the hadoop directory already configured on master over to node1.
Make sure the corresponding target directory exists on node1 first.
scp -r hadoop-2.7.0 root@hadoop-node1:/usr/local/bigdata/soft/

Start the cluster

Before the very first start, format the NameNode (first run only):
bin/hdfs namenode -format
Then, from the Hadoop directory, run:
sbin/start-dfs.sh
sbin/start-yarn.sh
Run jps to check which daemons came up.
master: NameNode, SecondaryNameNode, and ResourceManager should be running (screenshot omitted).
node1: DataNode and NodeManager should be running (screenshot omitted).
Open the management web UI to see the corresponding node information:
http://192.168.20.141:50070/

Verify the setup

Create a directory:
bin/hdfs dfs -mkdir /tmp
The new directory can be seen in the NameNode web UI (screenshots omitted) or listed with:
bin/hdfs dfs -ls /
Copy a file in and check that the corresponding file shows up:
bin/hdfs dfs -copyFromLocal /etc/profile /tmp
