Installing and Deploying a Three-Node Hadoop Cluster
1.2.1 Environment Preparation
The environment consists of three servers: one master (directory) node and two slave (content) nodes, listed below:
Table 1  Host environment

IP            Hostname
10.0.0.201    m1.hadoop
10.0.0.209    s1.hadoop
10.0.0.211    s2.hadoop
The configuration of each host is listed below:
Host: m1.hadoop
[hadoop@m1 .ssh]$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
NM_CONTROLLED="yes"
ONBOOT=yes
TYPE=Ethernet
BOOTPROTO=none
IPADDR=10.0.0.201
PREFIX=24
GATEWAY=10.0.0.254
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
HWADDR=10:50:56:AF:00:CF
[hadoop@m1 .ssh]$ cat /etc/hosts
10.0.0.201 m1.hadoop
10.0.0.209 s1.hadoop
10.0.0.211 s2.hadoop
127.0.0.1 localhost.localdomain localhost
[hadoop@m1 .ssh]$ cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=m1.hadoop
FORWARD_IPV4=yes
Host: s1.hadoop
[hadoop@s1 .ssh]$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
NM_CONTROLLED="yes"
ONBOOT=yes
HWADDR=10:50:56:AF:00:D4
TYPE=Ethernet
BOOTPROTO=none
IPADDR=10.0.0.209
PREFIX=24
GATEWAY=10.0.0.254
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
[hadoop@s1 .ssh]$ cat /etc/hosts
10.0.0.209 s1.hadoop
10.0.0.201 m1.hadoop
10.0.0.211 s2.hadoop
127.0.0.1 localhost.localdomain localhost
[hadoop@s1 .ssh]$ cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=s1.hadoop
Host: s2.hadoop
[hadoop@s2 .ssh]$ cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
NM_CONTROLLED="yes"
ONBOOT=yes
HWADDR=01:50:56:AF:00:D7
TYPE=Ethernet
BOOTPROTO=none
IPADDR=10.0.0.211
PREFIX=24
GATEWAY=10.0.0.254
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
[hadoop@s2 .ssh]$ cat /etc/hosts
10.0.0.211 s2.hadoop
10.0.0.201 m1.hadoop
10.0.0.209 s1.hadoop
127.0.0.1 localhost.localdomain localhost
[hadoop@s2 .ssh]$ cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=s2.hadoop
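Before moving on, it is worth confirming that every node can resolve its peers. A quick sketch, using the hostnames from Table 1 (`getent` consults /etc/hosts, so this catches a missing or mistyped entry):

```shell
# Check that each cluster hostname resolves (run on every host).
NODES="m1.hadoop s1.hadoop s2.hadoop"
for h in $NODES; do
  getent hosts "$h" || echo "WARNING: no /etc/hosts entry for $h"
done
```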
1.2.2 Installing Java on All Hosts
Copy the downloaded JDK package to the /home directory on each host, then install it:
[root@s1 home]# chmod u+x jdk-6u25-linux-x64-rpm.bin
[root@s1 home]# ./jdk-6u25-linux-x64-rpm.bin
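After installing on every host, point JAVA_HOME at the JDK and confirm the version. The install prefix below is an assumption based on the usual layout of Sun's JDK RPMs; verify it with `ls /usr/java` and adjust if your install differs:

```shell
# Assumed install prefix for the jdk-6u25 RPM; confirm with `ls /usr/java`.
JAVA_HOME=/usr/java/jdk1.6.0_25
export JAVA_HOME
PATH="$JAVA_HOME/bin:$PATH"
echo "JAVA_HOME=$JAVA_HOME"
# java -version   # should report java version "1.6.0_25" on each host
```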
1.2.3 SSH Configuration
Create a hadoop account on every machine, generate a key pair for the hadoop user on each, append every public key to a single authorized_keys file, and distribute that authorized_keys file to ~/.ssh/ on every host.
The detailed steps are as follows:
Host s1.hadoop:
[root@s1 ~]# useradd hadoop        # create the account
[root@s1 ~]# passwd hadoop         # set its password
[root@s1 ~]# su hadoop
[hadoop@s1 .ssh]$ ssh-keygen
[hadoop@s1 .ssh]$ chmod 700 ~/.ssh/
[hadoop@s1 .ssh]$ cat id_rsa.pub >> authorized_keys
[hadoop@s1 .ssh]$ chmod 600 authorized_keys
[hadoop@s1 .ssh]$ scp authorized_keys hadoop@s2.hadoop:/home/hadoop/.ssh/
Host s2.hadoop:
[root@s2 ~]# useradd hadoop        # create the account
[root@s2 ~]# passwd hadoop         # set its password
[root@s2 ~]# su hadoop
[hadoop@s2 .ssh]$ ssh-keygen
[hadoop@s2 .ssh]$ chmod 700 ~/.ssh/
[hadoop@s2 .ssh]$ cat id_rsa.pub >> authorized_keys
[hadoop@s2 .ssh]$ scp authorized_keys hadoop@m1.hadoop:/home/hadoop/.ssh/
Host m1.hadoop:
[root@m1 ~]# useradd hadoop        # create the account
[root@m1 ~]# passwd hadoop         # set its password
[root@m1 ~]# su hadoop
[hadoop@m1 .ssh]$ ssh-keygen
[hadoop@m1 .ssh]$ chmod 700 ~/.ssh/
[hadoop@m1 .ssh]$ cat id_rsa.pub >> authorized_keys
[hadoop@m1 .ssh]$ scp authorized_keys hadoop@s1.hadoop:/home/hadoop/.ssh/
[hadoop@m1 .ssh]$ scp authorized_keys hadoop@s2.hadoop:/home/hadoop/.ssh/
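Once authorized_keys is in place on all three hosts, each node should reach the others without a password prompt. A hypothetical check loop (BatchMode makes ssh fail rather than prompt, so a missing key shows up immediately; the live ssh call is left commented):

```shell
# Print the check command for each peer; uncomment the ssh line to run it.
PEERS="m1.hadoop s1.hadoop s2.hadoop"
for h in $PEERS; do
  echo "ssh -o BatchMode=yes $h hostname"
  # ssh -o BatchMode=yes "$h" hostname   # should print the remote hostname, no password
done
```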
1.2.4 Installing Hadoop on All Hosts
Hadoop installation and configuration follow the process in Section 1.1.4. First set up Hadoop on m1.hadoop: install the package, set access permissions, and configure the environment.
Detailed steps (m1.hadoop):
[root@m1 home]# tar xzvf hadoop-0.20.2.tar.gz
[root@m1 home]# mv hadoop-0.20.2 /usr/local
[root@m1 home]# cd /usr/local
[root@m1 local]# ls
bin etc games hadoop-0.20.2 include lib lib64 libexec sbin share src
[root@m1 local]# mv hadoop-0.20.2/ hadoop
[root@m1 local]# mkdir hadoop/Data
[root@m1 local]# mkdir hadoop/Name
[root@m1 local]# mkdir hadoop/Tmp
[root@m1 local]# chmod 777 /var/local
[root@m1 local]# ls
bin etc games hadoop include lib lib64 libexec sbin share src
[root@m1 local]# chown -R hadoop:hadoop /usr/local/hadoop/   # hand the tree to the hadoop user
[root@m1 conf]# vi core-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>fs.default.name</name>
<value>hdfs://m1.hadoop:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop/Tmp</value>
</property>
</configuration>
[root@m1 conf]# vi hdfs-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>dfs.name.dir</name>
<value>/usr/local/hadoop/Name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/usr/local/hadoop/Data</value>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
</configuration>
[root@m1 conf]# vi masters
m1.hadoop
[root@m1 conf]# vi slaves
m1.hadoop
s1.hadoop
s2.hadoop
[root@m1 conf]# vi mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>m1.hadoop:9001</value>
</property>
</configuration>
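The daemons also need JAVA_HOME set in conf/hadoop-env.sh, where the line ships commented out in 0.20.2; setting it on the master before copying the tree to the slaves saves doing it three times. The JDK path below assumes the RPM install location from Section 1.2.2:

```shell
# conf/hadoop-env.sh -- uncomment and set before distributing to the slaves
export JAVA_HOME=/usr/java/jdk1.6.0_25   # assumed RPM install path; adjust as needed
```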
[root@m1 local]# scp -r /usr/local/hadoop s1.hadoop:/usr/local/
[root@m1 local]# scp -r /usr/local/hadoop s2.hadoop:/usr/local/
(s1.hadoop):
[root@s1 local]# chmod 777 /var/local
(s2.hadoop):
[root@s2 local]# chmod 777 /var/local
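With the tree distributed, the NameNode must be formatted once and the daemons started from the master; start-all.sh reads the slave list from conf/slaves. A sketch, run as the hadoop user on m1.hadoop (the commands that touch the live cluster are left commented):

```shell
HADOOP_HOME=/usr/local/hadoop          # install path from section 1.2.4
echo "HADOOP_HOME=$HADOOP_HOME"
# One-time HDFS format -- re-running it wipes the filesystem metadata:
#   $HADOOP_HOME/bin/hadoop namenode -format
# Start NameNode, JobTracker, DataNodes, and TaskTrackers cluster-wide:
#   $HADOOP_HOME/bin/start-all.sh
```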
1.2.5 Testing Hadoop
[root@m1 conf]# jps
10209 Jps
9057 NameNode
9542 SecondaryNameNode
7217 JobTracker
10087 TaskTracker
9450 DataNode
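jps only shows that the processes are up; to confirm that all three DataNodes actually registered and that jobs run, the stock admin report and example jar can be used (jar name as shipped with Hadoop 0.20.2; the commands are commented since they need the live cluster):

```shell
HADOOP=/usr/local/hadoop/bin/hadoop
echo "using $HADOOP"
# HDFS health -- "Datanodes available: 3" is expected for this cluster:
#   $HADOOP dfsadmin -report
# Smoke-test MapReduce with the bundled pi estimator:
#   $HADOOP jar /usr/local/hadoop/hadoop-0.20.2-examples.jar pi 4 100
```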