Shell One-Click Hadoop Cluster Deployment Manual v1.0.0
Source: Internet · Editor: 程序博客网 · Published: 2024/05/31 18:50
I. Requirements Analysis for One-Click Deployment
1. Install the OS and set up the base environment
Install CentOS on each node, install the JDK, and configure passwordless SSH login.
2. Deploy Hadoop to every cluster node with one command
Download the Hadoop, HBase, and ZooKeeper packages to the master node, then push them to each slave node and extract them in one step.
3. Synchronize the core configuration files with one command
After configuring and tuning the core parameters on the master node, synchronize the configuration files to each slave node.
4. Start/stop/restart services with one command
Start the ZooKeeper, Hadoop, and HBase services from the master node with a single command.
II. One-Click Deployment Operating Instructions
1. Detailed steps
(1) On each node of the virtual cluster, install CentOS, install the JDK, and configure passwordless SSH login; see the "Hadoop Cluster Configuration Handbook" for details.
(2) Run the one-click deployment script:
>chmod +x obd_install
>./obd_install
(3) Run the one-click configuration-sync script (if the configuration has changed):
>chmod +x obd_update_conf
>./obd_update_conf
(4) Start/stop/restart services with one command:
>chmod +x obd_service
>./obd_service start-all
#start-hadoop, start-hbase, start-zookeeper
#restart-all, restart-hadoop, restart-hbase, restart-zookeeper
#stop-all, stop-hadoop, stop-hbase, stop-zookeeper
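The passwordless SSH login assumed in step (1) can be sketched as follows. This is a minimal local illustration, not the procedure from the referenced handbook; the key directory is a temporary stand-in for ~/.ssh, and the `ssh-copy-id` target is a hypothetical example.

```shell
# Sketch of passwordless-SSH setup (assumption: RSA keys, empty passphrase).
keydir=$(mktemp -d)                               # stand-in for ~/.ssh
ssh-keygen -t rsa -N "" -f "$keydir/id_rsa" -q    # generate the key pair
cat "$keydir/id_rsa.pub" >> "$keydir/authorized_keys"
chmod 600 "$keydir/authorized_keys"               # sshd rejects loose permissions
# On a real cluster, install the public key on every node instead, e.g.:
#   ssh-copy-id hadoop@Slave21
```

Repeating the public-key installation for every node is what lets the scripts below run `scp` and `ssh` without password prompts.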
2. Status after a successful deployment
III. Script Implementation and Analysis
1. Development environment:
(1) Software versions: CentOS 6.4, JDK 1.6.0_25, Hadoop-1.0.3, HBase-0.94.1-security, ZooKeeper-3.4.3
(2) User information: three test nodes (Master2, Slave21, Slave22); the target user on each is hadoop, and the target install path on each is /home/hadoop
2. Script code and comments
(1) Shell script that deploys Hadoop to every cluster node
Filename: obd_install
#!/bin/bash
tar_name=obd.tar.gz
tar_path=/home/hadoop
name=hadoop
hosts="Slave21 Slave22"
echo "####### batch scp ########"
for host in $hosts;do
echo "------ scp to $host ! ------"
scp $tar_name $name@$host:$tar_path
done
echo "####### batch extract ########"
for host in $hosts;do
echo "------ extract in $host ! ------"
ssh $host "cd $tar_path && tar -xzf $tar_name"
done
Note: obd.tar.gz contains the following files:
(hadoop-1.0.3, hbase-0.94.1-security, zookeeper-3.4.3, temp)
The value in the file temp/zookeeper/data/myid must be set manually on each node, and it must match the number after "server." in zookeeper-3.4.3/conf/zoo.cfg.
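The myid rule above can be sketched as follows. The server list and the quorum ports are assumptions (2888/3888 are ZooKeeper's defaults), not values taken from the actual zoo.cfg.

```shell
# Sketch: each server.N line in zoo.cfg pairs a number N with a host;
# that same N must be written into that host's myid file.
servers="Master2 Slave21 Slave22"     # assumed quorum members
i=1
for h in $servers; do
  echo "server.$i=$h:2888:3888"       # entry expected in zoo.cfg
  # On host $h one would then run:
  #   echo $i > /home/hadoop/temp/zookeeper/data/myid
  i=$((i+1))
done
```

If a host's myid does not match its server.N number, that ZooKeeper instance will fail to join the quorum.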
(2) Shell script that synchronizes the core configuration files with one command
Filename: obd_update_conf
#!/bin/bash
ha_path=/home/hadoop/hadoop-1.0.3
hb_path=/home/hadoop/hbase-0.94.1-security
zk_path=/home/hadoop/zookeeper-3.4.3
name=hadoop
hosts="Slave21 Slave22"
for host in $hosts;do
echo "########### update $host conf ###########"
scp -r $ha_path/conf $name@$host:$ha_path
scp -r $hb_path/conf $name@$host:$hb_path
scp -r $zk_path/conf $name@$host:$zk_path
done
echo "########## end update! ##############"
Note: after the configuration parameters are changed on the master node, all of the core configuration files are synchronized in full; selective synchronization of only the changed files has not yet been implemented.
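The missing selective sync could look like the sketch below. Two local temporary directories stand in for the master and slave conf directories, and `cp` stands in for the remote `scp`; the file names and values are made up for illustration.

```shell
# Sketch (assumption): copy only configs whose content changed,
# by comparing files before copying, instead of pushing everything.
src=$(mktemp -d); dst=$(mktemp -d)    # stand-ins for master/slave conf dirs
echo "dfs.replication=2" > "$src/hdfs-site.xml"
echo "dfs.replication=3" > "$dst/hdfs-site.xml"   # out of date on the "slave"
echo "fs.default.name=hdfs://Master2:9000" > "$src/core-site.xml"
cp "$src/core-site.xml" "$dst/core-site.xml"      # already in sync
synced=0
for f in "$src"/*; do
  base=$(basename "$f")
  if ! cmp -s "$f" "$dst/$base"; then
    cp "$f" "$dst/$base"              # real cluster: scp "$f" hadoop@$host:...
    synced=$((synced+1))
  fi
done
```

Only the changed hdfs-site.xml is copied; on a real cluster, `rsync` over SSH would achieve the same effect per host.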
(3) Shell script that starts/stops/restarts services with one command
Filename: obd_service
#!/bin/bash
COMMAND=$1
EXTRA=$2
HADOOP_PATH=/home/hadoop/hadoop-1.0.3
HBASE_PATH=/home/hadoop/hbase-0.94.1-security
ZK_PATH=/home/hadoop/zookeeper-3.4.3
HOSTS="Master2 Slave21 Slave22"
#------------- restart ----------------
if [ "$COMMAND" = "restart-all" ]; then
for host in $HOSTS;do
ssh $host "$ZK_PATH/bin/zkServer.sh stop"
done
$HBASE_PATH/bin/stop-hbase.sh
$HADOOP_PATH/bin/stop-all.sh
for host in $HOSTS;do
ssh $host "$ZK_PATH/bin/zkServer.sh start"
done
$HADOOP_PATH/bin/start-all.sh
$HBASE_PATH/bin/start-hbase.sh
elif [ "$COMMAND" = "restart-hadoop" ];then
$HBASE_PATH/bin/stop-hbase.sh
$HADOOP_PATH/bin/stop-all.sh
$HADOOP_PATH/bin/start-all.sh
$HBASE_PATH/bin/start-hbase.sh
elif [ "$COMMAND" = "restart-hbase" ];then
$HBASE_PATH/bin/stop-hbase.sh
$HBASE_PATH/bin/start-hbase.sh
elif [ "$COMMAND" = "restart-zookeeper" ];then
for host in $HOSTS;do
ssh $host "$ZK_PATH/bin/zkServer.sh restart"
done
#------------- start ----------------
elif [ "$COMMAND" = "start-all" ]; then
for host in $HOSTS;do
ssh $host "$ZK_PATH/bin/zkServer.sh start"
echo "##### start $host zkServer succeed! #####"
done
$HADOOP_PATH/bin/start-all.sh
$HBASE_PATH/bin/start-hbase.sh
elif [ "$COMMAND" = "start-hadoop" ];then
$HADOOP_PATH/bin/start-all.sh
elif [ "$COMMAND" = "start-hbase" ];then
$HBASE_PATH/bin/start-hbase.sh
elif [ "$COMMAND" = "start-zookeeper" ];then
for host in $HOSTS;do
ssh $host "$ZK_PATH/bin/zkServer.sh start"
echo "##### start $host zkServer succeed! #####"
done
#------------- stop ----------------
elif [ "$COMMAND" = "stop-all" ]; then
$HBASE_PATH/bin/stop-hbase.sh
$HADOOP_PATH/bin/stop-all.sh
for host in $HOSTS;do
ssh $host "$ZK_PATH/bin/zkServer.sh stop"
done
elif [ "$COMMAND" = "stop-hadoop" ]; then
$HADOOP_PATH/bin/stop-all.sh
elif [ "$COMMAND" = "stop-hbase" ]; then
$HBASE_PATH/bin/stop-hbase.sh
elif [ "$COMMAND" = "stop-zookeeper" ]; then
echo "### closing hbase service! ###"
$HBASE_PATH/bin/stop-hbase.sh
echo "### closing zookeeper service! ###"
for host in $HOSTS;do
ssh $host "$ZK_PATH/bin/zkServer.sh stop"
done
#------------- other command -----------
elif [ "$COMMAND" = "zookeeper" ];then
for host in $HOSTS;do
echo "##### result in $host ! #####"
ssh $host "$ZK_PATH/bin/zkServer.sh $EXTRA"
done
#------------- help ----------------
else
echo "---------- command list ---------------"
echo "restart-all | restart-hadoop | restart-hbase | restart-zookeeper"
echo "start-all | start-hadoop | start-hbase | start-zookeeper"
echo "stop-all | stop-hadoop | stop-hbase | stop-zookeeper"
fi
fi
Note: starting the ZooKeeper service relies on remote SSH commands; when the firewall is enabled on a slave node, this does not yet work correctly.
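One way to address the firewall issue on CentOS 6 is to open ZooKeeper's ports in iptables on each node rather than disabling the firewall outright. The port numbers below are ZooKeeper's defaults (client port 2181, quorum port 2888, leader-election port 3888) and assume an unmodified zoo.cfg; adjust them if the configuration differs.

```
# Lines to add to /etc/sysconfig/iptables on each node (before the
# final REJECT rule), then reload with: service iptables restart
-A INPUT -p tcp --dport 2181 -j ACCEPT   # ZooKeeper client port
-A INPUT -p tcp --dport 2888 -j ACCEPT   # quorum/follower port
-A INPUT -p tcp --dport 3888 -j ACCEPT   # leader-election port
```

With these ports open, the quorum can form and `zkServer.sh` invoked over SSH should behave the same as when run locally.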
IV. Summary
Through development and testing, one-click installation and synchronization via shell scripts has been achieved in an initial form; the scripts are not yet polished, but the basic functions work.
Owing to time pressure, "one-click installation of the base environment" was not developed this time, since it involves many fiddly technical steps and will be used infrequently later; "one-click synchronization of configuration files" is used most heavily during testing and was therefore the focus of development.