Adding a datanode online with a Fabric script


Environment:

Before the expansion, my cluster consisted of three machines and was already up and running.

Hadoop version: hadoop-2.7.3

192.168.40.140    hd1         NameNode

192.168.40.144    hd4         the datanode machine about to be added

Step 1: Exchange the root user's SSH keys between the NameNode and the datanode, so that no password prompts appear while the script runs.

Generate a public/private key pair:

ssh-keygen -t rsa

Copy the public key to the target host:

ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.40.140

Run the commands above on both machines (changing the target host IP accordingly).
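Before moving on, it may be worth confirming that passwordless login actually works in both directions, for example (using the IPs from this environment):

ssh root@192.168.40.144 hostname

If the key copy succeeded, this prints the remote hostname without asking for a password.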

Step 2: The script, which I named amplify_datanode.py:

from fabric.api import run, local, env, roles
import os
import sys
import getopt

# Role definitions: the master (NameNode) and the datanode being added
env.roledefs = {'master': ['192.168.40.140'], 'datanode': ['192.168.40.144']}
env.hosts = ['192.168.40.140']

@roles('master')
def master(local_ip, hostname):
    # On the NameNode: register the new node and push the JDK and Hadoop to it
    run('echo %s %s >> /etc/hosts' % (local_ip, hostname))
    run('echo %s >> /usr/local/hadoop/hadoop-2.7.3/etc/hadoop/slaves' % hostname)
    run('scp -r /usr/jdk1.8.0_131 %s:/usr' % hostname)
    run('scp -r /usr/local/hadoop %s:/usr/local' % hostname)

def datanode(local_ip, hostname):
    # On the new node (run locally): create the hadoop user and set up its SSH access
    local('useradd hadoop')
    local('echo "hadoop" | passwd --stdin hadoop')
    local('mkdir /home/hadoop/.ssh')
    local('chown -R hadoop:hadoop /home/hadoop')
    local('cp ~/.ssh/authorized_keys /home/hadoop/.ssh')
    local('chown -R hadoop:hadoop /usr/local/hadoop')
    # Append the JDK environment variables to /etc/profile
    local('echo input /etc/profile')
    local('echo export JAVA_HOME=/usr/jdk1.8.0_131 >> /etc/profile')
    local('echo export JAVA_BIN=/usr/jdk1.8.0_131/bin >> /etc/profile')
    local('echo export PATH=$PATH:/usr/jdk1.8.0_131/bin >> /etc/profile')
    local('echo export CLASSPATH=.:/usr/jdk1.8.0_131/lib/dt.jar:/usr/jdk1.8.0_131/lib/tools.jar >> /etc/profile')
    local('echo export JAVA_HOME JAVA_BIN PATH CLASSPATH >> /etc/profile')
    local('source /etc/profile')
    # Set the hostname and fill /etc/hosts with every node in the cluster
    local('echo write into /etc/network')
    local('echo NETWORKING=yes > /etc/sysconfig/network')
    local('echo HOSTNAME=%s >> /etc/sysconfig/network' % hostname)
    local('echo input /etc/hosts')
    local('echo 192.168.40.140 hd1 >> /etc/hosts')
    local('echo 192.168.40.141 hd2 >> /etc/hosts')
    local('echo 192.168.40.142 hd3 >> /etc/hosts')
    local('echo %s %s >> /etc/hosts' % (local_ip, hostname))
    # Set up the hadoop user's environment and start the DataNode daemon
    local('echo input .bash_profile')
    local('echo export JAVA_HOME=/usr/jdk1.8.0_131 >> /home/hadoop/.bash_profile')
    local('echo export HADOOP_HOME=/usr/local/hadoop/hadoop-2.7.3 >> /home/hadoop/.bash_profile')
    local('echo export PATH=$PATH:/usr/jdk1.8.0_131/bin:/usr/local/hadoop/hadoop-2.7.3/bin >> /home/hadoop/.bash_profile')
    local('echo export JAVA_HOME HADOOP_HOME PATH >> /home/hadoop/.bash_profile')
    local('source /home/hadoop/.bash_profile')
    local('/usr/local/hadoop/hadoop-2.7.3/sbin/hadoop-daemon.sh start datanode')

def do_work(local_ip, hostname):
    # NameNode-side steps run over SSH; the datanode steps run locally on the new node
    master(local_ip, hostname)
    datanode(local_ip, hostname)
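One thing to be aware of: when master() is called directly from do_work(), Fabric 1.x does not apply the @roles('master') decorator, and the run() calls simply go to the host currently taken from env.hosts (which here is the NameNode anyway, so the result is the same). If you prefer the role definitions to be applied explicitly, a minimal sketch using Fabric's execute() could look like this (same task names as above):

from fabric.api import execute

def do_work(local_ip, hostname):
    # execute() runs master() on every host listed under the 'master' role
    execute(master, local_ip, hostname)
    # the datanode steps still run with local() on the machine where fab is invoked
    datanode(local_ip, hostname)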
Step 3: Run the script:

fab -f amplify_datanode.py do_work:local_ip=192.168.40.144,hostname=hd4

Wait for the script to finish; at that point the new node should basically be ready.
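Optionally, you can verify on hd4 that the daemon actually came up; the JDK's jps tool should now list a DataNode process:

jps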

Step 4: Refresh the node list on the NameNode:

...bin/hdfs dfsadmin -refreshNodes

...sbin/start-balancer.sh
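As a final check, a cluster report on the NameNode should now list hd4 among the live datanodes:

...bin/hdfs dfsadmin -report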
