Shell: distributing files and running commands across multiple machines


A very practical set of tools I came across online.

First, passwordless SSH must be set up between the machines.
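For reference, a minimal sketch of how that passwordless SSH setup usually looks (the hadoop user and the cdhnode1 host name are taken from the examples below; adjust to your own hosts):

# On the machine that will run the scripts, generate a key pair (accept the defaults, empty passphrase)
ssh-keygen -t rsa

# Push the public key to each target machine (repeat for cdhnode2, cdhnode3, ...)
ssh-copy-id hadoop@cdhnode1

# Verify: this should print the remote host name without asking for a password
ssh hadoop@cdhnode1 hostname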

There are three files:
1. deploy.conf
2. deploy.sh
3. runRemoteCmd.sh

Notes:
All three files are placed in the /home/hadoop/tools directory.
In deploy.conf, the first column of each line is the server's host name, and the fields after it are the tags that identify that server.

deploy.conf is the configuration file:

#### NOTES
# There is a crontab job using this config file which compacts log files and removes old log files.
# Please be careful while modifying this file unless you know exactly what that crontab does.
#hdp
cdhnode1,all,master,zk,
cdhnode2,all,slave,zk,
cdhnode3,all,slave,zk,jn,kfk,
cdhnode4,all,slave,jn,kfk,
cdhnode5,all,slave,jn,kfk,
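Note that the trailing comma on every line matters: both scripts below select hosts by grepping for ','$tag','. As a quick sanity check (a hypothetical one-liner that reuses the same pipeline as the scripts), you can list which hosts a tag resolves to:

# Should print cdhnode1 cdhnode2 cdhnode3 for the config above
cat /home/hadoop/tools/deploy.conf | grep -v '^#' | grep ',zk,' | awk -F',' '{print $1}'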

deploy.sh is the script that distributes files to multiple machines:

#!/bin/bash
#set -x

# Check arguments: source, destination and machine tag are required, config file is optional
if [ $# -lt 3 ]
then
  echo "Usage: ./deploy.sh srcFile(or Dir) destFile(or Dir) MachineTag"
  echo "Usage: ./deploy.sh srcFile(or Dir) destFile(or Dir) MachineTag confFile"
  exit 1
fi

src=$1
dest=$2
tag=$3

# Use the default config file unless a fourth argument is given
if [ 'a'$4'a' == 'aa' ]
then
  confFile=/home/hadoop/tools/deploy.conf
else
  confFile=$4
fi

if [ -f $confFile ]
then
  if [ -f $src ]
  then
    # Copy a single file to every server carrying the given tag
    for server in `cat $confFile | grep -v '^#' | grep ','$tag',' | awk -F',' '{print $1}'`
    do
      scp $src $server":"${dest}
    done
  elif [ -d $src ]
  then
    # Copy a directory recursively to every server carrying the given tag
    for server in `cat $confFile | grep -v '^#' | grep ','$tag',' | awk -F',' '{print $1}'`
    do
      scp -r $src $server":"${dest}
    done
  else
    echo "Error: No source file exists"
  fi
else
  echo "Error: Please assign config file or run deploy.sh command with deploy.conf in same directory"
fi
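The fourth argument is optional; when it is omitted the script falls back to /home/hadoop/tools/deploy.conf. A quick sketch of both call forms (the alternate config path is only an example):

# Default config file
./deploy.sh hadoop /home/hadoop/app/ slave

# Explicit config file as the fourth argument (path is hypothetical)
./deploy.sh hadoop /home/hadoop/app/ slave /home/hadoop/tools/test.conf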

runRemoteCmd.sh is the script that runs a command on multiple machines:

#!/bin/bash
#set -x

# Check arguments: command and machine tag are required, config file is optional
if [ $# -lt 2 ]
then
  echo "Usage: ./runRemoteCmd.sh Command MachineTag"
  echo "Usage: ./runRemoteCmd.sh Command MachineTag confFile"
  exit 1
fi

cmd=$1
tag=$2

# Use the default config file unless a third argument is given
if [ 'a'$3'a' == 'aa' ]
then
  confFile=/home/hadoop/tools/deploy.conf
else
  confFile=$3
fi

if [ -f $confFile ]
then
  # Run the command over ssh on every server carrying the given tag
  for server in `cat $confFile | grep -v '^#' | grep ','$tag',' | awk -F',' '{print $1}'`
  do
    echo "*******************$server***************************"
    ssh $server "source ~/.bash_profile; $cmd"
  done
else
  echo "Error: Please assign config file or run runRemoteCmd.sh command with deploy.conf in same directory"
fi
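Because any shell command can be passed as the first argument, this is also handy for quick cluster-wide checks. For example (jps is just an illustrative command, assuming a JDK is installed on the nodes):

# Print the Java processes running on every server tagged "all"
./runRemoteCmd.sh "jps" all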

Usage:

## Distribute the hadoop package to /home/hadoop/app/ on all servers tagged "slave":
deploy.sh hadoop /home/hadoop/app/ slave

## Have all servers tagged "zk" run the command /home/hadoop/app/zookeeper/bin/zkServer.sh start:
runRemoteCmd.sh "/home/hadoop/app/zookeeper/bin/zkServer.sh start" zk