Ubuntu 16.04 + Hadoop 2.7.2 fully distributed cluster setup (notes for future reference)


1. hosts

/etc/hosts
master
-----------------------
127.0.0.1       localhost
192.168.64.100  master
192.168.64.101  slave1
192.168.64.102  slave2
192.168.64.103  slave3
192.168.64.104  slave4
-----------------------
slave1
-----------------------
127.0.0.1       localhost
127.0.1.1       slave1
192.168.64.100  master
192.168.64.101  slave1
192.168.64.102  slave2
192.168.64.103  slave3
192.168.64.104  slave4
-----------------------
slave2
-----------------------
127.0.0.1       localhost
127.0.1.1       slave2
192.168.64.100  master
192.168.64.101  slave1
192.168.64.102  slave2
192.168.64.103  slave3
192.168.64.104  slave4
-----------------------
slave3
-----------------------
127.0.0.1       localhost
127.0.1.1       slave3
192.168.64.100  master
192.168.64.101  slave1
192.168.64.102  slave2
192.168.64.103  slave3
192.168.64.104  slave4
-----------------------
slave4
-----------------------
127.0.0.1       localhost
192.168.64.100  master
192.168.64.101  slave1
192.168.64.102  slave2
192.168.64.103  slave3
192.168.64.104  slave4
-----------------------
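A quick sanity check after editing /etc/hosts is to confirm that every name resolves to its 192.168.64.x address from every node; getent ships with Ubuntu 16.04, so a loop like this (run per node, or later via xcall) will do:

for h in master slave1 slave2 slave3 slave4; do
    getent hosts $h    # prints the address each name resolves to
done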

2. environment

hadoop-env.sh
-----------------
export JAVA_HOME=/usr/local/jvm/jdk
-----------------
slaves:
-----------------
slave1
slave2
slave3
---------------------
core-site.xml:
--------------------
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:8020</value>
    </property>
</configuration>
-------------------------
hdfs-site.xml:
------------------------
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>slave4:50090</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/name1,file:/usr/local/hadoop/tmp/dfs/name2</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/hadoop/tmp/dfs/data1,file:/usr/local/hadoop/tmp/dfs/data2</value>
    </property>
</configuration>
----------------------------------------------------------
mapred-site.xml (copy mapred-site.xml.template to this name first):
----------------------------------------------------------
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>
----------------------
yarn-site.xml:
---------------------
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
-------------------------
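Only slave1-slave3 are listed in slaves, so only they run DataNodes and NodeManagers; slave4 is kept aside for the SecondaryNameNode, matching dfs.namenode.secondary.http-address above. Hadoop will normally create the name/data directories itself on format and startup, but pre-creating them surfaces permission problems early; a sketch, assuming the same /usr/local/hadoop layout on every node:

mkdir -p /usr/local/hadoop/tmp/dfs/name1 /usr/local/hadoop/tmp/dfs/name2    # NameNode metadata (master)
mkdir -p /usr/local/hadoop/tmp/dfs/data1 /usr/local/hadoop/tmp/dfs/data2    # DataNode storage (slave1-3)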

3. namenode -format

hdfs namenode -format
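Run this once, on master only; formatting again later generates a new clusterID that the existing DataNodes will refuse to join. Once the format succeeds, the cluster can be started from master; a sketch, assuming Hadoop lives under /usr/local/hadoop as in the configs above:

/usr/local/hadoop/sbin/start-dfs.sh     # NameNode on master, DataNodes on slave1-3, SecondaryNameNode on slave4
/usr/local/hadoop/sbin/start-yarn.sh    # ResourceManager on master, NodeManagers on slave1-3
/usr/local/hadoop/sbin/mr-jobhistory-daemon.sh start historyserver    # job history UI at master:19888
jps    # on master, expect NameNode, ResourceManager, JobHistoryServer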

4. Shell commands

1. Distributing files

xsync

#!/bin/bash
# xsync: copy a file or directory to the same path on slave1-slave4 via rsync
pcount=$#
if (( pcount<1 )); then
    echo no args;
    exit;
fi

p1=$1
fname=`basename $p1`
#echo fname=$fname

pdir=`cd -P $(dirname $p1); pwd`    # absolute path of the parent directory
#echo pdir=$pdir

cuser=`whoami`
for (( host=1; host<5; host=host+1 )); do
    echo ----------------slave$host----------------
    rsync -rvl $pdir/$fname $cuser@slave$host:$pdir
done

xsync xxx
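For example, after changing the configs on master, the whole config directory can be pushed to every slave in one go (assuming the standard etc/hadoop layout under /usr/local/hadoop):

xsync /usr/local/hadoop/etc/hadoop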

2. Running commands across the cluster

xcall

#!/bin/bash
# xcall: run the given command locally on master, then on slave1-slave4 over ssh
pcount=$#
if (( pcount<1 )); then
    echo no args;
    exit;
fi

echo --------master--------
$@    # run the command locally first

for (( host=1; host<5; host=host+1 )); do
    echo ---------slave$host--------
    ssh slave$host $@
done

xcall rm -rf xxx
xcall jps
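Both xsync and xcall assume the current user can ssh from master to every slave without a password. If that is not set up yet, a minimal sketch with standard OpenSSH tooling:

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa    # key pair on master, no passphrase
for h in slave1 slave2 slave3 slave4; do
    ssh-copy-id $h    # appends the public key to ~/.ssh/authorized_keys on each slave
done

Note that xcall jps only works if the JDK's bin directory is on the PATH for non-interactive ssh sessions; if it reports "command not found", export JAVA_HOME/PATH in a file the remote shell actually sources, or symlink jps into /usr/local/bin.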
