Setting Up Hadoop 2.7.1 on Virtual Machines (CentOS 6.5)


First, install three virtual machines from CentOS-6.5-x86_64-bin-DVD1.iso (the 64-bit edition is required):
192.168.0.6 master
192.168.0.7 slaver1
192.168.0.8 slaver2
Install the three VMs one at a time if you can, and allocate generous resources up front to cut installation time; on a well-specced host machine this is a non-issue.
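The rest of the guide assumes each VM keeps the fixed address listed above. A minimal sketch of pinning the address on CentOS 6, assuming the interface is eth0 and a hypothetical gateway of 192.168.0.1 (substitute your own network's values):

vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.0.6       # use .7 on slaver1 and .8 on slaver2
NETMASK=255.255.255.0
GATEWAY=192.168.0.1      # assumption; use your actual gateway
service network restart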

Next, configure the SSH service, following the steps in the order given.
master:

vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.6     master
192.168.0.7     slaver1
192.168.0.8     slaver2
# Do not also map master to 127.0.0.1; that would bind the NameNode to loopback.
# Save and exit.
ssh-keygen -t rsa
cd /root/.ssh
cat id_rsa.pub >> authorized_keys
chmod 600 authorized_keys
cd ..
chmod 700 -R .ssh
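If passwordless login still prompts for a password later on, permissions are the usual culprit: sshd silently ignores authorized_keys when ~/.ssh is looser than 700 or the file looser than 600, which is exactly what the chmod lines above guard against. A quick check:

ls -ld /root/.ssh                  # expect drwx------
ls -l /root/.ssh/authorized_keys   # expect -rw-------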

slaver1 and slaver2 follow the same steps; taking slaver1 as the example:

hostname slaver1
vim /etc/sysconfig/network
HOSTNAME=slaver1
# Save and exit.
vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.0.6     master
192.168.0.7     slaver1
192.168.0.8     slaver2
# The three cluster entries are needed here too, so the slaves can resolve master.
# Save and exit.
ssh-keygen -t rsa
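Note that the hostname command takes effect only for the current boot; the /etc/sysconfig/network edit is what makes it survive a reboot. To confirm both took:

hostname                               # should print slaver1
grep HOSTNAME /etc/sysconfig/network   # should show HOSTNAME=slaver1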

Have you configured slaver2 as well? Don't forget it!

Next, on master:

cd /root/.ssh
scp id_rsa.pub root@192.168.0.7:~/.ssh/
# Note: answer yes to "Are you sure you want to continue connecting (yes/no)?"
# Note: at "root@192.168.0.7's password:" type the password and press Enter.
scp id_rsa.pub root@192.168.0.8:~/.ssh/
# Note: same prompts as above for 192.168.0.8.
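As an aside, stock CentOS 6 also ships ssh-copy-id, which appends the key to the remote authorized_keys and sets sane permissions in one step; a sketch that would replace this scp plus the cat/chmod sequence on each slave:

ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.0.7
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.0.8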

Then on slaver1 and slaver2 (slaver1 shown):

cd /root/.ssh
cat id_rsa.pub >> ~/.ssh/authorized_keys   # the file copied over from master; the key is RSA, so id_rsa.pub, not id_dsa.pub
chmod 600 authorized_keys
cd ..
chmod 700 -R .ssh

With the steps above complete, master can now ssh into slaver1 and slaver2 without being asked for a password:

[root@master .ssh]# ssh 192.168.0.7
Last login: Tue Oct 10 11:25:06 2017
[root@slaver1 ~]#
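A quick way to confirm all three passwordless logins at once from master (master's own key was appended to its authorized_keys earlier, which start-dfs.sh relies on when it ssh-es to every node):

for h in master slaver1 slaver2; do ssh $h hostname; done
# expected output: master, slaver1, slaver2, with no password prompts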

The following steps are needed on all three virtual machines:

# Copy jdk-8u65-linux-x64.tar.gz into the VM.
# (With VMware, you can drag the archive from Windows straight into the VM.)
mv jdk-8u65-linux-x64.tar.gz /home
cd /home
tar xvzf jdk-8u65-linux-x64.tar.gz
# Wait for extraction to finish.

# Likewise, copy hadoop-2.7.1_64bit.tar.gz into the VM.
mv hadoop-2.7.1_64bit.tar.gz /home
cd /home
tar xvzf hadoop-2.7.1_64bit.tar.gz
# Wait for extraction to finish.

# Configure the Java environment variables.
vim /etc/profile
# In vim, press i to enter insert mode, then append at the end of the file:
export JAVA_HOME=/home/jdk1.8.0_65
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_HOME=/home/hadoop-2.7.1
export PATH=$PATH:$HADOOP_HOME/bin
# Press Esc, type :wq (save and quit), press Enter.

# Reload the profile and verify.
source /etc/profile
java -version   # version output means the environment variables are set correctly

# Next, modify the Hadoop configuration files.
cd /home/hadoop-2.7.1/etc/hadoop   # every file to modify lives in this directory

# Set JAVA_HOME in hadoop-env.sh:
vim hadoop-env.sh
export JAVA_HOME=/home/jdk1.8.0_65
# Save and exit.

# Edit slaves: delete localhost and add the two slave hostnames.
vim slaves
slaver1
slaver2
# Save and exit.

# Edit core-site.xml (for a block this long you can open the file with gedit and paste instead):
vim core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:8020</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/opt/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.hdfs.impl</name>
    <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
    <description>The FileSystem for hdfs: uris.</description>
  </property>
  <property>
    <name>fs.file.impl</name>
    <value>org.apache.hadoop.fs.LocalFileSystem</value>
    <description>The FileSystem for file: uris.</description>
  </property>
</configuration>
# Save and exit.

# Edit hdfs-site.xml:
vim hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
# Save and exit.

# Edit yarn-site.xml:
vim yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
# Save and exit.

# Edit mapred-site.xml (created from the bundled template):
cp mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>
# Save and exit.
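Rather than repeating every edit on each node, you can make the changes once on master and copy them out; a sketch, assuming the same /home/hadoop-2.7.1 path on all three machines as used above:

scp -r /home/hadoop-2.7.1/etc/hadoop root@slaver1:/home/hadoop-2.7.1/etc/
scp -r /home/hadoop-2.7.1/etc/hadoop root@slaver2:/home/hadoop-2.7.1/etc/
scp /etc/profile root@slaver1:/etc/profile   # only if the profile edit was made on master alone
scp /etc/profile root@slaver2:/etc/profile
# run source /etc/profile on each slave afterwards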

Once all of the above is done, start Hadoop from the master VM:

cd /home/hadoop-2.7.1
bin/hdfs namenode -format
# Wait for it to finish; "successfully formatted" in the output means the format succeeded.
sbin/start-dfs.sh
sbin/start-yarn.sh
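To check that the daemons actually came up, jps (bundled with the JDK installed above) should list roughly the following on each machine; PIDs will differ:

jps
# on master, expect: NameNode, SecondaryNameNode, ResourceManager, Jps
# on slaver1/slaver2, expect: DataNode, NodeManager, Jps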

You can then browse to http://master:8088/cluster (the YARN ResourceManager web UI) to view the cluster.
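The HDFS NameNode web UI is likewise available at http://master:50070 in Hadoop 2.x. For a functional smoke test, the stock 2.7.1 distribution ships an examples jar; the exact jar name may differ in a rebuilt tarball like the hadoop-2.7.1_64bit.tar.gz used here:

cd /home/hadoop-2.7.1
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar pi 2 10
# a successful run prints an estimate of pi and exercises HDFS + YARN end to end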
