Hadoop Programming Beginner Study Notes - 1: Installing and Running Hadoop


I. Base Environment

|           | Host machine  | VM 1              | VM 2              | VM 3              |
|-----------|---------------|-------------------|-------------------|-------------------|
| Hostname  | -             | hadoop.master     | hadoop.slave01    | hadoop.slave02    |
| IP        | 192.168.206.1 | 192.168.206.120   | 192.168.206.121   | 192.168.206.122   |
| OS        | Win7 64-bit   | CentOS 6.4 64-bit | CentOS 6.4 64-bit | CentOS 6.4 64-bit |
| CPU cores | i5, 4 cores   | 1                 | 1                 | 1                 |
| RAM       | 8 GB          | 2 GB              | 2 GB              | 2 GB              |
| Disk      | 1 TB          | 20 GB             | 20 GB             | 20 GB             |

II. Installation and Configuration

1. On all three virtual machines, create a hadoop group and a hadoop user. Afterwards, verify with id hadoop; on my machines it reports uid=500(hadoop) gid=500(hadoop) groups=500(hadoop).

su -
groupadd hadoop
useradd -g hadoop hadoop
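The notes skip it, but the scp commands in step 2 authenticate with the hadoop user's password before key-based login is in place, so the account needs a password on each machine:

passwd hadoop    # run as root on all three VMs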

2. Configure passwordless SSH login between the three virtual machines hadoop.master, hadoop.slave01, and hadoop.slave02
    1) Add the following three lines to /etc/hosts (on all three machines):

192.168.206.120  hadoop.master
192.168.206.121  hadoop.slave01
192.168.206.122  hadoop.slave02

    2) Generate the authorized_keys file (a quick verification follows these steps):

- On hadoop.master, in /home/hadoop, run ssh-keygen -t rsa and press Enter at every prompt; this produces the two files id_rsa and id_rsa.pub under ~/.ssh.
- On hadoop.master, in /home/hadoop, run cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys to create the authorized_keys file under ~/.ssh.
- On hadoop.slave01 and hadoop.slave02, likewise run ssh-keygen -t rsa in /home/hadoop, pressing Enter at every prompt, to get id_rsa and id_rsa.pub under ~/.ssh.
- On hadoop.master, in /home/hadoop, run scp ~/.ssh/authorized_keys hadoop@hadoop.slave01:~/.ssh to copy authorized_keys to hadoop.slave01:/home/hadoop/.ssh/authorized_keys.
- On hadoop.slave01, in /home/hadoop, run cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys.
- On hadoop.slave01, in /home/hadoop, run scp ~/.ssh/authorized_keys hadoop@hadoop.slave02:~/.ssh to copy authorized_keys to hadoop.slave02:/home/hadoop/.ssh/authorized_keys.
- On hadoop.slave02, in /home/hadoop, run cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys.
- On hadoop.slave02, in /home/hadoop, run scp ~/.ssh/authorized_keys hadoop@hadoop.slave01:~/.ssh to copy the now-complete authorized_keys back to hadoop.slave01:/home/hadoop/.ssh/authorized_keys.
- On hadoop.slave02, in /home/hadoop, run scp ~/.ssh/authorized_keys hadoop@hadoop.master:~/.ssh to copy it to hadoop.master:/home/hadoop/.ssh/authorized_keys.
- On hadoop.master, hadoop.slave01, and hadoop.slave02, run chmod 700 ~/.ssh and chmod 600 ~/.ssh/authorized_keys.
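A quick check, run as the hadoop user on hadoop.master: each command below should print the remote hostname without prompting for a password (the first connection to each host will still ask you to confirm its key fingerprint):

ssh hadoop@hadoop.slave01 hostname
ssh hadoop@hadoop.slave02 hostname
ssh hadoop@hadoop.master hostname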
 
3. In the home directory (/home/hadoop) on hadoop.master, create a directory named cloud, unpack hadoop-2.6.0.tar.gz into /home/hadoop/cloud with tar, and rename the unpacked directory to hadoop with mv, so the final path is /home/hadoop/cloud/hadoop.
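A minimal command sequence for this step, assuming the tarball was downloaded to /home/hadoop:

cd /home/hadoop
mkdir cloud
tar -zxf hadoop-2.6.0.tar.gz -C cloud    # unpacks to cloud/hadoop-2.6.0
mv cloud/hadoop-2.6.0 cloud/hadoop       # final path: /home/hadoop/cloud/hadoop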
4. Configure core-site.xml
The configuration files in steps 4-7 live under /home/hadoop/cloud/hadoop/etc/hadoop. (fs.default.name is the deprecated spelling of fs.defaultFS; both still work in Hadoop 2.6.0.)
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://hadoop.master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/cloud/hdtmp</value>
    </property>
</configuration>
5. Configure hdfs-site.xml
(Note: with only two DataNodes in this cluster, a dfs.replication of 3 can never actually be met; 2 would match the hardware.)
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hadoop/cloud/hdname</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/home/hadoop/cloud/hddata</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/cloud/hdtmp/</value>
    </property>
</configuration>
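The NameNode format and the DataNodes create these directories on demand, but pre-creating them as the hadoop user on the relevant machines surfaces permission problems early; a sketch:

mkdir -p /home/hadoop/cloud/hdname /home/hadoop/cloud/hddata /home/hadoop/cloud/hdtmp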
6. Configure mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
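The Hadoop 2.6.0 tarball ships only mapred-site.xml.template, so the file is usually created by copying the template before editing:

cd /home/hadoop/cloud/hadoop/etc/hadoop
cp mapred-site.xml.template mapred-site.xml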
7. Configure yarn-site.xml
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop.master</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>hadoop.master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>hadoop.master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>hadoop.master:8032</value>
    </property>
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>${hadoop.tmp.dir}/nodemanager/local</value>
    </property>
    <property>
        <name>yarn.nodemanager.address</name>
        <value>0.0.0.0:8034</value>
    </property>
    <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>${hadoop.tmp.dir}/nodemanager/remote</value>
    </property>
    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>${hadoop.tmp.dir}/nodemanager/logs</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
</configuration>
8. Configure masters
hadoop.master
9. Configure slaves (one worker hostname per line)
hadoop.slave01
hadoop.slave02
10. Configure the environment variables (typically appended to the hadoop user's ~/.bashrc, then reloaded with source ~/.bashrc)
export JAVA_HOME="/usr/lib/jvm/jre-1.7.0-openjdk.x86_64"
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/lib:$JAVA_HOME/jre/bin:$PATH:$HOME/bin
export HADOOP_HOME=/home/hadoop/cloud/hadoop
export PATH=.:$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
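So far only hadoop.master has been set up; the configured installation and the same exports also have to reach both slaves before the cluster can start. A minimal sketch:

source ~/.bashrc
hadoop version    # should report Hadoop 2.6.0

# copy the configured installation to both slaves
scp -r /home/hadoop/cloud hadoop@hadoop.slave01:/home/hadoop/
scp -r /home/hadoop/cloud hadoop@hadoop.slave02:/home/hadoop/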

III. Starting and Stopping

1. Format the NameNode (the hadoop namenode form below is deprecated in Hadoop 2.x; hdfs namenode -format is the current equivalent)
hadoop namenode -format
2. Start (the scripts are in /home/hadoop/cloud/hadoop/sbin)

start-dfs.sh
start-yarn.sh

3. Stop

stop-yarn.sh
stop-dfs.sh

4. Check the daemons with jps

On hadoop.master:

$ jps
3885 SecondaryNameNode
5497 Jps
4070 ResourceManager
3724 NameNode

On hadoop.slave01:

$ jps
3716 NodeManager
4464 Jps
3594 DataNode

On hadoop.slave02:

$ jps
3716 NodeManager
4700 Jps
3600 DataNode
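Beyond jps, a quick end-to-end smoke test is to ask HDFS for a cluster report and run one of the example jobs bundled with the release (the jar path below matches this installation's layout):

hdfs dfsadmin -report    # should list both DataNodes as live
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 10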

5. Check with the web UI

http://hadoop.master:8088 (YARN ResourceManager)


http://hadoop.master:50070 (HDFS NameNode)


