Detailed steps for Hadoop fully distributed mode

Environment:

Builds on a working Hadoop single-node (standalone) setup.

Ubuntu

1 master machine (used as the NameNode and JobTracker) and 2 slave machines (used as HDFS DataNodes and Map/Reduce TaskTrackers).

Cluster configuration consists of three main parts:

1. /etc/hosts configuration.

2. Five files under the Hadoop conf directory that must be identical on all three machines: core-site.xml, hdfs-site.xml, mapred-site.xml, masters, slaves.

3. Passwordless SSH between all three machines.

Detailed steps:

1. The /etc/hosts file

Set 192.168.1.3 as the master machine; its fully qualified domain name is hadoop-master.test.com and its short name is hadoop-master.
Set 192.168.1.4 as a slave machine; its fully qualified domain name is hadoop-s1.test.com and its short name is hadoop-s1.
Set 192.168.1.5 as a slave machine; its fully qualified domain name is hadoop-s2.test.com and its short name is hadoop-s2.

The hosts file:
127.0.0.1      localhost.localdomain localhost
::1            localhost6.localdomain6 localhost6
192.168.1.3    hadoop-master.test.com hadoop-master
192.168.1.4    hadoop-s1.test.com      hadoop-s1
192.168.1.5    hadoop-s2.test.com      hadoop-s2
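
A quick way to confirm that the names resolve as intended is to ping each short name from the master (just a sanity check, not part of the Hadoop configuration itself):

$ ping -c 1 hadoop-master
$ ping -c 1 hadoop-s1
$ ping -c 1 hadoop-s2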
2. The five configuration files:

core-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://hadoop-master:9000</value>
    </property>
</configuration>

hdfs-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>dfs.data.dir</name>
        <value>/usr/hadoop-0.20.2/dfs.data</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/usr/hadoop-0.20.2/dfs.name</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>

mapred-site.xml:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>hadoop-master:9001</value>
    </property>
</configuration>

masters:

hadoop-master

slaves:

hadoop-s1
hadoop-s2
All five configuration files above must use machine names (IP addresses should also work, but never localhost or 127.0.0.1), because this is a cluster configuration. The HDFS master is hadoop-master (192.168.1.3) on port 9000, with 2 data nodes (192.168.1.4 hadoop-s1 and 192.168.1.5 hadoop-s2). The Map/Reduce job tracker is hadoop-master (port 9001). These 5 files are identical on all 3 machines.
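
One simple way to keep them identical (a sketch, assuming Hadoop is installed at /usr/hadoop-0.20.2 on every node and that ssh access to the slaves is already working as in step 3) is to copy the files from the master with scp:

$ cd /usr/hadoop-0.20.2/conf
$ scp core-site.xml hdfs-site.xml mapred-site.xml masters slaves hadoop@hadoop-s1:/usr/hadoop-0.20.2/conf/
$ scp core-site.xml hdfs-site.xml mapred-site.xml masters slaves hadoop@hadoop-s2:/usr/hadoop-0.20.2/conf/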
3. SSH configuration:

The 3 machines must be able to ssh to one another with an empty passphrase, because the Hadoop master controls the slave machines over ssh.

After ssh is installed on all cluster nodes, use ssh-keygen on the master node to generate an RSA key pair. Be sure not to set a passphrase; otherwise the master node will have to type it in every time it connects to another node.

Run ssh-keygen -t rsa to create the passphrase-free key pair (under /home/hadoop/.ssh), then copy the public key into the authorized_keys file with cp id_rsa.pub authorized_keys.
Then merge the authorized_keys files from all 3 machines; the merged file should contain 3 public key lines, similar to the following:
[hadoop@hadoop-master .ssh]$ cat authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA1Yk16312rEoaNM7YUF9+XZp5Ug20lLzm/5jaaanCAMEkJTNZN3AtEE9kXyAJ25XeUNJtIqaJQ3Bw53EPvN1D02Xjt9k8LmAppHF4LZl5blXIPe8Ppeammq8z9LgK/1NDU4fbpwqLL8yvuMSLPL7JYKCDPfeWCE+LlNi9ryB/6SYBJNfzFagcutQ/yAHDkquGj9EvrrE70dvhMG48ltmCiRFmCf9UXBFyczGuYVJnl9GuvmzSU85JK+Xx4/EUphA7wjvlCwO74qMAS7x2BZYSDyJiUgMFrQ6Od+tHFwHuJu7gRSyB4oG/LcwOyiCHzCbAuzFQRNT6GoGplrkT0I9amw== hadoop@hadoop-master.test.com
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0dw73huLc9VtHTU4fMLNquFN82lTadhBQXWxJ1o1r0Ek1ILEWgssYp33SjJ/FlfnpvCYpJGufSWXmJEmrpBgMFwc2DAXxJp0uzEvdBXgcpy/ljghUXcGbgLa7mBZ7AypxDYnvQSfER6/SF5I3FKuvDorqPfLRpdyG6N5klzolxmdfEJSv4ZczoAwGhV83CorMa4MoffFew0TdXDHqD6eihG7rhDVAstoM4SEuFW2rzHTKt3GEOWmViHXtNqNRAmHoNeX6q4a9NL8+OiqCaXAh3hZ6txNRpU1X1HotPFjpL8MQLs6dcOVmPWRnxGZr6grhE2WtnwdvsE3l4AmHYmqTw== hadoop@hadoop-s1.test.com
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAu1H6ySfXFNQYV2oKeUX/m5L3/3gYkTf0n+bddE0HFMsED7cV9wlCmDL71uX68kqbhVpR29L9/cwbsSeBL1lQSJENKLD4HgzfIOzPg93gx1OEnzAWLHrd/+uXchuj9kVnSY0qHmmbsQodbl3MTnEhayaHz8uWLJ6sPXk8yc+SFjKSqugpVmQXES1JhyoY1o9ZNNwaHgGOXwxNhFOdIOeHjCOzU0ug7NhjyORk8Oz9JALResq6YZOw2ZcvvtnieoikQP8AQ4SxymbWO4+c/x+tlhsdJow2qvl/uw2Y+iOgDIgOX7LsJBtZ1sI0BOBrj3DlK1cTPpK7KU1c0NTIqNwevQ== hadoop@hadoop-s2.test.com


Then place the merged file in the /home/hadoop/.ssh directory on all 3 machines.
Finally, verify that passphrase-free ssh works between the 3 machines, for example by running ssh hadoop-s1 and ssh hadoop-s2 on the hadoop-master machine; a sketch of the whole sequence is shown below.
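
For reference, the sequence on hadoop-master could look like the following (a sketch, assuming the hadoop user on every node; each slave's id_rsa.pub still has to be generated the same way and appended to the master's authorized_keys before it is copied back out):

$ ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa       # generate a key pair with an empty passphrase
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys               # sshd ignores the file if it is group/world writable
$ scp ~/.ssh/authorized_keys hadoop@hadoop-s1:~/.ssh/
$ scp ~/.ssh/authorized_keys hadoop@hadoop-s2:~/.ssh/
$ ssh hadoop-s1 hostname                         # should print hadoop-s1.test.com with no password prompt
$ ssh hadoop-s2 hostname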

Verification:
Format the HDFS filesystem on the hadoop-master (192.168.1.3) machine with the command hadoop namenode -format. If the output reports success, the format worked.
Then start the Hadoop cluster with start-all.sh (in the /usr/hadoop-0.20.2/bin directory). You should see log output similar to the following:
[hadoop@hadoop-master ~]$ start-all.sh

starting namenode, logging to /usr/hadoop-0.20.2/bin/../logs/hadoop-hadoop-namenode-hadoop-master.test.com.out
hadoop-s2: starting datanode, logging to /usr/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-hadoop-s2.test.com.out
hadoop-s1: starting datanode, logging to /usr/hadoop-0.20.2/bin/../logs/hadoop-hadoop-datanode-hadoop-s1.test.com.out
hadoop-master: starting secondarynamenode, logging to /usr/hadoop-0.20.2/bin/../logs/hadoop-hadoop-secondarynamenode-hadoop-master.test.com.out
starting jobtracker, logging to /usr/hadoop-0.20.2/bin/../logs/hadoop-hadoop-jobtracker-hadoop-master.test.com.out
hadoop-s2: starting tasktracker, logging to /usr/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-hadoop-s2.test.com.out
hadoop-s1: starting tasktracker, logging to /usr/hadoop-0.20.2/bin/../logs/hadoop-hadoop-tasktracker-hadoop-s1.test.com.out

Then use the /usr/java/jdk1.6.0_18/bin/jps command to check the Java processes running in the background, as follows:
[hadoop@hadoop-master ~]$ jps
14558 NameNode
14788 JobTracker
14888 Jps
14715 SecondaryNameNode
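On each slave node, jps should correspondingly show a DataNode and a TaskTracker (the process IDs below are only illustrative), for example:
[hadoop@hadoop-s1 ~]$ jps
15021 DataNode
15144 TaskTracker
15230 Jps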
This shows that the cluster's NameNode and JobTracker have started on hadoop-master. Next, visit http://hadoop-master:50070 to check whether HDFS started successfully; if it shows 2 Live Nodes and there are no exceptions in the logs, HDFS is up.
Then visit http://hadoop-master:50030 to check whether Map/Reduce started successfully; if the Nodes count is 2, it is up.
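
The same information is also available from the command line: the dfsadmin report lists the cluster capacity and every live DataNode.

$ hadoop dfsadmin -report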

From the /usr/hadoop-0.20.2 directory on the hadoop-master machine, run a Hadoop job to verify that HDFS and Map/Reduce work together. The steps are as follows:
$ hadoop fs -mkdir /input                                   # create the /input directory under the HDFS root
$ hadoop fs -put conf/*.xml /input                          # copy the xml files from the local conf subdirectory into /input on HDFS
$ hadoop jar hadoop-0.20.2-examples.jar wordcount /input /output    # run the wordcount job from the bundled examples jar; input is read from HDFS /input and results are written to HDFS /output. Progress can be monitored at hadoop-master:50030. When the job finishes, run
$ hadoop fs -ls /output                                     # list the HDFS /output directory; it should contain a part* file holding the results
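
To see the actual word counts, the result file can be printed straight from HDFS (the exact part file name may differ between Hadoop versions, so a wildcard is used here):

$ hadoop fs -cat /output/part-*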


