Steps to Set Up a Hadoop Fully Distributed Cluster


Preface:

 Today I worked through chapter 2.3 of Hadoop in Action and set out to build a fully distributed Hadoop cluster. I followed the book step by step but still ran into a pile of problems; after solving them one by one, the setup finally succeeded. To save others from the same detours, the steps and the corresponding pitfalls are recorded here.

Thanks here to the author of http://blog.csdn.net/a15039096218/article/details/7832152; your write-up helped me find the cause of my errors.


Environment:

JDK 1.6.0_30

Hadoop: 1.2.1


1  Configuration Files

A) XML configuration

The default configuration files from the book are enough; just adjust them for the hostname of your own namenode machine. It is best to copy the book's versions and tweak them rather than type them from scratch. I wrote mine by hand at first and kept hitting error 1.

core-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
    <description>The name of the default file system. A URI whose
    scheme and authority determine the FileSystem implementation.</description>
  </property>
</configuration>

hdfs-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
    <description>The actual number of replications can be specified when the
    file is created.</description>
  </property>
</configuration>


mapred-site.xml

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
    <description>The host and port that the MapReduce job tracker runs at.</description>
  </property>
</configuration>


B)  masters and slaves files

masters: put the hostname of the secondary namenode here;

slaves: put the hostnames of the datanodes here, one per line; see the sketch after this list.
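
For reference, a minimal sketch of the two files in $HADOOP_HOME/conf, assuming the secondary namenode runs on the machine named master and the datanodes are hadoop1 and hadoop2 (hadoop2 is a hypothetical extra slave, not part of the original setup):

contents of conf/masters:

     master

contents of conf/slaves:

     hadoop1
     hadoop2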


2  SSH and hostname preparation

A) Passwordless SSH setup:

    Follow the setup described in Hadoop in Action; it generally works as-is. If you still have trouble, check that the .ssh directory is chmod 700 and authorized_keys is chmod 600;

    Set up passwordless login from the namenode machine to every other node (secondary namenode and datanodes); a sketch of the usual commands follows.
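
A minimal sketch, run on the namenode machine (the book's procedure is equivalent; hadoop1 stands in for each target node):

     ssh-keygen -t rsa        # generate a key pair; accept the defaults and an empty passphrase
     ssh-copy-id hadoop1      # append the public key to hadoop1's ~/.ssh/authorized_keys
     ssh hadoop1              # verify that no password prompt appears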


B) Hostname preparation

     On every node, map its own hostname and the namenode's hostname to their IP addresses; this is configured in /etc/hosts;

      For example, if node hadoop1 has IP 192.16.1.101 and master has IP 192.16.1.100, then hadoop1's /etc/hosts needs both of these mappings:

       192.16.1.100 master

       192.16.1.101 hadoop1

   On the master node, configure the hostname-to-IP mappings for all of the nodes (see the sketch below);

   If this is not configured correctly, error 2 is likely;
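
A minimal sketch of /etc/hosts on master, using the example addresses above (hadoop2 and its IP are hypothetical placeholders for additional datanodes):

      192.16.1.100 master
      192.16.1.101 hadoop1
      192.16.1.102 hadoop2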


C) Open port 9000 in iptables

    Add a rule that accepts connections on port 9000:

     iptables -I INPUT 1 -p tcp -m state --state NEW -m tcp --dport 9000 -j ACCEPT

     service iptables save

     Of course, the blunt alternative is to disable iptables temporarily with service iptables stop, or flush all rules with iptables -F;


    If this step is skipped, error 3 is likely;

    In that case the namenode and jobtracker processes on master both start successfully, but the datanode and tasktracker processes on all of the slave nodes fail to come up;
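
    Since mapred-site.xml points the tasktrackers at master:9001, the jobtracker port may need the same treatment. A sketch, following the same pattern as above:

     iptables -I INPUT 1 -p tcp -m state --state NEW -m tcp --dport 9001 -j ACCEPT

     service iptables save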


3  namenode -format

   After the first two steps are done, remember to run $HADOOP_HOME/bin/hadoop namenode -format; if you skip this step, you will hit error 4;
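
   A sketch of the command; it is run on the master (namenode) machine only, and formatting discards any existing HDFS metadata:

     $HADOOP_HOME/bin/hadoop namenode -format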


4  start-all.sh

    After the first three steps, start the cluster with $HADOOP_HOME/bin/start-all.sh, then check that each node is running the expected processes (see the sketch below).
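
    A quick way to check is jps from the JDK; roughly the following processes should appear (the PIDs are illustrative, and SecondaryNameNode only shows up on master if master was listed in the masters file):

     # on master
     $ jps
     12001 NameNode
     12150 SecondaryNameNode
     12300 JobTracker
     12544 Jps

     # on each slave
     $ jps
     8012 DataNode
     8190 TaskTracker
     8350 Jps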



Common Errors

Error 1: Does not contain a valid host:port authority: file:///

ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.IllegalArgumentException: Does not contain a valid host:port authority: file:///
        at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:280)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

Cause: the configuration file. When writing it by hand I left out the <property> element, so fs.default.name was never picked up and Hadoop fell back to the file:/// default. If you hit this error, check that every property in your configuration files is wrapped in a <property> element with <name> and <value>.


Error 2: org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.UnknownHostException

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.net.UnknownHostException: hadoop1: hadoop1
        at java.net.InetAddress.getLocalHost(InetAddress.java:1360)
        at org.apache.hadoop.security.SecurityUtil.getLocalHostName(SecurityUtil.java:271)
        at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:289)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:313)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
This error means the node's hostname is not mapped to an IP address in /etc/hosts on that node; fix the mapping and restart. A quick check is sketched below.
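
A quick check on the failing node (hadoop1 here, taken from the log above; the address comes from the /etc/hosts example in step 2):

     $ hostname
     hadoop1
     $ getent hosts hadoop1
     192.16.1.101    hadoop1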


Error 3: NoRouteToHostException: No route to host

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Call to hadoop1/192.168.1.101:9000 failed on local exception: java.net.NoRouteToHostException: No route to host
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1150)
        at org.apache.hadoop.ipc.Client.call(Client.java:1118)
        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
        at $Proxy5.getProtocolVersion(Unknown Source)
        at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:414)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:392)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:374)
        at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:453)
        at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:335)
        at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:300)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
Caused by: java.net.NoRouteToHostException: No route to host
        at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
        at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:567)
        at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511)
        at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481)
        at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:457)
        at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:583)
        at org.apache.hadoop.ipc.Client$Connection.access$2200(Client.java:205)
        at org.apache.hadoop.ipc.Client.getConnection(Client.java:1249)
        at org.apache.hadoop.ipc.Client.call(Client.java:1093)
        ... 16 more


This one usually appears on a datanode. It means the datanode cannot reach port 9000 on master, either because iptables is blocking the port or because the namenode on master did not start (the namenode's startup log shows whether it came up). A quick check is sketched below.
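
A sketch of how to narrow it down (telnet is used here only as a simple TCP probe and may need to be installed first):

     telnet master 9000          # run on the datanode; an immediate "no route to host" points at the firewall
     iptables -L -n              # run on master; check whether a rule accepts tcp dport 9000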


Error 4: org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.

ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-lscm/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)
2014-07-10 18:50:59,385 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /tmp/hadoop-lscm/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:304)
        at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:104)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:427)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:395)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:299)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:569)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1479)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1488)

This error means the namenode has not been formatted; run $HADOOP_HOME/bin/hadoop namenode -format and start again.


Error 5: Incompatible namespaceIDs in /tmp/hadoop-lscm/dfs/data: namenode namespaceID = 1585693735; datanode namespaceID = 1804912251

ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/hadoop-lscm/dfs/data: namenode namespaceID = 1585693735; datanode namespaceID = 1804912251
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:414)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)

Cause:

Your Hadoop namespaceID became corrupted. Unfortunately the easiest thing to do is to reformat the HDFS.

Solution:
You need to do something like this:
bin/stop-all.sh 
rm -Rf /tmp/hadoop-your-username/* 
bin/hadoop namenode -format

When clearing the data, remember to clear it on all of the datanodes as well (see the sketch below).
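
A sketch of clearing every datanode from master in one go, assuming the passwordless SSH from step 2 and the example hostnames used earlier (adjust the path to your own user name, as in the command above):

     for host in hadoop1 hadoop2; do
         ssh $host 'rm -rf /tmp/hadoop-your-username/*'
     done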



References:

Summary of Hadoop startup errors  http://blog.csdn.net/a15039096218/article/details/7832152

A first experience with Hadoop: problem summary  http://www.cnblogs.com/hustcat/archive/2010/06/30/1768506.html

