Hadoop Cluster Installation: Summary of Problems Encountered


1. Exception reported in the logs:

org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /usr/hadoop/tmp/dfs/data: namenode namespaceID = 115544124; datanode namespaceID = 506519610

at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:232)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:147)
at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:385)
at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:299)
at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1582)
at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1521)
at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1539)
at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
at org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)

Since there are usually quite a few datanodes, and the namespaceIDs across the datanodes generally agree with one another, the usual fix is to change the namespaceID on the namenode. First check the namespaceID on one of the datanodes. For example, my default fs path is /usr/hadoop, so on a datanode look at the VERSION file under /usr/hadoop/tmp/dfs/data/current/, whose contents look like this:

#Tue Jul 31 17:31:22 JST 2012
namespaceID=590008784
storageID=DS-230267979-192.168.3.209-50010-1342056014871
cTime=0

Then look at the VERSION file on the namenode under /usr/hadoop/tmp/dfs/name/current/, which has this format:

#Fri Aug 03 15:36:51 JST 2012
namespaceID=590008784
cTime=0
storageType=NAME_NODE
layoutVersion=-18

If the two namespaceID values differ, change the namespaceID in the namenode's file to the datanode's value so they match, then restart the cluster.
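A minimal sketch of that procedure, assuming the /usr/hadoop paths and the example namespaceID shown above (substitute your own value; GNU sed's -i flag is assumed):

# Stop the cluster before editing metadata files.
bin/stop-all.sh

# On a datanode: read its namespaceID.
grep namespaceID /usr/hadoop/tmp/dfs/data/current/VERSION
# e.g. namespaceID=590008784

# On the namenode: overwrite its namespaceID with the datanode's value.
sed -i 's/^namespaceID=.*/namespaceID=590008784/' /usr/hadoop/tmp/dfs/name/current/VERSION

# Restart the cluster.
bin/start-all.sh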
2. Exception reported in the logs:
/home/Hadoop/hadoop/bin/../bin/hadoop-daemon.sh: line 127: /tmp/hadoop-hadoop-namenode.pid: Permission denied
Posting this for reference; hopefully it helps anyone who runs into the same problem.

As the hadoop user, look at the pid files in /tmp:
-rw-r--r--  1 root   root       5 12-07 13:18 hadoop-hadoop-jobtracker.pid
-rw-r--r--  1 root   root       5 12-07 13:18 hadoop-hadoop-namenode.pid
-rw-rw-r--  1 hadoop hadoop     5 12-11 18:38 hadoop-hadoop-secondarynamenode.pid
-rw-r--r--  1 root   root       5 12-07 13:55 hadoop-root-datanode.pid
-rw-r--r--  1 root   root       5 12-07 13:55 hadoop-root-jobtracker.pid
-rw-r--r--  1 root   root       5 12-07 13:55 hadoop-root-namenode.pid
-rw-r--r--  1 root   root       5 12-07 13:55 hadoop-root-secondarynamenode.pid
-rw-r--r--  1 root   root       5 12-07 13:55 hadoop-root-tasktracker.pid

Starting the daemons hits a permission problem. Judging from the listing above, most of the pid files are owned by root, presumably left over from an earlier start as root, so the start script, running as the hadoop user, cannot rewrite /tmp/hadoop-hadoop-namenode.pid.

Solution:
1. Edit hadoop-env.sh in the Hadoop config directory and add: export HADOOP_PID_DIR=$HADOOP_HOME/run/tmp. This moves the pid files to a new path. Make the change on all 3 machines.
2. Edit /etc/profile and add the same line: export HADOOP_PID_DIR=$HADOOP_HOME/run/tmp. Again, change all 3 machines (a consolidated sketch of steps 1-2 follows after the startup output below).
3. After restarting:
[hadoop@hadoop1 hadoop]$ bin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-mapred.sh
starting namenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-namenode-hadoop1.out
hadoop3: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-hadoop3.out
hadoop2: starting datanode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-datanode-hadoop2.out
hadoop1: starting secondarynamenode, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-secondarynamenode-hadoop1.out
starting jobtracker, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-jobtracker-hadoop1.out
hadoop3: starting tasktracker, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-tasktracker-hadoop3.out
hadoop2: starting tasktracker, logging to /home/hadoop/hadoop/logs/hadoop-hadoop-tasktracker-hadoop2.out

Whether or not I re-formatted made no difference; everything starts up normally!
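A consolidated sketch of steps 1 and 2 above (run on each of the 3 machines; the run/tmp location is the one chosen above, and the hadoop user/group names are assumptions):

# Create the new pid directory and make it writable by the hadoop user.
mkdir -p $HADOOP_HOME/run/tmp
sudo chown hadoop:hadoop $HADOOP_HOME/run/tmp

# Point Hadoop at it: conf/hadoop-env.sh covers the daemons, /etc/profile covers login shells.
echo 'export HADOOP_PID_DIR=$HADOOP_HOME/run/tmp' >> $HADOOP_HOME/conf/hadoop-env.sh
echo 'export HADOOP_PID_DIR=$HADOOP_HOME/run/tmp' | sudo tee -a /etc/profile

# Alternatively, deleting the stale root-owned pid files in /tmp should also clear the error:
# sudo rm /tmp/hadoop-*.pid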
3. Exception reported in the logs:

ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:java.io.IOException: File /usr/hadoop_dir/tmp/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
2013-08-20 10:36:22,718 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 49000, call addBlock(/usr/hadoop_dir/tmp/mapred/system/jobtracker.info, DFSClient_NONMAPREDUCE_1570390041_1, null) from 192.168.2.99:56700: error: java.io.IOException: File /usr/hadoop_dir/tmp/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /usr/hadoop_dir/tmp/mapred/system/jobtracker.info could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1920)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:783)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:587)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1432)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1428)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1426)

Solutions:

1. Configure the hosts in the masters and slaves files as IP addresses.
2. Some reports online say this happens when the firewall is still on; check that as well.
3. Reformat the namenode (hadoop namenode -format) and check the IDs in the VERSION files.
4. Check core-site.xml and mapred-site.xml, changing the addresses there to IPs.
5. Check the relevant logs for the full error details.
6. Make sure the datanode data directory permissions are exactly 755.
7. It may also be caused by a Java bug.
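A few diagnostic commands that pair with this checklist (a sketch; the firewall commands use the RHEL/CentOS service form and vary by distribution, and the paths are taken from the error above):

# Confirm how many datanodes the namenode actually sees; the
# "replicated to 0 nodes" error means no live datanode was available.
bin/hadoop dfsadmin -report

# Item 2: check and, if needed, stop the firewall on each node.
sudo service iptables status
sudo service iptables stop

# Item 6: fix the datanode data directory permissions.
chmod 755 /usr/hadoop_dir/tmp/dfs/data

For items 1 and 4, a hypothetical core-site.xml entry that uses an IP instead of a hostname (the IP and port are taken from the log above; substitute your own):

<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.2.99:49000</value>
</property>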

