Incompatible namespaceIDs

When starting Hadoop, running jps on the datanode showed that the DataNode service had not come up. Checking the log revealed:
2012-06-07 09:41:37,812 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2012-06-07 09:41:37,850 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2012-06-07 09:41:37,852 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2012-06-07 09:41:37,852 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2012-06-07 09:41:38,080 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2012-06-07 09:41:39,556 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: ip83/192.168.7.83:49000. Already tried 0 time(s).
2012-06-07 09:41:40,559 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: ip83/192.168.7.83:49000. Already tried 1 time(s).
2012-06-07 09:41:41,562 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: ip83/192.168.7.83:49000. Already tried 2 time(s).
2012-06-07 09:41:42,565 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: ip83/192.168.7.83:49000. Already tried 3 time(s).
2012-06-07 09:41:43,568 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: ip83/192.168.7.83:49000. Already tried 4 time(s).
2012-06-07 09:41:46,184 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /home/cheng/hadoop/data/data1: namenode namespaceID = 875533609; datanode namespaceID = 1665404807


Solution:
Two workarounds are given below; I used the second one.
Workaround 1: Start from scratch
I can testify that the following steps solve this error, but the side effects won't make you happy (me neither). The crude workaround I have found is to:
1.     stop the cluster
2.     delete the data directory on the problematic datanode: the directory is specified by dfs.data.dir in conf/hdfs-site.xml; if you followed this tutorial, the relevant directory is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data
3.     reformat the namenode (NOTE: all HDFS data is lost during this process!)
4.     restart the cluster (a shell sketch of these steps follows this list)
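A minimal shell sketch of the steps above, assuming a Hadoop 0.20/1.x layout and the tutorial's data directory; adjust the path to your own dfs.data.dir:
#Workaround 1: wipe the datanode data and reformat (destroys all HDFS data!)
bin/stop-all.sh
#delete the data directory on the problematic datanode (path is an assumption; use your dfs.data.dir)
rm -rf /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data
bin/hadoop namenode -format
bin/start-all.sh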
If deleting all the HDFS data and starting from scratch does not sound like a good idea (it might be ok during the initial setup/testing), you might give the second approach a try.
Workaround 2: Updating namespaceID of problematic datanodes
Big thanks to Jared Stehler for the following suggestion. I have not tested it myself yet, but feel free to try it out and send me your feedback. This workaround is "minimally invasive" as you only have to edit one file on the problematic datanodes:
1.     stop the datanode
2.     edit the value of namespaceID in <dfs.data.dir>/current/VERSION to match the value of the current namenode (a sketch of this edit follows the VERSION example below)
3.     restart the datanode
If you followed the instructions in my tutorials, the full path of the relevant file is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data/current/VERSION (background: dfs.data.dir is by default set to ${hadoop.tmp.dir}/dfs/data, and we set hadoop.tmp.dir to /usr/local/hadoop-datastore/hadoop-hadoop).
If you wonder what the contents of VERSION look like, here's one of mine:
#contents of <dfs.data.dir>/current/VERSION
namespaceID=393514426
storageID=DS-1706792599-10.10.10.1-50010-1204306713481
cTime=1215607609074
storageType=DATA_NODE
layoutVersion=-13
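A minimal sketch of Workaround 2 on the problematic datanode, assuming GNU sed and the tutorial's data directory path; 875533609 is the namenode namespaceID reported in the log above:
#Workaround 2: make the datanode's namespaceID match the namenode's
bin/hadoop-daemon.sh stop datanode
sed -i 's/^namespaceID=.*/namespaceID=875533609/' /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data/current/VERSION
bin/hadoop-daemon.sh start datanode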
 
Cause: every namenode format creates a new namespaceID, while tmp/dfs/data still holds the ID from the previous format. Formatting the namenode wipes the namenode's data but does not wipe the datanode's data, so the datanode fails at startup. The fix is to clear all directories under tmp before each format.
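To see the mismatch directly, you can compare the namespaceID in the two VERSION files; the paths below are assumptions based on the default ${hadoop.tmp.dir}/dfs/name and ${hadoop.tmp.dir}/dfs/data layout:
#on the namenode: the ID written by the most recent format
grep namespaceID /usr/local/hadoop-datastore/hadoop-hadoop/dfs/name/current/VERSION
#on the datanode: the stale ID left over from the previous format
grep namespaceID /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data/current/VERSION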
 