Incompatible namespaceIDs
When starting Hadoop, running jps on the datanode showed that the DataNode service had not come up. Checking the datanode log revealed:
2012-06-07 09:41:37,812 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2012-06-07 09:41:37,850 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2012-06-07 09:41:37,852 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2012-06-07 09:41:37,852 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2012-06-07 09:41:38,080 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2012-06-07 09:41:39,556 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: ip83/192.168.7.83:49000. Already tried 0 time(s).
2012-06-07 09:41:40,559 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: ip83/192.168.7.83:49000. Already tried 1 time(s).
2012-06-07 09:41:41,562 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: ip83/192.168.7.83:49000. Already tried 2 time(s).
2012-06-07 09:41:42,565 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: ip83/192.168.7.83:49000. Already tried 3 time(s).
2012-06-07 09:41:43,568 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: ip83/192.168.7.83:49000. Already tried 4 time(s).
2012-06-07 09:41:46,184 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /home/cheng/hadoop/data/data1: namenode namespaceID = 875533609; datanode namespaceID = 1665404807
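One quick way to confirm the mismatch is to grep both IDs out of the failing log line. For illustration the line is inlined here as a string; on a real node you would grep the datanode log file itself:

```shell
# A copy of the failing log line (from the log above), for demonstration:
line='java.io.IOException: Incompatible namespaceIDs in /home/cheng/hadoop/data/data1: namenode namespaceID = 875533609; datanode namespaceID = 1665404807'

# Pull both IDs out; on a real node, grep the datanode log file instead.
ids=$(printf '%s\n' "$line" | grep -o 'namespaceID = [0-9]*')
echo "$ids"
```

If the two printed numbers differ, as they do here (875533609 vs 1665404807), the datanode will refuse to start.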
Solution:
Two workarounds are given below; I used the second one.
Workaround 1: Start from scratch
I can testify that the following steps solve this error, but the side effect is drastic: you lose all data stored in HDFS. The crude workaround is to:
1. stop the cluster
2. delete the data directory on the problematic datanode: the directory is specified by dfs.data.dir in conf/hdfs-site.xml; if you followed this tutorial, the relevant directory is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data
3. reformat the namenode (NOTE: all HDFS data is lost during this process!)
4. restart the cluster
If deleting all the HDFS data and starting from scratch does not sound like a good idea (it might be acceptable during initial setup/testing), give the second approach a try.
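Assuming the tutorial's default paths (check dfs.data.dir in your own conf/hdfs-site.xml before running anything), the four steps above boil down to the following commands, run from the Hadoop installation directory. They require a live cluster and destroy all HDFS data, so they are shown for orientation only:

```shell
# Sketch of workaround 1; all paths are assumptions from the tutorial setup.
bin/stop-all.sh                                            # 1. stop the cluster
rm -rf /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data  # 2. delete the datanode's data dir
bin/hadoop namenode -format                                # 3. WARNING: all HDFS data is lost
bin/start-all.sh                                           # 4. restart the cluster
```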
Workaround 2: Updating namespaceID of problematic datanodes
Big thanks to Jared Stehler for the following suggestion. I have not tested it myself yet, but feel free to try it out and send me your feedback. This workaround is "minimally invasive" as you only have to edit one file on the problematic datanodes:
1. stop the datanode
2. edit the value of namespaceID in <dfs.data.dir>/current/VERSION to match the value of the current namenode
3. restart the datanode
If you followed the instructions in my tutorials, the full path of the relevant file is /usr/local/hadoop-datastore/hadoop-hadoop/dfs/data/current/VERSION (background: dfs.data.dir is by default set to ${hadoop.tmp.dir}/dfs/data, and we set hadoop.tmp.dir to /usr/local/hadoop-datastore/hadoop-hadoop).
If you wonder what the contents of VERSION look like, here's one of mine:
#contents of <dfs.data.dir>/current/VERSION
namespaceID=393514426
storageID=DS-1706792599-10.10.10.1-50010-1204306713481
cTime=1215607609074
storageType=DATA_NODE
layoutVersion=-13
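Given a VERSION file like the one above, the edit in step 2 can be scripted with a single sed substitution. This is a minimal sketch: NN_ID and VERSION_FILE are assumptions, and the file is created here only so the example is self-contained; on a real datanode you would point VERSION_FILE at <dfs.data.dir>/current/VERSION and take NN_ID from the namenode (it also appears in the error log):

```shell
# NN_ID: the namenode's namespaceID (from the error log, or from
# <dfs.name.dir>/current/VERSION on the namenode). Assumed value here.
NN_ID=875533609
VERSION_FILE=data/current/VERSION   # assumed path; use your dfs.data.dir

# For demonstration only: create a VERSION file holding a stale ID.
mkdir -p "$(dirname "$VERSION_FILE")"
cat > "$VERSION_FILE" <<'EOF'
namespaceID=1665404807
storageID=DS-1706792599-10.10.10.1-50010-1204306713481
cTime=1215607609074
storageType=DATA_NODE
layoutVersion=-13
EOF

# Rewrite only the namespaceID line, leaving every other field untouched:
sed -i "s/^namespaceID=.*/namespaceID=$NN_ID/" "$VERSION_FILE"
grep '^namespaceID=' "$VERSION_FILE"   # prints: namespaceID=875533609
```

After this edit, restarting the datanode (step 3) lets it register with the namenode again, and no block data is lost.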
Cause: every namenode format generates a new namespaceID, but tmp/dfs/data still holds the ID from the previous format. Formatting wipes the namenode's data without clearing the datanodes' data, so startup fails. The fix is to clear all directories under tmp before each format.