NameNode and DataNode fail to start


DataNode fails to start (All directories in dfs.data.dir are invalid)

Check the log:

[root@hadoop logs]# more hadoop-root-datanode-hadoop.log
2015-05-18 05:39:01,932 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = hadoop/192.168.80.99
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.1.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1440782; compiled by 'hortonfo' on Thu Jan 31 02:03:24 UTC 2013
************************************************************/
2015-05-18 05:39:03,083 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-05-18 05:39:03,145 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2015-05-18 05:39:03,172 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-05-18 05:39:03,173 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2015-05-18 05:39:03,811 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2015-05-18 05:39:03,998 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in dfs.data.dir: Incorrect permission for /usr/local/hadoop/tmp/dfs/data, expected: rwxr-xr-x, while actual: rwxrwxrwx
2015-05-18 05:39:03,998 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in dfs.data.dir are invalid.
2015-05-18 05:39:03,998 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2015-05-18 05:39:04,002 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at hadoop/192.168.80.99
************************************************************/

Cause of the problem:

The data directory carries extra write permission for group and others: its mode is rwxrwxrwx (777), while the DataNode requires rwxr-xr-x (755) on every directory listed in dfs.data.dir. Because no directory passes the check, the DataNode marks them all invalid and exits, so this part of the cluster never comes up.
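The mismatch the log reports is easy to reproduce with plain file modes. A minimal sketch on a scratch directory (the scratch path is illustrative, not the real dfs.data.dir; `stat -c` is the GNU coreutils form):

```shell
# Demonstration on a scratch directory: 777 (rwxrwxrwx) is the "actual"
# mode the log reported; 755 (rwxr-xr-x) is the "expected" mode.
demo=$(mktemp -d)
chmod 777 "$demo"
mode_bad=$(stat -c '%a' "$demo")   # octal mode before the fix: 777
chmod 755 "$demo"
mode_ok=$(stat -c '%a' "$demo")    # octal mode after the fix: 755
echo "$mode_bad -> $mode_ok"       # prints: 777 -> 755
rmdir "$demo"
```

The DataNode compares the directory mode against an exact expected value, which is why a *more* permissive mode still fails the check.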

Fix:

[root@hadoop hadoop]# chmod 755 /usr/local/hadoop/tmp/dfs/data
[root@hadoop hadoop]# start-all.sh
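Since dfs.data.dir may list several directories, it can be handy to scan and repair all of them before restarting. A small sketch of that idea (the function name and the example path are just for illustration):

```shell
# fix_data_dir_perms: set each given data directory to the 755 mode the
# DataNode expects, printing only the directories that were changed.
fix_data_dir_perms() {
    for d in "$@"; do
        mode=$(stat -c '%a' "$d")        # current octal mode
        if [ "$mode" != "755" ]; then
            chmod 755 "$d"
            echo "fixed $d (was $mode)"
        fi
    done
}

# Example with the path from this article's cluster:
# fix_data_dir_perms /usr/local/hadoop/tmp/dfs/data
```

Running this over every entry of dfs.data.dir before `start-all.sh` avoids fixing one directory only to have the DataNode reject the next.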

