Fixing a DataNode that fails to start


The DataNode log shows the following:

2017-07-09 08:45:52,209 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2017-07-09 08:45:52,577 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2017-07-09 08:45:52,762 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2017-07-09 08:45:52,832 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2017-07-09 08:45:52,832 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2017-07-09 08:45:52,835 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is slave1
2017-07-09 08:45:52,843 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2017-07-09 08:45:52,866 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2017-07-09 08:45:52,868 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2017-07-09 08:45:52,868 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 5
2017-07-09 08:45:52,928 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-07-09 08:45:52,932 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2017-07-09 08:45:52,944 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-07-09 08:45:52,946 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2017-07-09 08:45:52,946 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-07-09 08:45:52,946 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-07-09 08:45:52,962 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2017-07-09 08:45:52,969 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50075
2017-07-09 08:45:52,969 INFO org.mortbay.log: jetty-6.1.26
2017-07-09 08:45:53,175 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50075
2017-07-09 08:45:53,322 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = root
2017-07-09 08:45:53,322 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = supergroup
2017-07-09 08:45:53,354 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2017-07-09 08:45:53,369 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2017-07-09 08:45:53,414 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2017-07-09 08:45:53,422 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2017-07-09 08:45:53,438 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
2017-07-09 08:45:53,452 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to master1/192.168.24.155:8000 starting to offer service
2017-07-09 08:45:53,457 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2017-07-09 08:45:53,459 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2017-07-09 08:45:53,868 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/app/hdpdata/dfs/data/in_use.lock acquired by nodename 2581@slave1
2017-07-09 08:45:53,870 WARN org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /usr/app/hdpdata/dfs/data: namenode clusterID = CID-dd984253-7fa7-4d68-8613-a17cbcea11cd; datanode clusterID = CID-eedde44b-fcde-4ee8-8cc2-0de5f4923870
2017-07-09 08:45:53,871 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to master1/192.168.24.155:8000. Exiting. 
java.io.IOException: All specified directories are failed to load.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:478)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1338)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1304)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:314)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:226)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:867)
at java.lang.Thread.run(Thread.java:748)
2017-07-09 08:45:53,873 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to master1/192.168.24.155:8000
2017-07-09 08:45:53,976 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2017-07-09 08:45:55,977 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2017-07-09 08:45:55,983 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2017-07-09 08:45:55,988 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at slave1/192.168.24.129
************************************************************/

Analyzing the log, the two key lines are:

2017-07-09 08:45:53,868 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /usr/app/hdpdata/dfs/data/in_use.lock acquired by nodename 2581@slave1
2017-07-09 08:45:53,870 WARN org.apache.hadoop.hdfs.server.common.Storage: java.io.IOException: Incompatible clusterIDs in /usr/app/hdpdata/dfs/data: namenode clusterID = CID-dd984253-7fa7-4d68-8613-a17cbcea11cd; datanode clusterID = CID-eedde44b-fcde-4ee8-8cc2-0de5f4923870

The problem is a mismatch between the NameNode's clusterID and the DataNode's clusterID. This typically happens when the NameNode is reformatted (hdfs namenode -format) while the DataNode keeps its old storage directory: formatting generates a new clusterID on the NameNode, but the DataNode still has the old one recorded on disk. The fix is to open the VERSION file under the data directory shown in the log message (here /usr/app/hdpdata/dfs/data/current/VERSION), replace its clusterID value with the NameNode's clusterID, and then restart the DataNode.
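A minimal sketch of the edit is below. To keep the commands safe to try, it simulates the DataNode's VERSION file in a temporary directory; on a real node, DATA_DIR would instead be the dfs.datanode.data.dir from the log (/usr/app/hdpdata/dfs/data), and the clusterIDs are the two values reported in the WARN line above.

```shell
# On a real node this would be the dfs.datanode.data.dir from the log:
#   DATA_DIR=/usr/app/hdpdata/dfs/data
# Here we use a temp directory so the demonstration touches nothing real.
DATA_DIR=$(mktemp -d)
mkdir -p "$DATA_DIR/current"

# Simulated DataNode VERSION file carrying the stale clusterID from the log.
cat > "$DATA_DIR/current/VERSION" <<'EOF'
clusterID=CID-eedde44b-fcde-4ee8-8cc2-0de5f4923870
EOF

# Overwrite the DataNode's clusterID with the NameNode's clusterID
# (the "namenode clusterID" value printed in the WARN message).
sed -i 's/^clusterID=.*/clusterID=CID-dd984253-7fa7-4d68-8613-a17cbcea11cd/' \
  "$DATA_DIR/current/VERSION"

cat "$DATA_DIR/current/VERSION"
```

After editing the real VERSION file, restart the DataNode (for example with hadoop-daemon.sh start datanode) and it should register with the NameNode normally.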
