hdfs dfs -put fails with "There are 0 datanode(s) running and no node(s) are excluded in this operation"
$ bin/hdfs dfs -mkdir /user
$ bin/hdfs dfs -mkdir /user/<username>
$ bin/hdfs dfs -put etc/hadoop input
The put step fails with the following error:
17/06/28 00:00:36 WARN hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /user/root/input/hadoop/capacity-scheduler.xml._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1733)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:265)
Checking the DataNode log (`vi /usr/src/hadoop/logs/hadoop-root-datanode-centos128.log`) shows incompatible clusterIDs. The cause was an earlier re-format of the NameNode, so the stale storage files need to be deleted:
2017-06-28 00:48:21,085 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /tmp/hadoop-root/dfs/data/in_use.lock acquired by nodename 2533@centos128
2017-06-28 00:48:21,089 WARN org.apache.hadoop.hdfs.server.common.Storage: Failed to add storage directory [DISK]file:/tmp/hadoop-root/dfs/data/
java.io.IOException: Incompatible clusterIDs in /tmp/hadoop-root/dfs/data: namenode clusterID = CID-00699a28-9ec0-4301-ae92-5a04cf302565; datanode clusterID = CID-3c15bdd3-4886-41ab-a3a8-d899d0b607b9
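The mismatch can be confirmed directly by comparing the `clusterID` fields of the two VERSION files (this setup keeps them under the default `/tmp/hadoop-root` paths). Below is a minimal sketch of the comparison the DataNode performs at startup, using the two IDs from the log above as literals, since reading the real files requires a live installation:

```shell
# Sketch: compare clusterIDs the way the DataNode does at startup.
# On a real node, read them from the VERSION files instead, e.g.:
#   grep clusterID /tmp/hadoop-root/dfs/name/current/VERSION
#   grep clusterID /tmp/hadoop-root/dfs/data/current/VERSION
nn_id="CID-00699a28-9ec0-4301-ae92-5a04cf302565"   # NameNode clusterID (from the log)
dn_id="CID-3c15bdd3-4886-41ab-a3a8-d899d0b607b9"   # DataNode clusterID (from the log)
if [ "$nn_id" != "$dn_id" ]; then
  echo "Incompatible clusterIDs: namenode=$nn_id datanode=$dn_id"
fi
```

If the IDs differ, the DataNode refuses to register, which is exactly why the NameNode sees zero live nodes.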
Stop DFS, then clear out the related files under /tmp:
[root@centos128 hadoop]# sbin/stop-dfs.sh
Stopping namenodes on [localhost]
localhost: stopping namenode
localhost: no datanode to stop
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
[root@centos128 hadoop]# ll /tmp
total 0
drwxr-xr-x. 3 root root 20 Jun 27 23:46 hadoop
drwxr-xr-x. 4 root root 31 Jun 27 23:46 hadoop-root
drwxr-xr-x. 2 root root 6 Jun 28 00:57 hsperfdata_root
drwxr-xr-x. 3 root root 21 Jun 28 00:48 Jetty_0_0_0_0_50070_hdfs____w2cu08
drwxr-xr-x. 3 root root 21 Jun 28 00:48 Jetty_0_0_0_0_50090_secondary____y6aanv
drwxr-xr-x. 3 root root 21 Jun 27 23:01 Jetty_localhost_33451_datanode____ihzion
drwxr-xr-x. 3 root root 21 Jun 28 00:19 Jetty_localhost_34384_datanode____gijfbp
drwxr-xr-x. 3 root root 21 Jun 28 00:48 Jetty_localhost_38324_datanode____wwolqv
drwxr-xr-x. 3 root root 21 Jun 27 23:49 Jetty_localhost_40770_datanode____.cx9tc5
drwxr-xr-x. 3 root root 21 Jun 27 23:59 Jetty_localhost_43407_datanode____.hgiwfx
drwxr-xr-x. 3 root root 16 Jun 9 06:34 pip-build-hv2v27iz
drwx------. 3 root root 17 Jun 28 00:27 systemd-private-fc429f3080ff45c493f8d0afc4a92699-vmtoolsd.service-j5KhNK
[root@centos128 hadoop]# cd /tmp
[root@centos128 tmp]# rm -rf hsp* Jett* systemd* hadopp*
(Note the typo `hadopp*`: the hadoop directories survive this command and have to be removed again below.)
[root@centos128 tmp]# ll
total 0
drwxr-xr-x. 3 root root 20 Jun 27 23:46 hadoop
drwxr-xr-x. 4 root root 31 Jun 27 23:46 hadoop-root
drwxr-xr-x. 3 root root 16 Jun 9 06:34 pip-build-hv2v27iz
[root@centos128 tmp]# rm -rf hadoop*
[root@centos128 tmp]# ll
total 0
drwxr-xr-x. 3 root root 16 Jun 9 06:34 pip-build-hv2v27iz
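The storage directories only ended up under /tmp because `dfs.namenode.name.dir` and `dfs.datanode.data.dir` were left at their defaults, which derive from `hadoop.tmp.dir`; anything in /tmp may be wiped, which reproduces this problem. A sketch of moving them in hdfs-site.xml (the paths are illustrative, not from this setup):

```xml
<!-- hdfs-site.xml: move HDFS storage out of /tmp (illustrative paths) -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///var/hadoop/dfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///var/hadoop/dfs/data</value>
</property>
```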
Re-format the NameNode and restart DFS:
bin/hdfs namenode -format
sbin/start-dfs.sh
Browse the web interface for the NameNode; by default it is available at:
NameNode - http://192.168.44.128:50070/
The web UI now shows one live node, and the Datanode Information page has an entry.
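The same check can be done from the command line with `bin/hdfs dfsadmin -report`. A small sketch of parsing its "Live datanodes" header; the sample string stands in for real output, which requires a running cluster:

```shell
# Sample header line from `bin/hdfs dfsadmin -report` on a healthy
# single-node cluster; in practice, pipe the real command's output in.
report="Live datanodes (1):"
live=$(printf '%s\n' "$report" | sed -n 's/^Live datanodes (\([0-9][0-9]*\)).*/\1/p')
if [ "${live:-0}" -ge 1 ]; then
  echo "OK: $live live DataNode(s)"
else
  echo "No live DataNodes registered"
fi
```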
[root@centos128 hadoop]# bin/hdfs dfs -mkdir /user
[root@centos128 hadoop]# bin/hdfs dfs -mkdir /user/root
[root@centos128 hadoop]# bin/hdfs dfs -put etc/hadoop input
No errors this time, everything is normal. Continuing:
[root@centos128 hadoop]# bin/hdfs dfs -ls
Found 2 items
drwxr-xr-x - root supergroup 0 2017-06-28 01:05 input
drwxr-xr-x - root supergroup 0 2017-06-28 01:06 output
[root@centos128 hadoop]# bin/hdfs dfs -ls output
Found 2 items
-rw-r--r-- 1 root supergroup 0 2017-06-28 01:06 output/_SUCCESS
-rw-r--r-- 1 root supergroup 220 2017-06-28 01:06 output/part-r-00000
[root@centos128 hadoop]# bin/hdfs dfs -ls input/hadoop
ls: `input/hadoop': No such file or directory
[root@centos128 hadoop]# bin/hdfs dfs -ls input
Found 29 items
-rw-r--r-- 1 root supergroup 4942 2017-06-28 01:05 input/capacity-scheduler.xml
-rw-r--r-- 1 root supergroup 1335 2017-06-28 01:05 input/configuration.xsl
-rw-r--r-- 1 root supergroup 318 2017-06-28 01:05 input/container-executor.cfg
-rw-r--r-- 1 root supergroup 884 2017-06-28 01:05 input/core-site.xml
-rw-r--r-- 1 root supergroup 3804 2017-06-28 01:05 input/hadoop-env.cmd
-rw-r--r-- 1 root supergroup 4696 2017-06-28 01:05 input/hadoop-env.sh
-rw-r--r-- 1 root supergroup 2490 2017-06-28 01:05 input/hadoop-metrics.properties
-rw-r--r-- 1 root supergroup 2598 2017-06-28 01:05 input/hadoop-metrics2.properties
-rw-r--r-- 1 root supergroup 9683 2017-06-28 01:05 input/hadoop-policy.xml
-rw-r--r-- 1 root supergroup 867 2017-06-28 01:05 input/hdfs-site.xml
-rw-r--r-- 1 root supergroup 1449 2017-06-28 01:05 input/httpfs-env.sh
-rw-r--r-- 1 root supergroup 1657 2017-06-28 01:05 input/httpfs-log4j.properties
-rw-r--r-- 1 root supergroup 21 2017-06-28 01:05 input/httpfs-signature.secret
-rw-r--r-- 1 root supergroup 620 2017-06-28 01:05 input/httpfs-site.xml
-rw-r--r-- 1 root supergroup 3518 2017-06-28 01:05 input/kms-acls.xml
-rw-r--r-- 1 root supergroup 1611 2017-06-28 01:05 input/kms-env.sh
-rw-r--r-- 1 root supergroup 1631 2017-06-28 01:05 input/kms-log4j.properties
-rw-r--r-- 1 root supergroup 5546 2017-06-28 01:05 input/kms-site.xml
-rw-r--r-- 1 root supergroup 13661 2017-06-28 01:05 input/log4j.properties
-rw-r--r-- 1 root supergroup 951 2017-06-28 01:05 input/mapred-env.cmd
-rw-r--r-- 1 root supergroup 1383 2017-06-28 01:05 input/mapred-env.sh
-rw-r--r-- 1 root supergroup 4113 2017-06-28 01:05 input/mapred-queues.xml.template
-rw-r--r-- 1 root supergroup 758 2017-06-28 01:05 input/mapred-site.xml.template
-rw-r--r-- 1 root supergroup 10 2017-06-28 01:05 input/slaves
-rw-r--r-- 1 root supergroup 2316 2017-06-28 01:05 input/ssl-client.xml.example
-rw-r--r-- 1 root supergroup 2697 2017-06-28 01:05 input/ssl-server.xml.example
-rw-r--r-- 1 root supergroup 2250 2017-06-28 01:05 input/yarn-env.cmd
-rw-r--r-- 1 root supergroup 4567 2017-06-28 01:05 input/yarn-env.sh
-rw-r--r-- 1 root supergroup 690 2017-06-28 01:05 input/yarn-site.xml
[root@centos128 hadoop]# bin/hdfs dfs -get output output
[root@centos128 hadoop]# cat output/*
cat: output/output: Is a directory
1	dfsadmin
A local `output` directory already existed, so `-get` placed the HDFS directory inside it as `output/output` (hence the "Is a directory" message). The `1 dfsadmin` line is the result of the grep example job from the setup guide, which was presumably run between the put and the listings above (not shown in this transcript).