Error when getting a file from a Hadoop cluster
Source: Internet · Editor: 程序博客网 · Time: 2024/05/16 19:15
Today, while getting a file from the Hadoop cluster, the following error occurred:
$hdfs dfs -get /test/part-r-00000.gz ./
15/09/15 09:20:33 INFO hdfs.DFSClient: Access token was invalid when connecting to /192.168.2.42:50010 : org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException: Got access token error for OP_READ_BLOCK, self=/192.168.2.12:51440, remote=/192.168.2.42:50010, for file /test/part-r-00000.gz, for pool BP-9392391-192.168.2.101-1404293177278 block 1121463064_48718312
15/09/15 09:20:33 WARN hdfs.DFSClient: Failed to connect to /192.168.2.42:50010 for block, add to deadNodes and continue. org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException: Got access token error for OP_READ_BLOCK, self=/192.168.2.12:51441, remote=/192.168.2.42:50010, for file /test/part-r-00000.gz, for pool BP-9392391-192.168.2.101-1404293177278 block 1121463064_48718312
org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException: Got access token error for OP_READ_BLOCK, self=/192.168.2.12:51441, remote=/192.168.2.42:50010, for file /test/part-r-00000.gz, for pool BP-9392391-192.168.2.101-1404293177278 block 1121463064_48718312
    at org.apache.hadoop.hdfs.RemoteBlockReader2.checkSuccess(RemoteBlockReader2.java:425)
    at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:397)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:786)
    at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:665)
    at org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:325)
The client could not read the block. It turned out that someone was uploading large numbers of files with the put command, congesting HDFS and making downloads fail. After killing the put processes, everything returned to normal.
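The fix above boils down to finding the PIDs of the runaway `hdfs dfs -put` processes and stopping them. Below is a minimal sketch of that filtering step, not part of the original post; the `ps` output here is a hard-coded sample so the logic can be shown without a live cluster, and the sample PIDs and paths are invented for illustration.

```shell
# Sample `ps -eo pid,args` output, captured into a variable so the
# filtering can be demonstrated offline (PIDs/paths are made up).
ps_output='1234 hdfs dfs -put /data/a.gz /test/
5678 hdfs dfs -get /test/part-r-00000.gz ./
9012 hdfs dfs -put /data/b.gz /test/'

# Keep only the put uploads and print their PIDs (first column).
put_pids=$(printf '%s\n' "$ps_output" | awk '/dfs -put/ {print $1}')
echo "$put_pids"

# On a live cluster the real pipeline would be:
#   ps -eo pid,args | awk '/hdfs dfs -put/ {print $1}' | xargs kill
```

Before killing anything, it may also be worth confirming that the block replicas themselves are healthy, e.g. with `hdfs fsck /test/part-r-00000.gz -files -blocks -locations`, since a genuine missing/corrupt block needs a different fix than congestion.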