HDFS error: DisallowedDatanodeException
Source: the Internet · Editor: 程序博客网 · Time: 2024/06/06 02:53
First, the exception stack. I had never looked at the NameNode log before; today I finally did, and found an error. From the message, it looks like the hostname for 192.168.5.208 could not be resolved:
```
2017-10-10 09:45:45,124 WARN  blockmanagement.DatanodeManager (DatanodeManager.java:registerDatanode(882)) - Unresolved datanode registration: hostname cannot be resolved (ip=192.168.5.208, hostname=192.168.5.208)
2017-10-10 09:45:45,124 INFO  ipc.Server (Server.java:logException(2361)) - IPC Server handler 106 on 8020, call org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol.registerDatanode from 192.168.5.208:44010 Call#139663 Retry#0
org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode denied communication with namenode because hostname cannot be resolved (ip=192.168.5.208, hostname=192.168.5.208): DatanodeRegistration(0.0.0.0:50010, datanodeUuid=ebb1de8a-6f91-4188-9f19-1240f7ebb97a, infoPort=50075, infoSecurePort=0, ipcPort=8010, storageInfo=lv=-56;cid=CID-f2892450-2920-4136-a4de-dbdbcb5ba329;nsid=350005801;c=0)
    at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.registerDatanode(DatanodeManager.java:883)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:4616)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.registerDatanode(NameNodeRpcServer.java:1391)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolServerSideTranslatorPB.registerDatanode(DatanodeProtocolServerSideTranslatorPB.java:100)
    at org.apache.hadoop.hdfs.protocol.proto.DatanodeProtocolProtos$DatanodeProtocolService$2.callBlockingMethod(DatanodeProtocolProtos.java:29062)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2273)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2269)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2267)
```
Let's dig into the source code behind the error:
```java
public void registerDatanode(DatanodeRegistration nodeReg)
    throws DisallowedDatanodeException, UnresolvedTopologyException {
  InetAddress dnAddress = Server.getRemoteIp();
  if (dnAddress != null) {
    // Mostly called inside an RPC, update ip and peer hostname
    String hostname = dnAddress.getHostName();
    String ip = dnAddress.getHostAddress();
    if (checkIpHostnameInRegistration && !isNameResolved(dnAddress)) {
      // Reject registration of unresolved datanode to prevent performance
      // impact of repetitive DNS lookups later.
      final String message = "hostname cannot be resolved (ip=" + ip
          + ", hostname=" + hostname + ")";
      LOG.warn("Unresolved datanode registration: " + message);
      throw new DisallowedDatanodeException(nodeReg, message);
    }
    // update node registration with the ip and hostname from rpc request
    nodeReg.setIpAddr(ip);
    nodeReg.setPeerHostName(hostname);
  }
  // ...
```
From the code above we can see that the condition `checkIpHostnameInRegistration && !isNameResolved(dnAddress)` evaluated to true, so execution entered the exception branch.
`checkIpHostnameInRegistration` is backed by a configuration property: `dfs.namenode.datanode.registration.ip-hostname-check`, which can be set to true or false.
Now let's look at the source of `isNameResolved`:
```java
/**
 * Checks if name resolution was successful for the given address. If IP
 * address and host name are the same, then it means name resolution has
 * failed. As a special case, local addresses are also considered
 * acceptable. This is particularly important on Windows, where 127.0.0.1 does
 * not resolve to "localhost".
 *
 * @param address InetAddress to check
 * @return boolean true if name resolution successful or address is local
 */
private static boolean isNameResolved(InetAddress address) {
  String hostname = address.getHostName();
  String ip = address.getHostAddress();
  return !hostname.equals(ip) || NetUtils.isLocalAddress(address);
}
```

From the method's Javadoc we can see that when the hostname and the IP are identical, name resolution has failed and the method returns false (unless the address is local).
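To make the decision rule concrete, here is a minimal standalone sketch of the same logic. It exploits the fact that when reverse DNS fails, `InetAddress.getHostName()` simply returns the textual IP, so `hostname.equals(ip)` signals a failed lookup. The class name `NameCheck` and the `isLocal` flag (standing in for `NetUtils.isLocalAddress`) are illustrative, not part of Hadoop:

```java
// Minimal sketch of DatanodeManager.isNameResolved's decision rule.
// When reverse DNS fails, InetAddress.getHostName() falls back to the
// textual IP, so hostname.equals(ip) indicates a failed lookup.
public class NameCheck {
    // isLocal stands in for NetUtils.isLocalAddress(address)
    static boolean isNameResolved(String hostname, String ip, boolean isLocal) {
        return !hostname.equals(ip) || isLocal;
    }

    public static void main(String[] args) {
        // Reverse DNS worked: hostname differs from the IP -> registration accepted
        System.out.println(isNameResolved("node207", "192.168.5.207", false));
        // Reverse DNS failed: getHostName() fell back to the IP -> rejected
        System.out.println(isNameResolved("192.168.5.208", "192.168.5.208", false));
        // Local addresses are accepted even without a resolved name
        System.out.println(isNameResolved("127.0.0.1", "127.0.0.1", true));
    }
}
```

This matches the log above: the DataNode registered with `ip=192.168.5.208, hostname=192.168.5.208`, so the check failed.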
So the exception branch was taken. Now we understand the root cause: we configured HDFS with IP addresses instead of hostnames.
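The cleaner fix would be to make reverse resolution succeed, for example by adding an IP-to-hostname mapping for each DataNode to `/etc/hosts` on the NameNode. A sketch (the hostname `node208` is a hypothetical placeholder, not from the original cluster):

```
# /etc/hosts on the NameNode
# (node208 is a hypothetical hostname for illustration)
192.168.5.208   node208
```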
But we can't change that right now, so for the time being I only changed the `dfs.namenode.datanode.registration.ip-hostname-check` property so the check is skipped.
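For reference, the workaround described above amounts to the following fragment in `hdfs-site.xml` on the NameNode (followed by a NameNode restart); this disables the reverse-DNS check rather than fixing resolution itself:

```xml
<!-- Workaround: skip the hostname-resolution check during
     datanode registration instead of fixing reverse DNS. -->
<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>
```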