org.apache.hadoop.hdfs.server.namenode.SafeModeException


Exception description:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /aa/bb. Name node is in safe mode. Resources are low on NN. Please add or free up more resources then turn off safe mode manually. NOTE: If you turn off safe mode before adding resources, the NN will immediately return to safe mode. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1207)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3590)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3566)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:754)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:558)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1410)
    at org.apache.hadoop.ipc.Client.call(Client.java:1363)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy11.mkdirs(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103)
    at com.sun.proxy.$Proxy11.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:500)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2553)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2524)
    at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:827)
    at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:823)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:823)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:816)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1815)
    at cn.itheima.bigdata.hadoop.hdfs.HdfsClientEasy.testMkdir(HdfsClientEasy.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:45)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:42)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
    at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
    at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:68)
    at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:47)
    at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231)
    at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60)
    at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229)
    at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50)
    at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222)
    at org.junit.runners.ParentRunner.run(ParentRunner.java:300)
    at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
    at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:678)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
    at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
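
For reference, a minimal client sketch that reproduces the failing call in the trace above (the original HdfsClientEasy test code is not shown; the NameNode URI and user name here are assumptions to replace with your cluster's values):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MkdirRepro {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed NameNode address and user; use your cluster's fs.defaultFS and HDFS user.
        FileSystem fs = FileSystem.get(new URI("hdfs://namenode:9000"), conf, "hadoop");
        // Any namespace write (mkdirs, create, delete, ...) is rejected while the
        // NameNode is in safe mode and surfaces as the SafeModeException shown above.
        fs.mkdirs(new Path("/aa/bb"));
        fs.close();
    }
}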


Cause: the virtual machine or operating system that hosts the NameNode is low on disk space, so the NameNode put itself into safe mode. Run $ df -hl to check disk usage.
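
The NameNode's own check can be approximated with a short sketch: it compares the free space of each name directory against the dfs.namenode.resource.du.reserved threshold (100 MB by default). The path below is an assumption; point it at one of the directories listed in your dfs.namenode.name.dir.

import java.io.File;

public class NameDirSpaceCheck {
    public static void main(String[] args) {
        // Assumed path: replace with a directory from dfs.namenode.name.dir on the NameNode host.
        File nameDir = new File("/hadoop/dfs/name");
        long freeBytes = nameDir.getUsableSpace();
        // 100 MB is the default dfs.namenode.resource.du.reserved; when free space on a
        // name directory drops below it, the NameNode enters safe mode and rejects writes.
        long reservedBytes = 100L * 1024 * 1024;
        System.out.printf("free = %d MB, reserved = %d MB, resources low = %b%n",
                freeBytes >> 20, reservedBytes >> 20, freeBytes < reservedBytes);
    }
}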




Solution:

1. Remove junk files to free up space: rm -rf <filedir> deletes the directory <filedir> and everything under it, so double-check the path before running it.


2. Force Hadoop out of safe mode: $ hdfs dfsadmin -safemode leave (a programmatic equivalent is sketched below).
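
The same switch can be flipped from Java through DistributedFileSystem. A minimal sketch, assuming Hadoop 2.x client jars and a NameNode at hdfs://namenode:9000 (both assumptions to adjust):

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.HdfsConstants.SafeModeAction;

public class LeaveSafeMode {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed NameNode address; adjust to your cluster.
        DistributedFileSystem dfs = (DistributedFileSystem)
                FileSystem.get(new URI("hdfs://namenode:9000"), conf);
        if (dfs.isInSafeMode()) {
            // Equivalent to "hdfs dfsadmin -safemode leave"; as the NOTE in the exception
            // warns, the NameNode re-enters safe mode immediately if its disks are still low.
            dfs.setSafeMode(SafeModeAction.SAFEMODE_LEAVE);
        }
        System.out.println("still in safe mode: " + dfs.isInSafeMode());
        dfs.close();
    }
}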




