Spark Thrift Server in Ambari keeps going down

17/08/24 01:14:21 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[172.25.148.27:50010, 172.25.148.22:50010], original=[172.25.148.27:50010, 172.25.148.22:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1040)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1106)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1253)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1004)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:548)
17/08/24 01:14:21 INFO HiveServer2: Shutting down HiveServer2
17/08/24 01:14:21 INFO ThriftCLIService: Thrift server has stopped
17/08/24 01:14:21 INFO AbstractService: Service:ThriftBinaryCLIService is stopped.
17/08/24 01:14:21 INFO AbstractService: Service:OperationManager is stopped.
17/08/24 01:14:21 INFO AbstractService: Service:SessionManager is stopped.
17/08/24 01:14:21 INFO ServerConnector: Stopped ServerConnector@49ad9f73{HTTP/1.1}{0.0.0.0:4040}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@2b52b69c{/stages/stage/kill,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@7bbecc3c{/api,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@53dda21f{/,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@5679c2b3{/static,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@7c26c98a{/executors/threadDump/json,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@1b8fc658{/executors/threadDump,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@5a3c0970{/executors/json,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@2c4c9e88{/executors,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@55e6eedf{/environment/json,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@fda8f12{/environment,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@42d0a46b{/storage/rdd/json,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@6d3efee5{/storage/rdd,null,UNAVAILABLE}

The cluster currently has exactly three datanodes, and the replication factor is set to 3.

During a write, HDFS builds a pipeline of 3 datanodes, one per replica. The default replace-datanode-on-failure policy is DEFAULT: when a datanode in the pipeline fails and the cluster has 3 or more datanodes, the client tries to recruit a replacement datanode to copy the data to. Here the cluster has only 3 datanodes total, so there is no spare node to recruit; once any one datanode fails, the write can never succeed and the Thrift server's event-log stream dies with the IOException above. To work around this, make the following change.
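The DEFAULT policy's decision can be sketched as a small Python function. This is my reading of the behavior described in the Hadoop documentation, not the actual client code; `r` is the replication factor and `n` is the number of datanodes still alive in the pipeline:

```python
def default_policy_should_replace(r, n, is_append_or_hflushed):
    """Sketch of HDFS's DEFAULT replace-datanode-on-failure policy:
    ask for a replacement datanode only if replication r >= 3 and
    either half the replicas are already gone (r/2 >= n), or the
    stream was appended/hflushed while fewer datanodes than replicas
    remain (r > n)."""
    if r < 3:
        return False
    return (r // 2 >= n) or (r > n and is_append_or_hflushed)

# Three-node cluster, replication 3, one datanode fails mid-write:
# n drops to 2, the policy demands a replacement (the event log is an
# hflushed stream), but no spare datanode exists -- hence the
# "Failed to replace a bad datanode" IOException in the log above.
print(default_policy_should_replace(3, 2, True))
```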

Edit hdfs-site.xml and add (or modify) the following two properties:
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>
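As a sanity check after editing the file (in an Ambari-managed cluster the change goes under HDFS > Configs, followed by a restart of the affected services), the two properties can be read back with a few lines of standard-library Python. The inline XML below is a stand-in for the real file, whose usual path (e.g. /etc/hadoop/conf/hdfs-site.xml) may differ per installation:

```python
import xml.etree.ElementTree as ET
from io import StringIO

# Minimal hdfs-site.xml fragment matching the fix above; in practice,
# parse the real file instead of this string.
HDFS_SITE = """<configuration>
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
    <value>NEVER</value>
  </property>
</configuration>"""

def read_props(xml_text):
    """Return {name: value} for every <property> in a Hadoop site file."""
    root = ET.parse(StringIO(xml_text)).getroot()
    return {p.findtext("name"): p.findtext("value") for p in root.iter("property")}

props = read_props(HDFS_SITE)
print(props["dfs.client.block.write.replace-datanode-on-failure.policy"])
```

With policy NEVER, the client keeps writing to the surviving datanodes instead of demanding a replacement, which is the usual recommendation for clusters with 3 or fewer datanodes.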


