Spark Thrift Server crashing in Ambari
17/08/24 01:14:21 ERROR LiveListenerBus: Listener EventLoggingListener threw an exception
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[172.25.148.27:50010, 172.25.148.22:50010], original=[172.25.148.27:50010, 172.25.148.22:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:1040)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1106)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1253)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:1004)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:548)
17/08/24 01:14:21 INFO HiveServer2: Shutting down HiveServer2
17/08/24 01:14:21 INFO ThriftCLIService: Thrift server has stopped
17/08/24 01:14:21 INFO AbstractService: Service:ThriftBinaryCLIService is stopped.
17/08/24 01:14:21 INFO AbstractService: Service:OperationManager is stopped.
17/08/24 01:14:21 INFO AbstractService: Service:SessionManager is stopped.
17/08/24 01:14:21 INFO ServerConnector: Stopped ServerConnector@49ad9f73{HTTP/1.1}{0.0.0.0:4040}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@2b52b69c{/stages/stage/kill,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@7bbecc3c{/api,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@53dda21f{/,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@5679c2b3{/static,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@7c26c98a{/executors/threadDump/json,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@1b8fc658{/executors/threadDump,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@5a3c0970{/executors/json,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@2c4c9e88{/executors,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@55e6eedf{/environment/json,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@fda8f12{/environment,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@42d0a46b{/storage/rdd/json,null,UNAVAILABLE}
17/08/24 01:14:21 INFO ContextHandler: Stopped o.s.j.s.ServletContextHandler@6d3efee5{/storage/rdd,null,UNAVAILABLE}
The cluster has only three datanodes in total, and the replication factor is set to 3.
During a write, HDFS builds a pipeline across 3 machines. The default replace-datanode-on-failure policy is DEFAULT: when the cluster has 3 or more datanodes, the client tries to find another datanode to replace a failed one and copy the data to it. Since this cluster has exactly 3 machines, as soon as one datanode fails there is no spare node left to substitute in, so the write can never succeed. To resolve this, make the following change.
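The DEFAULT policy's decision of whether to attempt a replacement is described in hdfs-default.xml: with replication factor r and n datanodes still alive in the pipeline, a replacement is attempted when r >= 3 and either floor(r/2) >= n, or r > n and the block has been hflushed or appended. A rough sketch of that condition (a simplified model for illustration, not the actual DFSClient code; the function name is made up):

```python
def should_replace_datanode(r, n, hflushed_or_appended=False):
    """Simplified model of HDFS's DEFAULT replace-datanode-on-failure
    condition, per the hdfs-default.xml description.
    r: configured replication factor
    n: datanodes still alive in the write pipeline"""
    if r < 3:
        # With fewer than 3 replicas, DEFAULT never attempts replacement.
        return False
    return (r // 2 >= n) or (r > n and hflushed_or_appended)

# The failure scenario from the log above: replication 3, one of the
# 3 pipeline nodes has died (n=2), and the event-log file is appended to,
# so the client insists on finding a replacement -- but a 3-node cluster
# has no spare datanode, and the write fails.
print(should_replace_datanode(3, 2, hflushed_or_appended=True))
```

This is why the error only bites on small clusters: with, say, 5 datanodes and replication 3, a replacement candidate usually exists and the pipeline recovers silently.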
Edit hdfs-site.xml and add or modify the following two properties:
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>
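Rather than hand-editing the file, the two entries can be added or updated with a short script; this is an illustrative sketch (the set_hdfs_property helper is an assumption, not a standard tool). Note that on an Ambari-managed cluster the same settings should normally be changed through the Ambari UI, since Ambari regenerates hdfs-site.xml and will overwrite manual edits.

```python
import xml.etree.ElementTree as ET

def set_hdfs_property(root, name, value):
    """Add or update a <property> entry under a parsed <configuration> root."""
    for prop in root.findall("property"):
        if prop.findtext("name") == name:
            prop.find("value").text = value  # update existing entry in place
            return
    # Property not present yet: append a new <property><name/><value/></property>.
    prop = ET.SubElement(root, "property")
    ET.SubElement(prop, "name").text = name
    ET.SubElement(prop, "value").text = value

# Illustrative usage on an in-memory document; in practice you would parse
# the real hdfs-site.xml (path varies by distribution) and write it back.
root = ET.fromstring("<configuration></configuration>")
set_hdfs_property(root, "dfs.client.block.write.replace-datanode-on-failure.enable", "true")
set_hdfs_property(root, "dfs.client.block.write.replace-datanode-on-failure.policy", "NEVER")
print(ET.tostring(root, encoding="unicode"))
```

Setting the policy to NEVER tells the client to keep writing to the surviving pipeline nodes instead of demanding a replacement, which is the usual workaround when the cluster size equals the replication factor.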