Common Spark Problems and Solutions


1. Could not find CoarseGrainedScheduler

The exception details are as follows:

17/12/23 00:50:09 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
org.apache.spark.SparkException: Could not find CoarseGrainedScheduler.
    at org.apache.spark.rpc.netty.Dispatcher.postMessage(Dispatcher.scala:157)
    at org.apache.spark.rpc.netty.Dispatcher.postOneWayMessage(Dispatcher.scala:137)
    at org.apache.spark.rpc.netty.NettyRpcHandler.receive(NettyRpcEnv.scala:647)
    at org.apache.spark.network.server.TransportRequestHandler.processOneWayMessage(TransportRequestHandler.java:178)
    at org.apache.spark.network.server.TransportRequestHandler.handle(TransportRequestHandler.java:107)
    at org.apache.spark.network.server.TransportChannelHandler.channelRead(TransportChannelHandler.java:118)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
    at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
    at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
    at org.apache.spark.network.util.TransportFrameDecoder.channelRead(TransportFrameDecoder.java:85)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:336)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1294)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:357)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:343)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:911)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:643)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:566)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:480)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:442)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:144)
    at java.lang.Thread.run(Thread.java:745)

Analysis and solutions:

1. This is often a resource problem: allocate more cores and executors to the job, give it more memory, and increase the number of partitions of your RDDs.
2. Adding this option to your resource configuration may also solve the problem: --conf spark.dynamicAllocation.enabled=false. Both suggestions are illustrated in the sketch below.
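Below is a minimal, illustrative Scala driver that applies both suggestions. The resource values, the input path hdfs:///path/to/input, and the partition count of 200 are assumptions chosen for the example, not values from a tested configuration; the same settings can also be passed on the spark-submit command line via --num-executors, --executor-cores, --executor-memory and --conf spark.dynamicAllocation.enabled=false.

import org.apache.spark.{SparkConf, SparkContext}

object ResourceTuningExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("resource-tuning-example")
      // Point 1: more executors, cores and memory (illustrative values, tune for your cluster)
      .set("spark.executor.instances", "8")
      .set("spark.executor.cores", "4")
      .set("spark.executor.memory", "4g")
      // Point 2: turn off dynamic allocation so the executor set stays fixed
      .set("spark.dynamicAllocation.enabled", "false")

    val sc = new SparkContext(conf)

    // Point 1 (continued): give the RDD more partitions so each task is smaller
    val lines = sc.textFile("hdfs:///path/to/input")  // hypothetical input path
                  .repartition(200)                   // illustrative partition count

    println(lines.count())
    sc.stop()
  }
}

Keeping the executor set fixed avoids the frequent executor add/remove RPC traffic that this error is often reported alongside, which is why disabling dynamic allocation can help.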


More cases will be added over time...



