java.io.IOException: Cannot run program "/etc/hadoop/conf.cloudera.yarn/topology.py" (in directory "/root"): error=13, Permission denied

Source: Internet · Editor: 程序博客网 · Posted: 2024/06/10 07:05

Running Spark in Cloudera yarn-client mode throws the following exception:

16/09/02 17:16:32 WARN net.ScriptBasedMapping: Exception running /etc/hadoop/conf.cloudera.yarn/topology.py 10.55.45.251
java.io.IOException: Cannot run program "/etc/hadoop/conf.cloudera.yarn/topology.py" (in directory "/root"): error=13, Permission denied
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:508)
	at org.apache.hadoop.util.Shell.run(Shell.java:478)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:738)
	at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:251)
	at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:188)
	at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
	at org.apache.hadoop.yarn.util.RackResolver.coreResolve(RackResolver.java:101)
	at org.apache.hadoop.yarn.util.RackResolver.resolve(RackResolver.java:81)
	at org.apache.spark.scheduler.cluster.YarnScheduler.getRackForHost(YarnScheduler.scala:38)
	at org.apache.spark.scheduler.TaskSetManager$$anonfun$org$apache$spark$scheduler$TaskSetManager$$addPendingTask$1.apply(TaskSetManager.scala:208)
	at org.apache.spark.scheduler.TaskSetManager$$anonfun$org$apache$spark$scheduler$TaskSetManager$$addPendingTask$1.apply(TaskSetManager.scala:187)
	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
	at org.apache.spark.scheduler.TaskSetManager.org$apache$spark$scheduler$TaskSetManager$$addPendingTask(TaskSetManager.scala:187)
	at org.apache.spark.scheduler.TaskSetManager$$anonfun$1.apply$mcVI$sp(TaskSetManager.scala:166)
	at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
	at org.apache.spark.scheduler.TaskSetManager.<init>(TaskSetManager.scala:165)
	at org.apache.spark.scheduler.TaskSchedulerImpl.createTaskSetManager(TaskSchedulerImpl.scala:200)
	at org.apache.spark.scheduler.TaskSchedulerImpl.submitTasks(TaskSchedulerImpl.scala:164)
	at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:1052)
	at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:921)
	at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:861)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1607)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
	at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
	at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
Caused by: java.io.IOException: error=13, Permission denied
	at java.lang.UNIXProcess.forkAndExec(Native Method)
	at java.lang.UNIXProcess.<init>(UNIXProcess.java:186)
	at java.lang.ProcessImpl.start(ProcessImpl.java:130)
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
	... 26 more

The cause: the submit command was missing the --master parameter.

The following command runs correctly:

sudo -u spark spark-submit \
  --class com.raysdata.etl.GPSLogClean \
  --master yarn-cluster \
  --executor-memory 1G \
  --total-executor-cores 10 \
  /tmp/raysdata-1.0-SNAPSHOT.jar \
  /user/optadmin/GPS/Position/Correct/2016/log \
  /user/optadmin/spark/gps/output
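Since the underlying exception is error=13 (Permission denied) on the rack topology script, another thing worth ruling out is the execute bit on the script itself. The sketch below is a hypothetical check using a scratch file rather than the real /etc/hadoop/conf.cloudera.yarn/topology.py; the path and fix on an actual node are assumptions, not part of the original post's solution.

```shell
# Hypothetical sketch: reproduce a missing execute bit and restore it.
# On a real cluster node the target would be the actual topology script,
# e.g. /etc/hadoop/conf.cloudera.yarn/topology.py (assumption).
script=$(mktemp)
printf '#!/usr/bin/env python\nprint("/default-rack")\n' > "$script"

chmod a-x "$script"                              # reproduce the error=13 state
before=$(test -x "$script" && echo yes || echo no)

chmod a+x "$script"                              # grant execute to all users
after=$(test -x "$script" && echo yes || echo no)

echo "executable before=$before after=$after"
rm -f "$script"
```

If the script is readable but not executable by the user who launched the driver (here, root in /root), the ScriptBasedMapping warning above is exactly what Hadoop logs.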






