Spark runtime problem: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources


Running:

spark-submit --master spark://master:7077 --executor-memory 3000g --py-files SparkUtil.py Spark_ModelMatch_eigen.py

produces the following warning, and the Spark job just sits in a waiting state and never runs:

17/08/30 20:56:41 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
17/08/30 20:56:56 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
17/08/30 20:57:11 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

In plain terms, the warning means: the job obtained no resources at initialization; check the cluster to make sure the workers are registered and have sufficient memory.
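One quick way to do that check from the shell, besides opening the cluster UI in a browser: the standalone master serves the same worker/resource data as JSON. A minimal sketch, assuming the default web UI port 8080 (the /json endpoint and its field names can vary across Spark versions):

# Shows registered workers plus total/used cores and memory,
# i.e. exactly what the warning asks you to verify.
curl -s http://master:8080/json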

The cluster itself should have been fine; I had run another program on it before.

Changing it to 30g fixed it. It turns out --executor-memory 3000g requests 3000g of memory per executor; I had misread it as the total memory for the whole job. No worker has anywhere near that much, so no executor could ever be scheduled.

spark-submit --master spark://master:7077 --executor-memory 30g --py-files SparkUtil.py Spark_ModelMatch_eigen.py

That solved the problem.
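For reference, a hedged sketch of how the sizing flags compose on a standalone cluster (the core count below is an illustrative assumption, not from the original run):

# --executor-memory is per executor, so it must fit within a single
# worker's available memory (a worker with 32g free can host one 30g
# executor; a 3000g request can never be satisfied).
# --total-executor-cores (standalone mode) caps cores across the whole app.
spark-submit \
  --master spark://master:7077 \
  --executor-memory 30g \
  --total-executor-cores 8 \
  --py-files SparkUtil.py \
  Spark_ModelMatch_eigen.py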

A second error, from another run:

(TID 458, 172.17.0.35): java.io.IOException: Cannot run program "/home/hadoop/anaconda2/bin/python": error=2, No such file or directory
The file clearly exists, and running it from a terminal on the master starts Python just fine. The actual cause: the node 172.17.0.35 does not have /home/hadoop/anaconda2/bin/python installed.
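Two common ways out, sketched below. The hostnames are illustrative assumptions; PYSPARK_PYTHON is the environment variable PySpark reads to pick the worker-side interpreter, though some setups also need it set in conf/spark-env.sh on each node.

# Option 1: confirm the interpreter really exists on every worker.
for host in worker1 worker2 172.17.0.35; do
  ssh "$host" 'ls -l /home/hadoop/anaconda2/bin/python'
done

# Option 2: point PySpark at an interpreter present on all nodes,
# then resubmit.
export PYSPARK_PYTHON=/usr/bin/python
spark-submit --master spark://master:7077 --executor-memory 30g \
  --py-files SparkUtil.py Spark_ModelMatch_eigen.py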
