[Mark] Problems I ran into when starting out with Spark+Python


I'm new to Spark and am recording small lessons here, both as a note to myself and in the hope that they help others.


Platform and versions:

Ubuntu 12.04 LTS / Python 2.7.3 / Hadoop 2.7.1 / spark-1.6.0-bin-without-hadoop


I won't cover basic installation and configuration here; I recommend the online tutorials by Prof. Lin Ziyu of Xiamen University (very detailed).


Running the code from Section 1.6 of the Machine Learning with Spark textbook fails with an error:

"""文件名为 pythonapp.py"""
from pyspark import SparkContext

sc = SparkContext("local[2]", "First Spark App")
# we take the raw data in CSV format and convert it into a set of records of the form (user, product, price)
data = sc.textFile("data/UserPurchaseHistory.csv") \
    .map(lambda line: line.split(",")) \
    .map(lambda record: (record[0], record[1], record[2]))
# let's count the number of purchases
numPurchases = data.count()
# let's count how many unique users made purchases
uniqueUsers = data.map(lambda record: record[0]).distinct().count()
# let's sum up our total revenue
totalRevenue = data.map(lambda record: float(record[2])).sum()
# let's find our most popular product
products = data.map(lambda record: (record[1], 1.0)).reduceByKey(lambda a, b: a + b).collect()
mostPopular = sorted(products, key=lambda x: x[1], reverse=True)[0]

# Finally, print everything out
print "Total purchases: %d" % numPurchases
print "Unique users: %d" % uniqueUsers
print "Total revenue: %2.2f" % totalRevenue
print "Most popular product: %s with %d purchases" % (mostPopular[0], mostPopular[1])

# stop the SparkContext
sc.stop()

----------------------------------------------------- Error output -------------------------------------

Traceback (most recent call last):
  File "/usr/local/spark/bin/pythonapp.py", line 8, in <module>
    numPurchases = data.count()
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 1004, in count
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 995, in sum
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 869, in fold
  File "/usr/local/spark/python/lib/pyspark.zip/pyspark/rdd.py", line 771, in collect
  File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 813, in __call__
  File "/usr/local/spark/python/lib/py4j-0.9-src.zip/py4j/protocol.py", line 308, in get_return_value
py4j.protocol.Py4JJavaError
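
As pasted, the Py4JJavaError arrives with no message, which hides the Java-side cause. A quick debugging sketch (my addition, not part of the textbook code) that surfaces it by catching the exception around the failing action in pythonapp.py:

from py4j.protocol import Py4JJavaError

try:
    numPurchases = data.count()
except Py4JJavaError as e:
    # str(e) includes the Java-side stack trace, which names the
    # underlying cause (here, an input path missing from HDFS)
    print str(e)
    raise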


------------------------------------------------------- Solution ----------------------------------------------

Start Hadoop and create a directory in HDFS: /usr/hadoop/input/, then upload UserPurchaseHistory.csv into it. (The likely root cause: once Hadoop is configured, sc.textFile() resolves plain paths against HDFS rather than the local filesystem, so the original relative path data/UserPurchaseHistory.csv cannot be found.)

In pythonapp.py, change the data path to: "/usr/hadoop/input/UserPurchaseHistory.csv"

Run pythonapp.py from the spark/bin directory (e.g. with spark-submit pythonapp.py).
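
For reference, a minimal sketch of the corrected read. The commented URI forms make the path resolution explicit; the hdfs:// host/port (localhost:9000) and the file:// local path are assumptions for a typical pseudo-distributed setup, not taken from the original post:

from pyspark import SparkContext

sc = SparkContext("local[2]", "First Spark App")

# With Hadoop configured, a plain path resolves against HDFS
data = sc.textFile("/usr/hadoop/input/UserPurchaseHistory.csv") \
    .map(lambda line: line.split(",")) \
    .map(lambda record: (record[0], record[1], record[2]))

# Equivalent explicit HDFS URI (localhost:9000 is the usual
# pseudo-distributed fs.defaultFS; adjust to your configuration):
# data = sc.textFile("hdfs://localhost:9000/usr/hadoop/input/UserPurchaseHistory.csv")

# Or bypass HDFS and read from the local filesystem instead,
# assuming the CSV actually sits at that local path:
# data = sc.textFile("file:///usr/local/spark/data/UserPurchaseHistory.csv")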


That resolved the error and produced the expected output.




