Spark Basic Functions

Creating RDDs

From an in-memory collection with parallelize():

lines = sc.parallelize(["pandas", "I like pandas"])
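
This and the following snippets assume a SparkContext named sc already exists. A minimal sketch of creating one locally (the "local[2]" master and the app name simply mirror the aggregate() example at the end of this post):

from pyspark import SparkContext

# run Spark locally with 2 worker threads; the application name is arbitrary
sc = SparkContext("local[2]", "Simple App")
lines = sc.parallelize(["pandas", "I like pandas"])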

From a text file with textFile():

inputRDD = sc.textFile("log.txt")
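
textFile() produces an RDD with one element per line of the file. It also accepts an optional minimum number of partitions; in this sketch the value 4 is an arbitrary choice, not something from the original example:

# read the same log with at least 4 partitions, then peek at the first line
inputRDD = sc.textFile("log.txt", 4)
print(inputRDD.first())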

Mapping RDDs

One-to-one mapping with map(): transforms each element into a new element in the result

nums = sc.parallelize([1, 2, 3, 4])
squared = nums.map(lambda x: x * x).collect()
for num in squared:
    print("%i" % num)
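
map() may also change the element type. As an illustrative sketch (the sample lines here are made up, not from the original post), each line can be turned into a (first word, whole line) pair:

lines = sc.parallelize(["error db down", "warning disk full"])
pairs = lines.map(lambda line: (line.split(" ")[0], line))
print(pairs.collect())
# [('error', 'error db down'), ('warning', 'warning disk full')]
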
One-to-many mapping with flatMap(): transforms each element into several elements and flattens them all into the result

num = sc.parallelize(["1 2","2 3 4"])words = nums.map(lambda x: x.split(" ")).collect()for word in words:print(word)

Filtering an RDD with filter():

inputRDD = sc.textFile("log.txt")errorsRDD = inputRDD.filter(lambda x:"error" in x)

Combining RDDs with union():

inputRDD = sc.textFile("log.txt")errorsRDD = inputRDD.filter(lambda x:"error" in x)warningRDD = inputRDD.filter(lambda x:"warning" in x)badLinesRDD = errorsRDD.union(warningRDD)

Counting elements with count()

print( "Inpur had " + badLinesRDD.count() + “ concerning lines”)
Retrieving RDD contents to the driver

Partial retrieval with take(n): brings the first n records of the RDD back to the driver

for line in badLinesRDD.take(10):
    print(line)
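
Two related actions: first() returns just the first element, and takeSample() draws a random sample instead of the leading records. The sample size and seed below are arbitrary:

print(badLinesRDD.first())                              # only the first line
for line in badLinesRDD.takeSample(False, 5, seed=42):  # 5 random lines, no replacement
    print(line)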

Full retrieval with collect()

for line in badLinesRDD.collect():
    print(line)
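
collect() pulls the whole RDD into driver memory, so it only suits results that fit on one machine. For large RDDs a common alternative is writing the data out instead; the output path in this sketch is just a placeholder:

badLinesRDD.saveAsTextFile("bad_lines_output")   # one part-* file per partition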

aggregate(): a more general reduce

from pyspark import SparkContext

sc = SparkContext("local[2]", "Simple App")
nums = sc.parallelize([1, 2, 3, 4, 5])

# aggregate() takes a zero value, a function that merges a value into an accumulator,
# and a function that merges two accumulators; here it computes (sum, count) in one pass
sumCount = nums.aggregate((0, 0),
                          (lambda acc, value: (acc[0] + value, acc[1] + 1)),
                          (lambda acc1, acc2: (acc1[0] + acc2[0], acc1[1] + acc2[1])))
print(sumCount[0] / float(sumCount[1]))   # average of the values
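
For comparison, reduce() is the simpler action that aggregate() generalizes: it requires the result to have the same type as the elements. The same average can be computed with reduce() plus count():

total = nums.reduce(lambda x, y: x + y)   # 1 + 2 + 3 + 4 + 5 = 15
print(total / float(nums.count()))        # 3.0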




