Apache Spark 1.6.1 Tutorial - Revisiting the Titanic Data
This post is a quick walkthrough of PySpark 1.6.1 using the Titanic dataset.
The dataset is small: the training set has only 891 rows. The training and test data can be downloaded here (train.csv, test.csv).
Contents
- Data loading and conversion
- Data cleaning
- Feature extraction
- Applying ml/mllib algorithms
1. Data loading and conversion
a. Loading the data
When pyspark starts up, a SparkContext (sc) is created automatically.
We use sc.textFile to read the csv files, which produces RDDs.
Alternatively, sqlContext.read.text can read the same files, but it produces a DataFrame instead.
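For comparison, here is a minimal sketch of that DataFrame-based variant (it assumes the train_path defined just below); read.text puts each line into a single string column named value, so the rows would still need to be parsed afterwards:

# Read the csv file as a DataFrame with a single string column ("value")
train_text_df = sqlContext.read.text(train_path)
train_text_df.show(3, truncate=False)

This variant is not used in the rest of the post; the RDD route below keeps the parsing explicit.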
train_path = '/Users/chaoranliu/Desktop/github/kaggle/titanic/train.csv'
test_path = '/Users/chaoranliu/Desktop/github/kaggle/titanic/test.csv'

# Load csv files as RDDs
train_rdd = sc.textFile(train_path)
test_rdd = sc.textFile(test_path)
Let's look at the first 3 rows of the RDD:
train_rdd.take(3)
The result is a Python list in which each row is a single string.
[u'PassengerId,Survived,Pclass,Name,Sex,Age,SibSp,Parch,Ticket,Fare,Cabin,Embarked', u'1,0,3,"Braund, Mr. Owen Harris",male,22,1,0,A/5 21171,7.25,,S', u'2,1,1,"Cumings, Mrs. John Bradley (Florence Briggs Thayer)",female,38,1,0,PC 17599,71.2833,C85,C']
b. Converting the RDD to a DataFrame
The Spark DataFrame was inspired by R data frames and pandas DataFrames. It is Spark's new data format and is meant to supersede the RDD in later versions. Its syntax differs from the RDD API and is much closer to R and pandas. Here I convert the RDDs to DataFrames to simplify the data processing that follows.
Steps:
- Drop the header (the first row)
- Split each row on commas and convert it to a tuple
- Name the columns using the header; because the quoted Name field itself contains a comma (e.g. "Braund, Mr. Owen Harris"), splitting produces one extra field, which is kept as a FirstName column
# Parse Train RDD to DF
def parseTrain(rdd):
    # extract data header (first row)
    header = rdd.first()
    # remove header
    body = rdd.filter(lambda r: r != header)

    def parseRow(row):
        # parse one text row into the data format:
        # remove double quotes, split the text row by comma
        row_list = row.replace('"', '').split(",")
        # convert python list to tuple, which is
        # compatible with pyspark data structure
        row_tuple = tuple(row_list)
        return row_tuple

    rdd_parsed = body.map(parseRow)
    colnames = header.split(",")
    colnames.insert(3, 'FirstName')
    return rdd_parsed.toDF(colnames)

## Parse Test RDD to DF
def parseTest(rdd):
    header = rdd.first()
    body = rdd.filter(lambda r: r != header)

    def parseRow(row):
        row_list = row.replace('"', '').split(",")
        row_tuple = tuple(row_list)
        return row_tuple

    rdd_parsed = body.map(parseRow)
    colnames = header.split(",")
    colnames.insert(2, 'FirstName')
    return rdd_parsed.toDF(colnames)

train_df = parseTrain(train_rdd)
test_df = parseTest(test_rdd)
Now let's look at the resulting DataFrame:
train_df.show(3)
+-----------+--------+------+---------+--------------------+------+---+-----+-----+----------------+-------+-----+--------+
|PassengerId|Survived|Pclass|FirstName|                Name|   Sex|Age|SibSp|Parch|          Ticket|   Fare|Cabin|Embarked|
+-----------+--------+------+---------+--------------------+------+---+-----+-----+----------------+-------+-----+--------+
|          1|       0|     3|   Braund|     Mr. Owen Harris|  male| 22|    1|    0|       A/5 21171|   7.25|     |       S|
|          2|       1|     1|  Cumings| Mrs. John Bradle...|female| 38|    1|    0|        PC 17599|71.2833|  C85|       C|
|          3|       1|     3|Heikkinen|         Miss. Laina|female| 26|    0|    0|STON/O2. 3101282|  7.925|     |       S|
+-----------+--------+------+---------+--------------------+------+---+-----+-----+----------------+-------+-----+--------+
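As an aside, if the external spark-csv package (com.databricks:spark-csv) is available, the same file could be loaded directly as a DataFrame with a proper header and schema, skipping the manual parsing above; this is only a sketch and is not used in the rest of the post:

# Assumes pyspark was started with --packages com.databricks:spark-csv_2.10:1.4.0
train_df_alt = (sqlContext.read
                .format('com.databricks.spark.csv')
                .options(header='true', inferSchema='true')
                .load(train_path))
train_df_alt.printSchema()

Because spark-csv understands quoted fields, the comma inside the Name field would not produce a separate FirstName column.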
c. Merging the training and test data
Merging the training and test data makes the subsequent data cleaning and feature extraction easier; a Mark column records which set each row came from.
## Add Survived column to test
from pyspark.sql.functions import lit, col
train_df = train_df.withColumn('Mark', lit('train'))
test_df = (test_df.withColumn('Survived', lit(0))
                  .withColumn('Mark', lit('test')))
test_df = test_df[train_df.columns]
## Append Test data to Train data
df = train_df.unionAll(test_df)
2. Data cleaning
a. Casting Age, SibSp, Parch, and Fare to numeric types
df = (df.withColumn('Age', df['Age'].cast("double"))
        .withColumn('SibSp', df['SibSp'].cast("double"))
        .withColumn('Parch', df['Parch'].cast("double"))
        .withColumn('Fare', df['Fare'].cast("double"))
        .withColumn('Survived', df['Survived'].cast("double"))
     )
df.printSchema()
We can see that the four variables Age, SibSp, Parch, and Fare are now numeric:
root
 |-- PassengerId: string (nullable = true)
 |-- Survived: double (nullable = true)
 |-- Pclass: string (nullable = true)
 |-- FirstName: string (nullable = true)
 |-- Name: string (nullable = true)
 |-- Sex: string (nullable = true)
 |-- Age: double (nullable = true)
 |-- SibSp: double (nullable = true)
 |-- Parch: double (nullable = true)
 |-- Ticket: string (nullable = true)
 |-- Fare: double (nullable = true)
 |-- Cabin: string (nullable = true)
 |-- Embarked: string (nullable = true)
 |-- Mark: string (nullable = false)
b. Filling missing values with the mean
Age and Fare have 263 and 1 missing values respectively; here I simply fill them with the column means.
numVars = ['Survived', 'Age', 'SibSp', 'Parch', 'Fare']

def countNull(df, var):
    return df.where(df[var].isNull()).count()

missing = {var: countNull(df, var) for var in numVars}
age_mean = df.groupBy().mean('Age').first()[0]
fare_mean = df.groupBy().mean('Fare').first()[0]
df = df.na.fill({'Age': age_mean, 'Fare': fare_mean})
The missing-value counts per column:
{'Age': 263, 'Fare': 1, 'Parch': 0, 'SibSp': 0, 'Survived': 0}
3. Feature extraction
a. Extracting Title from Name
The idea is to create a user-defined function (udf) and apply it to the Name column to pull out the title.
from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

## create a user defined function to extract the title
getTitle = udf(lambda name: name.split('.')[0].strip(), StringType())
df = df.withColumn('Title', getTitle(df['Name']))
df.select('Name', 'Title').show(3)
The DataFrame df now has an extra Title column:
+--------------------+-----+
|                Name|Title|
+--------------------+-----+
|     Mr. Owen Harris|   Mr|
| Mrs. John Bradle...|  Mrs|
|         Miss. Laina| Miss|
+--------------------+-----+
only showing top 3 rows
b. Indexing categorical variables
Categorical variables usually need to be converted to numeric values before most machine learning algorithms can use them. Here I simply index them, e.g. Sex: male => 0, female => 1. The drawback is that indexing implicitly imposes a numeric ordering on the categories. One-hot encoding avoids this, but it can greatly increase the dimensionality (the number of features); a sketch of that alternative follows the indexing output below.
catVars = ['Pclass', 'Sex', 'Embarked', 'Title']

## index the Sex variable
from pyspark.ml.feature import StringIndexer
si = StringIndexer(inputCol='Sex', outputCol='Sex_indexed')
df_indexed = si.fit(df).transform(df).drop('Sex').withColumnRenamed('Sex_indexed', 'Sex')

## make use of a pipeline to index all categorical variables
def indexer(df, col):
    si = StringIndexer(inputCol=col, outputCol=col + '_indexed').fit(df)
    return si

indexers = [indexer(df, col) for col in catVars]

from pyspark.ml import Pipeline
pipeline = Pipeline(stages=indexers)
df_indexed = pipeline.fit(df).transform(df)
df_indexed.select('Embarked', 'Embarked_indexed').show(3)
In the resulting data, Embarked is mapped to S => 0, C => 1, Q => 2:
+--------+----------------+
|Embarked|Embarked_indexed|
+--------+----------------+
|       S|             0.0|
|       C|             1.0|
|       S|             0.0|
+--------+----------------+
only showing top 3 rows
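As mentioned above, one-hot encoding avoids the artificial ordering that plain indexing introduces, at the cost of extra feature columns. A minimal sketch for the already-indexed Embarked column (the Embarked_onehot name is only illustrative, and the encoded column is not used later in this post):

from pyspark.ml.feature import OneHotEncoder

# expand the indexed Embarked column into a sparse 0/1 vector
ohe = OneHotEncoder(inputCol='Embarked_indexed', outputCol='Embarked_onehot')
df_onehot = ohe.transform(df_indexed)
df_onehot.select('Embarked', 'Embarked_indexed', 'Embarked_onehot').show(3)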
c. Converting the data into label/features format
To use the ml/mllib algorithms, the features need to be packed into a single Vector column.
catVarsIndexed = [i + '_indexed' for i in catVars]
featuresCol = numVars + catVarsIndexed
featuresCol.remove('Survived')
labelCol = ['Mark', 'Survived']

from pyspark.sql import Row
from pyspark.mllib.linalg import DenseVector

row = Row('mark', 'label', 'features')
df_indexed = df_indexed[labelCol + featuresCol]
# 0-mark, 1-label, 2-features
# map features to DenseVector
lf = (df_indexed.map(lambda r: (row(r[0], r[1], DenseVector(r[2:]))))
                .toDF())
# index label
# convert numeric label to categorical, which is required by
# decisionTree and randomForest
lf = (StringIndexer(inputCol='label', outputCol='index')
      .fit(lf)
      .transform(lf))
lf.show(3)
+-----+-----+--------------------+-----+
| mark|label|            features|index|
+-----+-----+--------------------+-----+
|train|  0.0|[22.0,1.0,0.0,7.2...|  0.0|
|train|  1.0|[38.0,1.0,0.0,71....|  1.0|
|train|  1.0|[26.0,0.0,0.0,7.9...|  1.0|
+-----+-----+--------------------+-----+
only showing top 3 rows
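In a purely DataFrame-based workflow, the same label/features layout could also be built with VectorAssembler instead of the row-by-row map above; a minimal sketch reusing the column names already defined (lf_alt is only an illustrative name):

from pyspark.ml.feature import VectorAssembler

# assemble the numeric and indexed columns into a single 'features' vector
assembler = VectorAssembler(inputCols=featuresCol, outputCol='features')
lf_alt = (assembler.transform(df_indexed)
          .withColumnRenamed('Survived', 'label')
          .select('Mark', 'label', 'features'))
lf_alt.show(3)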
d. Re-splitting into training, validation, and test data
train = lf.where(lf.mark == 'train')
test = lf.where(lf.mark == 'test')

# random split further to get train/validate
train, validate = train.randomSplit([0.7, 0.3], seed=121)
print 'Train Data Number of Row: ' + str(train.count())
print 'Validate Data Number of Row: ' + str(validate.count())
print 'Test Data Number of Row: ' + str(test.count())
Train Data Number of Row: 636
Validate Data Number of Row: 255
Test Data Number of Row: 418
4. Applying ml/mllib models
The ml package works on DataFrames, while the mllib package works on RDDs.
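To make the contrast concrete, here is a minimal sketch of what the RDD-based mllib route would look like for the same training data (train_lp and mllib_lr are only illustrative names; the comparison below sticks to the ml package):

from pyspark.mllib.regression import LabeledPoint
from pyspark.mllib.classification import LogisticRegressionWithLBFGS

# mllib expects an RDD of LabeledPoint rather than a DataFrame
train_lp = train.map(lambda r: LabeledPoint(r.label, r.features))
mllib_lr = LogisticRegressionWithLBFGS.train(train_lp, iterations=100)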
Next I fit a logistic regression, a decision tree, and a random forest, and compare how the models perform.
Logistic regression
from pyspark.ml.classification import LogisticRegression

# regParam: regularisation parameter
lr = LogisticRegression(maxIter=100, regParam=0.05, labelCol='index').fit(train)

# Evaluate the model by ROC AUC (the default metric for binary classification)
from pyspark.ml.evaluation import BinaryClassificationEvaluator

def testModel(model, validate=validate):
    pred = model.transform(validate)
    evaluator = BinaryClassificationEvaluator(labelCol='index')
    return evaluator.evaluate(pred)

print 'AUC ROC of Logistic Regression model is: ' + str(testModel(lr))
AUC ROC of Logistic Regression model is: 0.836952368823
The logistic regression model reaches an ROC AUC of 0.837; next we compare it with a decision tree and a random forest.
Decision tree and random forest
from pyspark.ml.classification import DecisionTreeClassifier, RandomForestClassifier

dt = DecisionTreeClassifier(maxDepth=3, labelCol='index').fit(train)
rf = RandomForestClassifier(numTrees=100, labelCol='index').fit(train)

models = {'LogisticRegression': lr, 'DecistionTree': dt, 'RandomForest': rf}
modelPerf = {k: testModel(v) for k, v in models.iteritems()}
print modelPerf
{'DecistionTree': 0.7700267447784003, 'LogisticRegression': 0.8369523688232298, 'RandomForest': 0.8597809475292919}
Without any model tuning, the random forest appears to give the best predictions.
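A natural next step would be to tune the models with a parameter grid and cross-validation; a minimal sketch for the random forest (the grid values are only examples):

from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
from pyspark.ml.evaluation import BinaryClassificationEvaluator

rf_est = RandomForestClassifier(labelCol='index')
grid = (ParamGridBuilder()
        .addGrid(rf_est.numTrees, [50, 100, 200])
        .addGrid(rf_est.maxDepth, [3, 5, 7])
        .build())
cv = CrossValidator(estimator=rf_est,
                    estimatorParamMaps=grid,
                    evaluator=BinaryClassificationEvaluator(labelCol='index'),
                    numFolds=3)
cv_model = cv.fit(train)
print 'AUC ROC of tuned Random Forest: ' + str(testModel(cv_model.bestModel))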
The complete Python code can be found here.