Lecture 4: Scala Pattern Matching and the Type System


Main topics:
Scala pattern matching explained in depth
Scala's type system explained in depth
Spark source-code reading and homework
1. Scala Pattern Matching in Depth
Pattern matching in Scala resembles switch/case in Java, but Java's switch/case can only match and operate on values.
In Scala a match can be made: 1. on values; 2. on types; 3. on the elements of collections such as Map, List and Array.
1.1 Matching on values

scala> def bigData(data: String) {                     // data is the value being matched
     |   data match {                                  // similar to switch in Java
     |     case "Spark" => println("Wow...")
     |     case "Hadoop" => println("Ok")
     |     case _ => println("Something others")       // _ covers every case not listed above
     |   }
     | }
bigData: (data: String)Unit    // the result type is Unit, because println returns Unit

scala> bigData("Hadoop")       // once a case matches, the match finishes; later cases are not tried
Ok

// A guard can be added after a case for a second condition:
case _ if data == "Flink" => println("Cools")
// A variable pattern can also be used: data's value is bound to data_ and the guard is then checked:
case data_ if data_ == "Flink" => println("Cools")

scala> bigData("Flink")        // with one of the guard cases added to the match above
Cools

1.2 Matching on types

scala> import java.io._
import java.io._

scala> :paste
// Entering paste mode (ctrl-D to finish)

def exception(e: Exception) {   // Exception is the type being matched against
  e match {
    case fileException: FileNotFoundException => println("File not found :" + fileException)
    case _: Exception => println("Exception getting thread dump from executor $executorId", e)
  }
}

// Exiting paste mode, now interpreting.

exception: (e: Exception)Unit

scala> exception(new FileNotFoundException("OPP!!!"))
File not found :java.io.FileNotFoundException: OPP!!!

1.3 Matching on collections

scala> def data(array: Array[String]) {
     |   array match {
     |     case Array("Scala") => println("Scala")                                         // match an array containing exactly the given element
     |     case Array(spark, hadoop, flink) => println(spark + "+" + hadoop + "+" + flink) // match on the number of elements; their types need not be specified
     |     case Array("Spark", _*) => println("Spark....")                                 // match arrays starting with "Spark"; _* stands for any number of further elements
     |     case _ => println("Unknown")
     |   }
     | }
data: (array: Array[String])Unit

scala> data(Array("Scala"))                      // a single-element array containing "Scala"
Scala

scala> data(Array("Spark", "Hadoop", "Flink"))   // any three elements
Spark+Hadoop+Flink

scala> data(Array("Spark", "Scala", "Kafka"))    // when two cases both match, the earlier one wins
Spark+Scala+Kafka

scala> data(Array("Spark", "Scala"))
Spark....
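The lecture also mentions matching the elements of a Map or List; since the example above only covers Array, here is a small supplementary sketch (not part of the original REPL session):

def listMatch(list: List[String]): Unit = list match {
  case Nil          => println("empty list")
  case head :: Nil  => println("single element: " + head)
  case head :: tail => println("head = " + head + ", tail = " + tail)
}

listMatch(List("Spark", "Hadoop", "Flink"))   // head = Spark, tail = List(Hadoop, Flink)

// matching each (key, value) pair of a Map
Map("Spark" -> 1, "Hadoop" -> 2).foreach {
  case ("Spark", v) => println("Spark -> " + v)
  case (k, v)       => println(k + " -> " + v)
}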

1.4 Matching on case classes
Case classes are typically used to encapsulate messages, e.g. for message passing in concurrent programming (a short sketch follows the REPL session below).
scala> case class Person(name:String)
defined class Person
A case class only declares its fields; the Scala compiler automatically generates the getters (and setters for var fields) at compile time. It also generates a companion object (object Person) alongside class Person, and that companion contains an apply method: in Person("Spark") the argument "Spark" is passed to apply, and apply then constructs the actual case class instance for you.
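Roughly, the generated companion looks like the following simplified sketch (not the exact compiler output; equals, hashCode, toString, copy, etc. are omitted):

class Person(val name: String)

object Person {
  def apply(name: String): Person = new Person(name)       // lets you write Person("Spark") without `new`
  def unapply(p: Person): Option[String] = Some(p.name)    // lets you write `case Person(n) => ...` in a match
}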

// When arguments are passed to Person, the compiler calls the generated apply method,
// which returns an instance of case class Person.
scala> case class Person(name: String)
defined class Person

scala> Person("Spark")
res8: Person = Person(Spark)

scala> class Person
defined class Person

// In the primary constructor the parameters do not need to be declared with val:
// for a case class they become read-only members by default (val is added implicitly).
scala> case class Worker(name: String, salary: Double) extends Person
defined class Worker

scala> case class Student(name: String, score: Double) extends Person
defined class Student

scala> def sayHi(person: Person) {
     |   person match {
     |     // each case extracts the constructor parameters directly; the right-hand side is the action to take
     |     case Worker(name, salary) => println("I am a worker :" + name + salary)
     |     case Student(name, score) => println("I am a student :" + name + score)
     |     case _ => println("Unknown")
     |   }
     | }
sayHi: (person: Person)Unit

scala> sayHi(Worker("Spark", 6.5))
I am a worker :Spark6.5

scala> sayHi(Student("Spark", 6.6))
I am a student :Spark6.6
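As noted above, case classes and case objects are commonly used as messages in concurrent programs (for example with Akka actors). A minimal, library-free sketch of the idea (Message, Submit, Shutdown and handle are made-up names):

sealed trait Message
case class Submit(jobName: String) extends Message
case object Shutdown extends Message

def handle(msg: Message): Unit = msg match {
  case Submit(name) => println(s"submitting job: $name")
  case Shutdown     => println("shutting down")
}

handle(Submit("wordcount"))   // submitting job: wordcount
handle(Shutdown)              // shutting down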

2. Type Parameters: Generic Classes and Generic Functions

// class Person[T] is a generic class parameterized by type T
scala> class Person[T](val content: T) {
     |   def getContent(id: T) = id + "_" + content
     | }
defined class Person

// Here the type is fixed to String, so content and any later arguments must be Strings.
scala> val p = new Person[String]("Spark")
p: Person[String] = Person@15e8f9b2

// The id argument must be a String, because the type parameter was fixed above.
scala> p.getContent("Scala")
res11: String = Scala_Spark

// The argument is not a String, so this fails to compile.
scala> p.getContent(100)
<console>:13: error: type mismatch;
 found   : Int(100)
 required: String
              p.getContent(100)

3. Upper Bounds and Lower Bounds
For example, suppose a company wants to hire a big-data engineer. "Big-data engineer" is itself a generic notion covering many concrete skills. If you want to constrain it, you need a bound: say, the engineer must at least know Spark, and whatever else they know is up to the concrete subtype. Types often need the same kind of constraint. If we give a type parameter an upper bound, every type supplied must be that bound or one of its subtypes, so inside the class or method we can be sure the parent type's methods exist: a "Spark engineer" is guaranteed to know Spark, and any extra skills are the subtype's business.
Upper bound: <:
In the signature below, _ must be CompressionCodec or one of its subtypes, which guarantees that whatever methods CompressionCodec declares can be called on the subtype.

def saveAsTextFile(path: String, codec: Class[_ <: CompressionCodec])
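To make the upper bound concrete, here is a minimal hypothetical sketch mirroring the engineer analogy above (Engineer, SparkEngineer and Team are made-up names):

class Engineer { def useSpark(): String = "knows Spark" }
class SparkEngineer extends Engineer { def alsoKnowsFlink(): String = "also knows Flink" }

// T <: Engineer: every member is an Engineer or a subtype, so useSpark() is guaranteed to exist.
class Team[T <: Engineer](val member: T) {
  def requiredSkill: String = member.useSpark()
}

new Team(new SparkEngineer).requiredSkill   // "knows Spark"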

Lower bound: >: constrains the type parameter to be a supertype of a given type, or that type itself.
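The original notes give no example for a lower bound, so here is a minimal hypothetical sketch (the names Person, Student and prepend are made up for illustration):

class Person(val name: String)
class Student(name: String) extends Person(name)

// T must be Student or one of its supertypes; prepending a Person widens the result to List[Person].
def prepend[T >: Student](elem: T, rest: List[T]): List[T] = elem :: rest

val students = List(new Student("A"))
val people: List[Person] = prepend(new Person("B"), students)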

4. View Bounds
Syntax: <% — the type parameter must be implicitly convertible (via an implicit conversion) to the required type. Note that the example below actually uses the closely related context-bound syntax [T : Ordering], which requires an implicit Ordering[T] instance rather than an implicit conversion.

scala> class Compare[T : Ordering](val n1: T, val n2: T) {
     |   def bigger(implicit ordered: Ordering[T]) = if (ordered.compare(n1, n2) > 0) n1 else n2
     | }
defined class Compare

scala> new Compare[Int](8, 3).bigger
res14: Int = 8

scala> new Compare[String]("Spark", "Hadoop").bigger
res15: String = Spark    // "S" sorts after "H"

scala> Ordering[String]
res16: scala.math.Ordering[String] = scala.math.Ordering$String$@c262f2f

scala> Ordering[Int]
res17: scala.math.Ordering[Int] = scala.math.Ordering$Int$@1bb96449
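For comparison, a minimal sketch of the classic view-bound syntax itself (deprecated in recent Scala 2 releases); ViewCompare is a placeholder name:

// T <% Ordered[T] means: there must be an implicit conversion from T to Ordered[T].
class ViewCompare[T <% Ordered[T]](val n1: T, val n2: T) {
  def bigger = if (n1 > n2) n1 else n2   // > comes from Ordered, reached through the implicit view
}

// Int works because Predef provides an implicit conversion Int => RichInt, and RichInt is an Ordered[Int].
new ViewCompare(8, 3).bigger    // 8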

T : ClassTag — a context bound that makes the runtime class of T available (as an implicit ClassTag[T]), which is needed, for example, to construct an Array[T] despite type erasure:

scala> def mkArray[T : ClassTag](elems: T*) = Array[T](elems: _*)
mkArray: [T](elems: T*)(implicit evidence$1: scala.reflect.ClassTag[T])Array[T]

scala> mkArray(42, 13)
res0: Array[Int] = Array(42, 13)

scala> mkArray("Japan", "Brazil", "Germany")
res1: Array[String] = Array(Japan, Brazil, Germany)

Homework: read the Spark source code of RDD, HadoopRDD, SparkContext, Master and Worker, and analyse every use of pattern matching and type parameters in them.

Reading the RDD source
Some is a case class, so this is a match on a case class; ReliableRDDCheckpointData[_] is equivalent to ReliableRDDCheckpointData[T] with the type parameter left unnamed.
In case _, the _ matches anything; that branch runs only when none of the preceding cases matched.

checkpointData match {
  case Some(_: ReliableRDDCheckpointData[_]) =>
    logWarning("RDD was already marked for reliable checkpointing: overriding with local checkpoint.")
  case _ =>
}

case matching directly on values

case 0 => Seq.empty
case 1 =>
  val d = rdd.dependencies.head
  debugString(d.rdd, prefix, d.isInstanceOf[ShuffleDependency[_, _, _]], true)
case _ =>

case matching tuples against specific element values

case (desc: String, 0) => s"$partitionStr $desc"
case (desc: String, _) => s"$nextPrefix $desc"

case matching on arrays

case Array(t) => t
case _ => throw new UnsupportedOperationException("empty collection")

Generic functions: JavaRDD[T] is a return type parameterized by T

def toJavaRDD(): JavaRDD[T] = {
  new JavaRDD(this)(elementClassTag)
}

implicit def rddToAsyncRDDActions[T: ClassTag](rdd: RDD[T]): AsyncRDDActions[T] = {
  new AsyncRDDActions(rdd)
}

Reading the HadoopRDD source
case matching on exceptions

case eof: EOFException =>
  finished = true
case e: Exception =>
  if (!ShutdownHookManager.inShutdown()) {
    logWarning("Exception in RecordReader.close()", e)

Reading the SparkContext source
case can also match through an extractor object such as NonFatal

case NonFatal(e) =>
  logError("Error initializing SparkContext.", e)

case in a pattern-matching anonymous function (a partial-function literal)

val data = br.map { case (k, v) =>
  val bytes = v.getBytes
  assert(bytes.length == recordLength, "Byte array does not have correct length")

case matching on strings

 case "local" =>  "file:" + uri.getPath case _ =>

new ReliableCheckpointRDD[T]: a generic class instantiated with type parameter T

protected[spark] def checkpointFile[T: ClassTag](path: String): RDD[T] = withScope {
  new ReliableCheckpointRDD[T](this, path)
}

A generic method with multiple type parameters

def runJob[T, U: ClassTag](
    rdd: RDD[T],
    func: Iterator[T] => U,
    partitions: Seq[Int]): Array[U] = {
  val cleanedFunc = clean(func)
  runJob(rdd, (ctx: TaskContext, it: Iterator[T]) => cleanedFunc(it), partitions)

Reading the Master source
Matching on case classes and case objects

case RequestMasterState => {
  context.reply(MasterStateResponse(
    address.host, address.port, restServerBoundPort,
    workers.toArray, apps.toArray, completedApps.toArray,
    drivers.toArray, completedDrivers.toArray, state))
}

case BoundPortsRequest => {
  context.reply(BoundPortsResponse(address.port, webUi.boundPort, restServerBoundPort))
}

case RequestExecutors(appId, requestedTotal) =>
  context.reply(handleRequestExecutors(appId, requestedTotal))

case KillExecutors(appId, executorIds) =>
  val formattedExecutorIds = formatExecutorIds(executorIds)
  context.reply(handleKillExecutors(appId, formattedExecutorIds))

Reading the Worker source
Matching on specific tuple elements

case (executorId, _) => finishedExecutors.remove(executorId)
case (driverId, _) => finishedDrivers.remove(driverId)

The captured value is bound to _result, and the expression on the right then operates on it:

case pattern(_result) => _result.toBoolean
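A hypothetical sketch of the same idiom: a Regex used as an extractor binds its captured group to _result, which the right-hand side then converts (pattern and parseFlag are made-up names):

val pattern = "(true|false)".r

def parseFlag(s: String): Boolean = s match {
  case pattern(_result) => _result.toBoolean   // the captured group is bound to _result
  case _                => false
}

parseFlag("true")    // true
parseFlag("maybe")   // false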