Episode 4: Scala Pattern Matching, Mastering the Type System, and Reading Spark Source Code


Pattern matching is far more powerful than Java's switch-case: it can match not only on values, but also on types, collections, and more.

The most common use is matching on case classes.

Spark's Master.scala contains a large amount of pattern matching.

Code written with pattern matching is concise and easy to read.

case _ catches any input that matches none of the cases above.

def bigData(data: String) {
    data match {
        case "Spark" => println("Wow!!")
        case "hadoop" => println("ok")
        case _ => println("other")
    }
}                                                //> bigData: (data: String)Unit

 

bigData("Spark")                                 //> Wow!!

 

You can add a guard condition inside a case.

def bigData(data: String) {
    data match {
        case "Spark" => println("Wow!!")
        case "hadoop" => println("ok")
        case _ if data == "Flink" => println("Flink")
        case _ => println("other")
    }
}                                                //> bigData: (data: String)Unit

 

bigData("Flink")                                 //> Flink

1: Binding the matched value to a variable in a case

def bigData1(data: String) {
    data match {
        case "Spark" => println("Wow!!")
        case "hadoop" => println("ok")
        case data_ if data_ == "Flink" => println("Flink:" + data_)
        case _ => println("other")
    }
}                                                //> bigData1: (data: String)Unit

 

bigData1("Flink")                                //> Flink:Flink

 

2: Matching on types

import java.io.FileNotFoundException

def exception(e: Exception) {
    e match {
        case fileException: FileNotFoundException => println("File not found : " + fileException)
        case _: Exception => println("Exception: " + e)
    }
}                                                //> exception: (e: Exception)Unit

 

exception(new FileNotFoundException("oop!!!"))   //> File not found : java.io.FileNotFoundException: oop!!!

 

3: Matching on collections

def data(array: Array[String]) {
    array match {
        case Array("Scala") => println("Scala")
        case Array(spark, hadoop, flink) => println(spark + " : " + hadoop + " : " + flink + " : ")
        case Array("Spark", _*) => println("Spark...")
        case _ => println("Unknown")
    }
}                                                //> data: (array: Array[String])Unit

data(Array("Spark"))                             //> Spark...

data(Array("Scala"))                              //> Scala

data(Array("Scala","Spark","kafaka"))            //> Scala : Spark : kafaka :

 

4: Matching on case classes

case class Person(name: String)
Person("Spark")                                  //> res0: worksheetest.Person = Person(Spark)

1: A case class is similar to a Java bean; its constructor parameters are vals, so only getters are generated (sketched below).

2: Instantiation automatically goes through the companion object's apply method, so no new is needed.
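
A minimal sketch of both points, reusing the Person case class defined above (these lines are illustrative, not from the original worksheet):

val p = Person("Spark")                          // no new: the compiler-generated companion apply is called
p.name                                           // name is a val, so only a getter exists; returns "Spark"
// p.name = "Flink"                              // does not compile: a val field has no setter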

class Person
case class Worker(name: String, salary: Double) extends Person
case class Student(name: String, score: Double) extends Person

 

def sayHi(person: Person) {
    person match {
        case Student(name, score) => println("I am Student :" + name + "," + score)
        case Worker(name, salary) => println("I am Worker :" + name + "," + salary)
        case _ => println("Unknown")
    }
}                                                //> sayHi: (person: worksheetest.Person)Unit

 

sayHi(Worker("Worker",6.5))                      //> I am Worker :Worker,6.5

sayHi(Student("Student",6.5))                    //> I am Student :Student,6.5

 

From the DeployMessages source:

case class ExecutorStateChanged(
    appId: String,
    execId: Int,
    state: ExecutorState,
    message: Option[String],
    exitStatus: Option[Int])
  extends DeployMessage

 

Each use of a case class creates a new instance.

A case object is itself a single instance, globally unique.
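
A minimal sketch contrasting the two (the message types below are made up for illustration, not taken from the Spark source):

sealed trait Message
case class Heartbeat(workerId: String) extends Message      // every use creates a new instance
case object StopMaster extends Message                       // exactly one instance, globally unique

def handle(msg: Message): Unit = msg match {
    case Heartbeat(id) => println("heartbeat from " + id)
    case StopMaster    => println("stopping master")
}

handle(Heartbeat("worker-1"))                    // heartbeat from worker-1
handle(StopMaster)                               // stopping master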

 

Scala's type parameters (a heavyweight topic) are one of the hardest parts of the language, yet extremely useful; they appear throughout the Spark source.

Example: RDD[T: ClassTag]

Generics: a parameter itself carries a type. Scala supports both generic classes and generic functions.

class Person[T](val content: T) {
    def getContent(id: T) = id + " _ " + content
}

val p = new Person[String]("Spark")              //> p  : worksheetest.Person[String] = worksheetest$Person@50134894
p.getContent("Scala")                            //> res0: String = Scala _ Spark

Passing an argument whose type does not match the type parameter is a compile-time error (type mismatch).
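
For example (an illustrative line, not in the original worksheet):

p.getContent(100)                                // does not compile: type mismatch; found: Int(100), required: String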

 

Upper bound: T <: A means the type parameter T must be a subtype of A.

Lower bound: T >: A means T must be a supertype of A.
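
A minimal sketch of an upper bound (the class and method names are illustrative):

class Animal { def name = "animal" }
class Dog extends Animal { override def name = "dog" }

class Keeper[T <: Animal](val pet: T) {          // T is bounded above by Animal
    def petName: String = pet.name
}

new Keeper(new Dog).petName                      // "dog"
// new Keeper("Spark")                           // does not compile: String is not a subtype of Animal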

 

Implicit types (view bounds), e.g. from the Spark source:

implicit def rddToSequenceFileRDDFunctions[K <% Writable: ClassTag, V <% Writable: ClassTag](

Implicit values are injected from the surrounding context, and the injection happens automatically.
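
A minimal sketch of automatic injection of an implicit value (the names talk and defaultContent are made up for illustration):

def talk(name: String)(implicit content: String) = println(name + " : " + content)

implicit val defaultContent: String = "Spark"    // the implicit value visible in this scope

talk("Hadoop")                                   // the compiler fills in defaultContent automatically
                                                 // prints: Hadoop : Spark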

 

 

class Compare[T: Ordering](val n1: T, val n2: T) {
    def bigger(implicit ordered: Ordering[T]) = if (ordered.compare(n1, n2) > 0) n1 else n2
}

new Compare[Int](5, 2).bigger
new Compare[String]("Spark", "Hadoop").bigger

 

A type parameter can be prefixed with + or - (variance annotations).

 

scala> def mkArray[T: ClassTag](elems: T*) = Array[T](elems: _*)
mkArray: [T](elems: T*)(implicit evidence$1: scala.reflect.ClassTag[T])Array[T]

scala> mkArray(42, 13)
res0: Array[Int] = Array(42, 13)

scala> mkArray("Japan", "Brazil", "Germany")
res1: Array[String] = Array(Japan, Brazil, Germany)

 

Covariance: if S is a subtype of T and List[S] is also a subtype of List[T], then List is covariant in its type parameter. class Person[+T] // declares T as covariant

 

C[+T]: if A is a subclass of B, then C[A] is a subclass of C[B] (covariance).
C[-T]: if A is a subclass of B, then C[B] is a subclass of C[A] (contravariance).
C[T]: whatever the relationship between A and B, C[A] and C[B] have no subtype relationship (invariance).
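
A minimal sketch of the three cases (the class names are illustrative):

class Fruit
class Apple extends Fruit

class CoBox[+T]                                  // C[+T]: covariant
class ContraBox[-T]                              // C[-T]: contravariant
class InvBox[T]                                  // C[T]:  invariant

val a: CoBox[Fruit] = new CoBox[Apple]           // ok: CoBox[Apple] is a subtype of CoBox[Fruit]
val b: ContraBox[Apple] = new ContraBox[Fruit]   // ok: ContraBox[Fruit] is a subtype of ContraBox[Apple]
// val c: InvBox[Fruit] = new InvBox[Apple]      // does not compile: no subtype relationship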

 

Homework: read the Spark source for RDD, HadoopRDD, SparkContext, Master, and Worker, and analyze every use of pattern matching and type parameters in them.
