A Summary of Common Scala Collections
Source: Internet | Editor: 程序博客网 | Published: 2024/06/08 06:51
1. Array
1) Declaration
val arr = new Array[Int](3)
Initialization
arr(0) = 1
arr(1) = 2
arr(2) = 3
Declaration with initialization
val arr = Array(12, 23, 56)
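The declarations above can be combined into a short runnable sketch; the name `nums` and the printed operations are illustrative, not from the original post:

```scala
// Declare a three-element Int array; elements default to 0.
val arr = new Array[Int](3)
arr(0) = 1
arr(1) = 2
arr(2) = 3

// Declare and initialize in one step.
val nums = Array(12, 23, 56)
println(nums.length)          // 3
println(nums.sum)             // 91

// Arrays are mutable: an element can be reassigned in place.
nums(1) = 99
println(nums.mkString(", "))  // 12, 99, 56
```

Note the contrast with List below: an Array's elements can be updated, but its length is fixed at creation.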
2. List
1) Declaration
val li = List(1, 2, 3)
2) Usage: elements are read by index (a List is immutable, so they cannot be reassigned)
li(0) // 1    li(1) // 2    li(2) // 3
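Because a List is immutable, operations on it return a new list rather than modifying it in place. A minimal sketch (the name `li2` is illustrative):

```scala
val li = List(1, 2, 3)
println(li(0))        // 1 (read by index)
println(li.head)      // 1
println(li.tail)      // List(2, 3)

// Prepending with :: builds a new list; the original is untouched.
val li2 = 0 :: li
println(li2)          // List(0, 1, 2, 3)
println(li)           // List(1, 2, 3)
```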
3. Tuple
A tuple is a collection of values of different types.
1) Declaration
val tuple = ("spark", 12.30, 125)
2) Usage: element indices start from 1, and tuple fields are read-only
tuple._1 // "spark"    tuple._3 // 125
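Besides `_1`, `_2`, … accessors, a tuple can be destructured with pattern matching; the binding names below are illustrative:

```scala
val tuple = ("spark", 12.30, 125)
println(tuple._1)   // spark
println(tuple._3)   // 125

// Bind all three fields at once via a pattern.
val (name, version, count) = tuple
println(s"$name $version $count")
```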
4. Set
A Set is made up of distinct elements. Sets come in mutable and immutable variants; the immutable variant is the default.
1) Declaring an immutable set:
scala> var myset=Set("spark",12.05,125)
myset: scala.collection.immutable.Set[Any] = Set(spark, 12.05, 125)
scala> myset+= "hadoop"
scala> myset
res3: scala.collection.immutable.Set[Any] = Set(spark, 12.05, 125, hadoop)
To declare a mutable set, import scala.collection.mutable.Set first:
scala> import scala.collection.mutable.Set
import scala.collection.mutable.Set
scala> val myset=Set("spark",12,'a')
myset: scala.collection.mutable.Set[Any] = Set(12, spark, a)
scala> myset+= "hadoop"
res4: myset.type = Set(12, spark, a, hadoop)
scala> myset
res5: scala.collection.mutable.Set[Any] = Set(12, spark, a, hadoop)
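The "distinct elements" property, together with the usual set operations, can be sketched as follows (the values are illustrative):

```scala
// The duplicate 3 is dropped on construction.
val s1 = Set(1, 2, 3, 3)
println(s1.size)          // 3
println(s1.contains(2))   // true

val s2 = Set(3, 4, 5)
println(s1 & s2)          // Set(3) -- intersection
println((s1 | s2).size)   // 5     -- union
```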
5. Map
A Map is a collection of key-value pairs. Maps also come in mutable and immutable variants and are immutable by default; to declare a mutable map, import scala.collection.mutable.Map.
1) Immutable map
scala> val uni=Map("Pku"->"PekingUniversity","Thu"->"TinghuaUniversity")
uni: scala.collection.immutable.Map[String,String] = Map(Pku -> PekingUniversity, Thu -> TinghuaUniversity)
scala> uni("Pku")
res8: String = PekingUniversity
scala> uni("PekingUniversity")
java.util.NoSuchElementException: key not found: PekingUniversity
at scala.collection.immutable.Map$Map2.apply(Map.scala:129)
... 29 elided
scala> uni.contains("Pku")
res10: Boolean = true
2) Mutable map
scala> import scala.collection.mutable.Map
import scala.collection.mutable.Map
scala> val university=Map("PKU"->"Peking","THU"->"Thinghua")
university: scala.collection.mutable.Map[String,String] = Map(THU -> Thinghua, PKU -> Peking)
scala> university+=("FD"->"Fudan")
res11: university.type = Map(FD -> Fudan, THU -> Thinghua, PKU -> Peking)
scala> for((k,v)<-university) printf("%s mean %s\n",k,v)
FD mean Fudan
THU mean Thinghua
PKU mean Peking
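The transcript above shows that applying an immutable Map to a missing key throws NoSuchElementException. A safer lookup returns an Option via `get`, or a default via `getOrElse`; a minimal sketch (the key "Fd" is illustrative):

```scala
val uni = Map("Pku" -> "PekingUniversity", "Thu" -> "TinghuaUniversity")

println(uni.get("Pku"))                  // Some(PekingUniversity)
println(uni.get("Fd"))                   // None -- no exception thrown
println(uni.getOrElse("Fd", "unknown"))  // unknown
```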
6. Iterator
An Iterator is not a collection itself, but it provides a way to traverse one. It supports two operations: hasNext, which checks whether another element remains, and next(), which returns the next element.
scala> val iter=Iterator("spark","hadoop","storm","hive")
iter: Iterator[String] = non-empty iterator
scala> while(iter.hasNext) println(iter.next())
spark
hadoop
storm
hive
scala> val iter=Iterator("spark","hadoop","storm","hive")
iter: Iterator[String] = non-empty iterator
scala> for(elem<-iter) println(elem)
spark
hadoop
storm
hive
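Note why the transcript rebuilds `iter` before the second loop: an Iterator can be traversed only once, and is exhausted afterwards. A minimal sketch:

```scala
val iter = Iterator("spark", "hadoop", "storm", "hive")
while (iter.hasNext) println(iter.next())

// After the loop the iterator is spent; a second traversal
// requires constructing a new Iterator.
println(iter.hasNext)   // false
```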
7. Seq
A Seq is an ordered sequence whose elements can be accessed by index.
scala> val myseq=Seq(1,2,3)
myseq: Seq[Int] = List(1, 2, 3)
scala> myseq(0)
res27: Int = 1
scala> myseq.apply(0)
res28: Int = 1
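Beyond indexed access (where `myseq(0)` is sugar for `myseq.apply(0)`, as the transcript shows), Seq supports the usual collection operations; a short illustrative sketch:

```scala
val myseq = Seq(1, 2, 3)
println(myseq(0))          // 1 -- same as myseq.apply(0)
println(myseq.reverse)     // List(3, 2, 1)
println(myseq.map(_ * 2))  // List(2, 4, 6)
```

The default Seq implementation is List, which is why the REPL printed `List(1, 2, 3)` above.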