Understanding Variable Scope
Source: Internet  Editor: 程序博客网  Time: 2024/05/23 11:47
Understanding variable scope: compare the two spark-shell sessions below — (a) runs `foreach` over a plain local Scala collection, while (b) runs the same closure over a Spark RDD — and note that the identical-looking code produces different counter values.
(a)
scala> var counter=0
counter: Int = 0
scala> val data=Seq(1,2,3)
data: Seq[Int] = List(1, 2, 3)
scala> data.foreach(x => counter += x)
scala> println ("Counter value:"+counter)
Counter value:6
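In case (a) everything runs inside the driver JVM, so the closure mutates the driver's own `counter` and the result is 6 as expected. The same sum can be written without any mutable state at all — a small sketch (the `LocalSum` object name is ours, not from the original):

```scala
object LocalSum {
  // Idiomatic, mutation-free equivalent of the var-counter loop above.
  def sum(xs: Seq[Int]): Int = xs.foldLeft(0)(_ + _)

  def main(args: Array[String]): Unit =
    println("Counter value:" + sum(Seq(1, 2, 3))) // prints Counter value:6
}
```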
(b)
scala> var counter=0
counter: Int = 0
scala> val data =Seq(1,2,3)
data: Seq[Int] = List(1, 2, 3)
scala> var rdd =sc.parallelize(data)
rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[6] at parallelize at <console>:23
scala> rdd.foreach(x => counter += x)
16/07/08 07:29:21 INFO spark.SparkContext: Starting job: foreach at <console>:28
16/07/08 07:29:21 INFO scheduler.DAGScheduler: Got job 2 (foreach at <console>:28) with 1 output partitions
16/07/08 07:29:21 INFO scheduler.DAGScheduler: Final stage: ResultStage 4 (foreach at <console>:28)
16/07/08 07:29:21 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/07/08 07:29:21 INFO scheduler.DAGScheduler: Missing parents: List()
16/07/08 07:29:21 INFO scheduler.DAGScheduler: Submitting ResultStage 4 (ParallelCollectionRDD[6] at parallelize at <console>:23), which has no missing parents
16/07/08 07:29:21 INFO storage.MemoryStore: Block broadcast_4 stored as values in memory (estimated size 4.6 KB, free 34.2 KB)
16/07/08 07:29:21 INFO storage.MemoryStore: Block broadcast_4_piece0 stored as bytes in memory (estimated size 2.2 KB, free 36.4 KB)
16/07/08 07:29:21 INFO storage.BlockManagerInfo: Added broadcast_4_piece0 in memory on localhost:49890 (size: 2.2 KB, free: 517.4 MB)
16/07/08 07:29:21 INFO spark.SparkContext: Created broadcast 4 from broadcast at DAGScheduler.scala:1006
16/07/08 07:29:21 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 4 (ParallelCollectionRDD[6] at parallelize at <console>:23)
16/07/08 07:29:21 INFO scheduler.TaskSchedulerImpl: Adding task set 4.0 with 1 tasks
16/07/08 07:29:21 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 4.0 (TID 3, localhost, partition 0,PROCESS_LOCAL, 2030 bytes)
16/07/08 07:29:21 INFO executor.Executor: Running task 0.0 in stage 4.0 (TID 3)
16/07/08 07:29:22 INFO executor.Executor: Finished task 0.0 in stage 4.0 (TID 3). 915 bytes result sent to driver
16/07/08 07:29:22 INFO scheduler.DAGScheduler: ResultStage 4 (foreach at <console>:28) finished in 0.761 s
16/07/08 07:29:22 INFO scheduler.DAGScheduler: Job 2 finished: foreach at <console>:28, took 0.901645 s
16/07/08 07:29:22 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 4.0 (TID 3) in 752 ms on localhost (1/1)
16/07/08 07:29:22 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 4.0, whose tasks have all completed, from pool
scala> println("Counter value:" + counter)
Counter value:0
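In case (b) the counter stays 0 because `rdd.foreach` does not run in the driver: Spark serializes the closure, ships it to executors, and each task mutates its *own copy* of the captured `counter`. Those copies are discarded when the tasks finish, so the driver's variable is never touched. A minimal plain-Scala simulation of that behavior (no Spark required; the object and method names are ours for illustration):

```scala
object ClosureDemo {
  // Simulate a Spark task: it receives a *copy* of the captured counter,
  // mutates the copy, and returns a result the driver never folds back in.
  def runTaskWithCopy(capturedCounter: Int, xs: Seq[Int]): Int = {
    var local = capturedCounter
    xs.foreach(x => local += x)
    local // stays on the "executor" side
  }

  def main(args: Array[String]): Unit = {
    var counter = 0
    val data = Seq(1, 2, 3)
    runTaskWithCopy(counter, data) // result discarded, like on an executor
    println("Counter value:" + counter) // prints Counter value:0
  }
}
```

In real Spark code, the supported way to aggregate into a driver-visible variable is an accumulator — `val acc = sc.accumulator(0)` with `rdd.foreach(x => acc += x)` in the Spark 1.x shown in these logs (`sc.longAccumulator` with `acc.add(x)` in Spark 2.x) — or simply an action such as `rdd.sum()` or `rdd.reduce(_ + _)`.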