MapReduce: Simplified Data Processing on Large Clusters (Paper Notes)


Why MapReduce

The issues of how to parallelize the computation, distribute the data, and handle failures conspire to obscure the original simple computation with large amounts of complex code to deal with these issues.

Programming Model

Map

Map, written by the user, takes an input pair and produces a set of intermediate key/value pairs. The MapReduce library groups together all intermediate values associated with the same intermediate key I and passes them to the Reduce function.
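For concreteness, the paper's running word-count example has Map emit the pair (w, 1) for every word w in a document. A minimal Python sketch of that function (the paper writes it in C++-style pseudocode; map_fn is my name for it):

```python
def map_fn(key: str, value: str):
    # key: document name; value: document contents.
    for word in value.split():
        # Emit one intermediate pair per word occurrence.
        yield (word, 1)
```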

Reduce

The Reduce function, also written by the user, accepts an intermediate key I and a set of values for that key. It merges together these values to form a possibly smaller set of values. The intermediate values are supplied to the user’s reduce function via an iterator. This allows us to handle lists of values that are too large to fit in memory.
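The matching word-count Reduce simply sums the emitted counts. Note that the values arrive as an iterator, which is what lets the library stream value lists too large for memory (again a Python sketch; reduce_fn is my name for it):

```python
def reduce_fn(key: str, values):
    # values is an iterator of intermediate values for this key,
    # so counts can be accumulated without materializing the list.
    yield (key, sum(values))
```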

Execution overview

(Figure 1 from the paper: the MapReduce execution overview.)
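To make the flow of Figure 1 concrete, here is a toy single-process driver that mimics the phases, map, shuffle (group intermediate pairs by key), and reduce, using the map_fn/reduce_fn sketched above. This is a sketch of the data flow only; the real library splits the input, runs these phases on many worker machines, and coordinates them through a master:

```python
from collections import defaultdict

def run_mapreduce(inputs, map_fn, reduce_fn):
    # Map phase: each input pair produces intermediate key/value pairs.
    groups = defaultdict(list)
    for key, value in inputs:
        for ikey, ivalue in map_fn(key, value):
            groups[ikey].append(ivalue)  # shuffle: group by intermediate key
    # Reduce phase: one reduce call per intermediate key.
    output = []
    for ikey, ivalues in groups.items():
        output.extend(reduce_fn(ikey, iter(ivalues)))
    return output

# Example: word count over two "documents".
docs = [("doc1", "the quick brown fox"), ("doc2", "the lazy dog the end")]
print(run_mapreduce(docs, map_fn, reduce_fn))
# e.g. [('the', 3), ('quick', 1), ('brown', 1), ...]
```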

Conclusions

Why this model is successful

  1. The model is easy to use, even for programmers without experience with parallel and distributed systems, since it hides the details of parallelization, fault-tolerance, locality optimization, and load balancing.
  2. A large variety of problems are easily expressible as MapReduce computations.
  3. The authors developed an implementation of MapReduce that scales to large clusters comprising thousands of machines.

Experiences

  1. Restricting the programming model makes it easy to parallelize and distribute computations and to make such computations fault-tolerant.
  2. Network bandwidth is a scarce resource: the locality optimization lets workers read input from local disks, and writing a single copy of the intermediate data to local disk saves network bandwidth.
  3. Redundant execution can be used to reduce the impact of slow machines, and to handle machine failures and data loss (see the sketch below).
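Point 3 refers to what the paper calls backup tasks: near the end of a phase, the master schedules duplicate executions of the remaining in-progress tasks and takes whichever copy finishes first. A toy sketch of that idea using Python threads (run_with_backup is a hypothetical helper, not the paper's API):

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def run_with_backup(task, n_copies=2):
    # Launch duplicate executions of the same task and return the
    # result of whichever copy finishes first. This is only safe
    # because MapReduce tasks are deterministic and side-effect-free,
    # so duplicate executions produce the same output.
    with ThreadPoolExecutor(max_workers=n_copies) as pool:
        futures = [pool.submit(task) for _ in range(n_copies)]
        done, _ = wait(futures, return_when=FIRST_COMPLETED)
        # In the real system the master marks the task complete and
        # the slower copy's output is simply discarded.
        return next(iter(done)).result()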
```