Implementing a Simple WordCount in Python with Hadoop Streaming


1. Configuring the PyDev Python plugin in Eclipse

Reference: http://blog.chinaunix.net/uid-11121450-id-1476897.html

 

2. Programming with Hadoop Streaming

Hadoop Streaming exchanges data with the Map and Reduce programs we write through standard input and standard output.
Therefore, any programming language that can read from standard input and write to standard output can be used to write MapReduce programs.
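Because the contract is nothing more than "read standard input, write standard output", even ordinary Unix tools can be plugged in as the mapper and reducer. For example, the streaming jar used in section 4 below can run /bin/cat as the mapper and /usr/bin/wc as the reducer (a sketch only; the /wc_result output path is made up for illustration):

# a streaming job whose mapper and reducer are plain Unix commands
bin/hadoop jar contrib/streaming/hadoop-streaming-1.0.0.jar \
    -mapper /bin/cat \
    -reducer /usr/bin/wc \
    -input /test.txt \
    -output /wc_result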
 
3. A Simple Word Count Program in Python
Note that Python is strict about whitespace: avoid mixing tabs and spaces for indentation, as this easily leads to IndentationError or syntax errors when the scripts run.
mapper.py
#!/usr/bin/python
import sys

for line in sys.stdin:
    # strip whitespace from both ends of the line
    line = line.strip()
    # split the line into words on whitespace
    words = line.split()
    for word in words:
        # emit each word with a count of 1
        print '%s %s' % (word, 1)
 
reducer.py
#!/usr/bin/python
from operator import itemgetter
import sys

word2count = {}

for line in sys.stdin:
    line = line.strip()
    # parse the mapper output: a word and a count separated by a space
    word, count = line.split(' ', 1)
    try:
        count = int(count)
        word2count[word] = word2count.get(word, 0) + count
    except ValueError:
        # ignore lines whose count is not a number
        pass

# sort by word and emit the final counts
sorted_word2count = sorted(word2count.items(), key=itemgetter(0))
for word, count in sorted_word2count:
    print '%s %s' % (word, count)
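Since both scripts only use standard input and output, they can be sanity-checked locally before going anywhere near Hadoop. A quick test (assuming a Python 2 interpreter, which the print statements above require; the sample line and the output shown in the comments are only an illustration):

# run the same map -> sort -> reduce pipeline that Hadoop would run, on one line of text
echo "foo foo bar" | python mapper.py | sort | python reducer.py
# expected output:
# bar 1
# foo 2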
 
 
4. Running the Python scripts on Hadoop
# Run the following command from the Hadoop home directory
bin/hadoop jar contrib/streaming/hadoop-streaming-1.0.0.jar \
    -mapper /usr/local/mapper.py \
    -reducer /usr/local/reducer.py \
    -input /test.txt \
    -output /result
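The command above assumes mapper.py and reducer.py already exist at /usr/local/ on every node of the cluster. If they are only present on the machine submitting the job, Hadoop Streaming's -file option can ship them along with the job; a sketch of that variant (same input and output paths as above):

# ship the scripts with the job and invoke them through the python interpreter
bin/hadoop jar contrib/streaming/hadoop-streaming-1.0.0.jar \
    -file /usr/local/mapper.py -mapper "python mapper.py" \
    -file /usr/local/reducer.py -reducer "python reducer.py" \
    -input /test.txt \
    -output /result

As with the original command, the /result directory must not already exist on HDFS, or the job will refuse to start.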