Hadoop Development and Debugging Environment: Eclipse Configuration Notes
Source: Internet · Editor: 程序博客网 · Date: 2024/06/05 06:17
I. Goal
Building on the single-node pseudo-distributed Hadoop environment installed earlier, install and configure Eclipse and the Hadoop Eclipse plugin, then run the WordCount example to verify that everything works. Based in part on material found online.
II. Configuration Process
1. Prepare the software
eclipse-jee-juno-SR2-linux-gtk-x86_64.tar.gz
hadoop-eclipse-plugin-2.2.0.jar
2. Install
1) Run tar -xvf eclipse-jee-juno-SR2-linux-gtk-x86_64.tar.gz to extract the archive, then move it to /usr/local/eclipse with mv.
2) Copy hadoop-eclipse-plugin-2.2.0.jar into Eclipse's plugins directory:
oliver@bigdatadev:/usr/local/eclipse/plugins$ pwd
/usr/local/eclipse/plugins
oliver@bigdatadev:/usr/local/eclipse/plugins$ ls hadoop-eclipse*
hadoop-eclipse-plugin-2.2.0.jar
Restart Eclipse. If DFS Locations appears under Project Explorer, the plugin has loaded successfully.
3. Configure
1) Click Window -> Show View -> Other..., select MapReduce Tools -> Map/Reduce Locations in the dialog, then click OK.
2) Click Window -> Open Perspective -> Other..., select Map/Reduce in the dialog, then click OK.
3) Right-click the empty area of the Map/Reduce Locations view to bring up the context menu.
4) Choose New Hadoop location...; a dialog opens.
First page: accept the defaults.
Second page: fill in fs.defaultFS so that it matches the value in core-site.xml.
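For reference, a minimal core-site.xml for a single-node pseudo-distributed setup typically looks like the sketch below. The host and port (hdfs://localhost:9000) are an assumption for illustration; use whatever your own installation defines:

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- Assumed value; must match your installation. -->
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

The Host and Port fields of the New Hadoop location dialog should then match this value (here, localhost and 9000).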
5) Check that Eclipse can connect to HDFS. HDFS must be running first (start it with start-all.sh, or start-dfs.sh if you only need HDFS).
4. Run the WordCount example
Create a new project, choosing Map/Reduce Project as the project type.
Sample code:
package demo.n02.hadoop;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class WordCountMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // Emit (word, 1) for every space-separated token in the line.
            String[] words = value.toString().split(" ");
            for (String str : words) {
                word.set(str);
                context.write(word, one);
            }
        }
    }

    public static class WordCountReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum the counts rather than counting the values, so the
            // reducer stays correct if a combiner is configured later.
            int total = 0;
            for (IntWritable val : values) {
                total += val.get();
            }
            context.write(key, new IntWritable(total));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordCountMapper.class);
        job.setReducerClass(WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Define a Run or Debug configuration, passing the HDFS input and output paths as the two program arguments.
If the job runs without errors, the configuration succeeded.
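As a sanity check on the job's results, the same split-on-spaces word count can be reproduced for a small test file with standard Unix tools. The file sample.txt and its contents here are only an illustration; substitute your own input:

```shell
# Hypothetical sample input; replace with your own test file.
printf 'hello hadoop\nhello eclipse\n' > sample.txt

# Split on spaces, then count occurrences of each word --
# this mirrors what the WordCount job computes.
tr ' ' '\n' < sample.txt | sort | uniq -c | awk '{print $2 "\t" $1}'
# -> eclipse 1, hadoop 1, hello 2
```

Comparing this against the part-r-00000 file the job writes to the output directory is a quick way to confirm the MapReduce pipeline itself, independent of the Eclipse setup.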