Setting Up a Hadoop Development Environment on Ubuntu 16.04


JDK

  • Download
http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
  • Extract
sudo tar -zxvf jdk-8u141-linux-x64.tar.gz -C /usr/local/
  • Set environment variables
sudo vim /etc/profile
# add the following
export JAVA_HOME=/usr/local/jdk1.8.0_141
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin
# apply immediately
source /etc/profile
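A quick sanity check that the JDK is visible, assuming the paths above:

java -version        # should report java version "1.8.0_141"
echo $JAVA_HOME      # should print /usr/local/jdk1.8.0_141
# /etc/profile is only sourced at login; open a new shell if the variables are missing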

Add a user and group

  • Create
sudo addgroup hadoop
sudo adduser --ingroup hadoop hadoop
  • Grant sudo privileges
sudo vim /etc/sudoers
# add the following line (prefer sudo visudo, which syntax-checks the file on save)
hadoop  ALL=(ALL:ALL) ALL
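To confirm the new account and its sudo rights, a quick check assuming the entry above was saved:

su - hadoop
sudo -v         # asks for hadoop's password; succeeds if the sudoers entry took effect
sudo whoami     # should print root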

Hadoop

  • Download
http://hadoop.apache.org/releases.html
  • Extract
sudo tar -zxvf hadoop-2.7.3.tar.gz -C /usr/local
  • Environment variables
sudo vim /etc/profile
# add the following
export HADOOP_HOME=/usr/local/hadoop-2.7.3
export PATH=$PATH:$HADOOP_HOME/bin
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
# apply immediately
source /etc/profile
cd /usr/local/hadoop-2.7.3/etc/hadoop/
sudo gedit hadoop-env.sh
# set JAVA_HOME explicitly in hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_141
  • Test
cd /usr/local/hadoop-2.7.3
sudo mkdir input
sudo cp README.txt input/
sudo bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount input output
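To inspect the result, a sketch assuming the directories above (in this standalone run, output is a plain local directory):

cat output/part-r-00000     # one "word<TAB>count" line per distinct word in README.txt
sudo rm -r output           # Hadoop refuses to run if the output directory already exists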

SSH

  • Install
sudo apt-get install openssh-server
  • Start
sudo /etc/init.d/ssh start
  • Check that it is running
ps -e | grep ssh
  • Generate a key pair
ssh-keygen -t rsa -P ""
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  • Allow root login
sudo gedit /etc/ssh/sshd_config
# change the following
PasswordAuthentication yes
PermitRootLogin yes
RSAAuthentication yes
PubkeyAuthentication yes
AuthorizedKeysFile %h/.ssh/authorized_keys
# apply (on Ubuntu the service is named ssh, not sshd)
sudo service ssh restart
  • Log in
ssh localhost
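A quick check that key-based login works, assuming the key generated above:

ssh localhost 'echo ok'   # should print ok without prompting for a password
# the very first connection still asks you to confirm the host fingerprint; answer yes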

Setting up pseudo-distributed mode

  • Create directories
cd /usr/local/hadoop-2.7.3   # create them inside the Hadoop installation directory
mkdir tmp
mkdir dfs
mkdir dfs/name
mkdir dfs/data
tmp holds temporary files, e.g. files produced while jobs run. The namenode and datanode directories, which store the actual contents of HDFS, are placed under tmp by default. Without this configuration, Hadoop puts tmp under the system /tmp directory, which Ubuntu clears on every reboot, wiping the namenode and datanode data with it; you would then have to reformat the Hadoop filesystem after each reboot and lose all previous files and job history. With tmp and the namenode/datanode directories configured under the installation directory, the data survives reboots and no reformatting is needed.
  • Configure core-site.xml
cd /usr/local/hadoop-2.7.3/etc/hadoop
sudo vim core-site.xml
# add the following
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9009</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop-2.7.3/tmp</value>
    </property>
</configuration>
  • Configure hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop-2.7.3/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop-2.7.3/dfs/data</value>
    </property>
</configuration>
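Both files can be sanity-checked without starting any daemon; a sketch assuming the values above:

cd /usr/local/hadoop-2.7.3
bin/hdfs getconf -confKey fs.defaultFS      # should print hdfs://localhost:9009
bin/hdfs getconf -confKey dfs.replication   # should print 1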
  • HDFS
# before each run, delete the files under tmp and under dfs/name and dfs/data
rm -fr tmp/*
rm -fr dfs/name/*
rm -fr dfs/data/*
sudo chown -R qihao:qihao hadoop-2.7.3/
bin/hdfs namenode -format
sbin/start-dfs.sh
sbin/start-yarn.sh   # starts the ResourceManager and NodeManager seen in the jps output
jps
# at first port 9000 was occupied and the NameNode never came up; switching to 9009 fixed it
114371 NameNode
115619 NodeManager
115317 ResourceManager
114711 SecondaryNameNode
115658 Jps
114522 DataNode
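With the daemons up, a minimal HDFS smoke test (qihao and port 9009 are the values configured above):

bin/hdfs dfs -mkdir -p /user/qihao          # create a home directory in HDFS
bin/hdfs dfs -put README.txt /user/qihao/   # upload a local file
bin/hdfs dfs -ls /user/qihao                # list it back
# the NameNode web UI is available at http://localhost:50070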

Configuring Eclipse

  • Download the plugin and put it into Eclipse's plugins folder
http://download.csdn.net/detail/qq_33096883/9906964
  • Configure the Hadoop home directory
In Eclipse, set the Hadoop installation directory under Window->Preferences->Hadoop Map/Reduce.

[screenshot: Hadoop Map/Reduce preferences page]
  • Configure the plugin

Open the Map/Reduce perspective via Window->Open Perspective; Hadoop development is done in this perspective.
Then open Window->Show View->Other->MapReduce Tools->Map/Reduce Locations and choose New Hadoop location… to create a new Hadoop connection, as shown in the figure below.

[screenshot: New Hadoop location dialog]

For Location name and Host, enter localhost. The Map/Reduce Master port must match the port configured in mapred-site.xml, here 9001; the DFS Master port must match the HDFS port configured in core-site.xml, here 9009. User name is the owner of the Hadoop installation, i.e. the Linux user that installed Hadoop, here qihao.
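If mapred-site.xml has not been configured yet, a minimal sketch that would match the 9001 port above, assuming the classic mapred.job.tracker key that the Eclipse plugin reads (YARN itself does not use a JobTracker):

cd /usr/local/hadoop-2.7.3/etc/hadoop
sudo vim mapred-site.xml
# add the following (mapred.job.tracker is an assumption here, kept only for the plugin)
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>localhost:9001</value>
    </property>
</configuration>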

Testing

  • Create a Map/Reduce project
Via src->New->Other you can create a Map class, a Reduce class, and a MapReduceDriver class inside the project; the wizard generates skeletons for all three. Fill in the code, then run the MapReduceDriver class with Run on Hadoop to launch the application.
  • Map code
package com.qihao;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MyMap extends Mapper<LongWritable, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(LongWritable ikey, Text ivalue, Context context) throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(ivalue.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}
  • Reduce code
package com.qihao;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MyReduce extends Reducer<Text, IntWritable, Text, IntWritable> {

    public void reduce(Text _key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        // sum the counts emitted by the mapper for this word
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(_key, new IntWritable(sum));
    }
}
  • Main program
package com.qihao;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class MyRun {

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: Wordcount <in> <out>");
            System.exit(2);
        }
        Job job = Job.getInstance(conf, "JobName");
        job.setJarByClass(com.qihao.MyRun.class);
        job.setMapperClass(MyMap.class);
        job.setReducerClass(MyReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        // input and output are DIRECTORIES, not files
        FileInputFormat.setInputPaths(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
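The same driver can also be run outside Eclipse; a sketch, where wordcount.jar is a hypothetical jar exported from the project:

cd /usr/local/hadoop-2.7.3
bin/hdfs dfs -mkdir -p /user/qihao/input
bin/hdfs dfs -put README.txt /user/qihao/input/
bin/hadoop jar wordcount.jar com.qihao.MyRun /user/qihao/input /user/qihao/output   # wordcount.jar is assumed
bin/hdfs dfs -cat /user/qihao/output/part-r-00000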
  • Project configuration
    [screenshots: project configuration]
