Distributed Parallel Programming with Hadoop (Part 4): Calling a Hadoop Service Remotely from Java


    The previous installments ran MapReduce computations with the Hadoop tools from inside the Hadoop environment. This installment shows how a Java application can use a remote Hadoop service to run MapReduce computations.

I. Installing and Configuring Hadoop

1. Extract Hadoop

$tar zxvf hadoop-1.2.1-bin.tar.gz -C /usr/local/app/hadoop

2. Configure the Hadoop Environment

Edit /etc/profile and add:

export JAVA_HOME=/usr/local/app/jdk1.6.0_45
export JRE_HOME=$JAVA_HOME/jre
export HADOOP_HOME=/usr/local/app/hadoop/hadoop-1.2.1
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
Reload the environment configuration:

$source /etc/profile
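
As a quick sanity check (my addition, not one of the original steps), confirm the new environment is picked up:

$hadoop version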

3. Configure the Hadoop Services

Edit $HADOOP_HOME/conf/core-site.xml:

<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- use the machine's hostname or IP here, not localhost -->
    <value>hdfs://192.168.242.128:9000</value>
    <final>true</final>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/guzicheng/hadoop/tmp</value>
  </property>
</configuration>

Edit $HADOOP_HOME/conf/hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/guzicheng/hadoop/hdfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/guzicheng/hadoop/hdfs/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
Edit $HADOOP_HOME/conf/mapred-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>192.168.242.128:9001</value>
    <final>true</final>
  </property>
</configuration>
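
Before the first start, the HDFS namenode has to be formatted. This one-time step is not in the original walkthrough, but startup will fail on a fresh install without it (do not run it on an HDFS that already holds data):

$hadoop namenode -format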

4. Start the Hadoop Services

Change into the $HADOOP_HOME/bin directory:

$./start-all.sh
During startup you will be prompted for the system user's password several times; type the password and press Enter.
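
To avoid typing the password for each daemon, you can set up passwordless SSH for the user that runs Hadoop. A minimal sketch of this optional convenience step (not part of the original walkthrough):

$ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
$cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$chmod 600 ~/.ssh/authorized_keys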

After a successful start there should be five processes running: NameNode, SecondaryNameNode, DataNode, JobTracker, and TaskTracker. You can check with:

$jps
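
The output should look roughly like the following; the process IDs are illustrative and will differ on your machine:

$jps
2481 NameNode
2596 DataNode
2714 SecondaryNameNode
2793 JobTracker
2910 TaskTracker
3021 Jps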

II. Writing the Java Client

1. Maven Configuration

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-core</artifactId>
    <version>1.2.1</version>
</dependency>

2. Java Code

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.springframework.stereotype.Service;

// WordCountService is the application's own service interface, defined elsewhere.
@Service
public class WordCountServiceImpl implements WordCountService {

  // Test entry point
  public static void main(String[] args) {
    try {
      new WordCountServiceImpl().wordCount();
    } catch (Exception e) {
      e.printStackTrace();
    }
  }

  public int wordCount() throws Exception {
    Configuration conf = new Configuration();
    // Point the client at the remote Hadoop service
    conf.set("fs.default.name", "hdfs://192.168.242.128:9000");
    // To submit to the remote JobTracker instead of the local runner, also set:
    // conf.set("mapred.job.tracker", "192.168.242.128:9001");
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCountServiceImpl.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    // HDFS input path, i.e. hdfs://192.168.242.128:9000/user/guzicheng/input
    FileInputFormat.addInputPath(job, new Path("input"));
    // HDFS output path, i.e. hdfs://192.168.242.128:9000/user/guzicheng/output
    FileOutputFormat.setOutputPath(job, new Path("output"));
    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      this.result.set(sum);
      context.write(key, this.result);
    }
  }

  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        this.word.set(itr.nextToken());
        context.write(this.word, one);
      }
    }
  }
}
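Before invoking wordCount(), the input directory must exist in HDFS, and the output directory must not exist yet (FileOutputFormat fails if it does). A minimal sketch of preparing the input data, where words.txt is a hypothetical local text file:

$hadoop fs -mkdir input
$hadoop fs -put words.txt input

When the job completes, the word counts land in output/part-r-00000 and can be read with:

$hadoop fs -cat output/part-r-00000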


III. Troubleshooting

1. Connection Failure

    Exception:

java.net.ConnectException: Call to 192.168.242.128/192.168.242.128:9000 failed on connection exception: java.net.ConnectException: Connection refused: no further information
        at org.apache.hadoop.ipc.Client.wrapException(Client.java:1142)
        ……
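
A quick way to narrow this down (a diagnostic step of my own, not from the original) is to check, on the server, which address the namenode actually bound to:

$netstat -tlnp | grep 9000

If the output shows 127.0.0.1:9000, you are hitting cause 1) below; if the port is bound to the right address but the client still cannot connect, look at cause 2).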

   Causes:

    1) Hadoop service configuration

         The Hadoop service was configured with a loopback hostname or address (localhost or 127.0.0.1), e.g.:

  <property>
    <name>fs.default.name</name>
    <value>hdfs://127.0.0.1:9000</value>
    <final>true</final>
  </property>

    2) Server firewall

        The Hadoop service ports are blocked by the firewall.


   Solutions:

    1) Fix the Hadoop service configuration

  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.242.128:9000</value>
    <final>true</final>
  </property>

    2) Open the firewall ports

        Open the following Linux ports: 9000 (namenode RPC), 9001 (JobTracker RPC, as set in mapred-site.xml), 50010 (datanode data transfer), and 50030 (JobTracker web UI).

        Edit /etc/sysconfig/iptables:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 9000 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 9001 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 50010 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 50030 -j ACCEPT

       Restart the firewall:

$service iptables restart
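
From the client machine, you can then verify that the namenode port is reachable (telnet is used here purely as a connectivity probe, my addition):

$telnet 192.168.242.128 9000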

2. File Permission Problem

    Environment: the Java client runs on Windows; the Hadoop service runs on Linux.

    Exception:

java.io.IOException: Failed to set permissions of path: \tmp\……
        at org.apache.hadoop.fs.FileUtil.checkReturnValue(FileUtil.java:691)
        ……

   Cause: a Windows file-permission issue; it does not occur on Linux.

   Solution: comment out the body of the FileUtil.checkReturnValue() method, as follows:

  ……
  private static void checkReturnValue(boolean rv, File p, FsPermission permission)
      throws IOException {
    /**
    if (!rv)
      throw new IOException(new StringBuilder().append("Failed to set permissions of path: ").append(p)
          .append(" to ").append(String.format("%04o", new Object[] { Short.valueOf(permission.toShort()) }))
          .toString());
    **/
  }
  ……
   Then recompile and repackage hadoop-1.2.1.jar and use the patched jar on the client's classpath.
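
   As an alternative to rebuilding the whole jar (a suggestion of mine, not from the original): since the JVM loads the first matching class found on the classpath, you can compile just the patched org.apache.hadoop.fs.FileUtil and place its class file ahead of hadoop-core-1.2.1.jar on the client classpath.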



