Oozie Installation and Basic Usage


The Oozie download itself is surprisingly small: the runnable distribution is produced by compiling the source, with dependencies fetched online during the build.

  1. Understanding safe mode

  2. How to leave safe mode:
    bin/hadoop dfsadmin -safemode leave
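
To inspect the state before forcing it off, dfsadmin also offers get and wait (both standard flags):

    bin/hadoop dfsadmin -safemode get    # prints whether safe mode is ON or OFF
    bin/hadoop dfsadmin -safemode wait   # blocks until HDFS leaves safe mode on its own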

Questions this article addresses:

1. How do you install Oozie?
2. How do you configure a workflow?
3. What is the difference between plain Spark submission and Spark on YARN?

Software versions:
Oozie 4.2.0, Hadoop 2.6.0, Spark 1.4.1, Hive 0.14, Pig 0.15.0, Maven 3.2, JDK 1.7, ZooKeeper 3.4.6, HBase 1.1.2, MySQL 5.6

Cluster layout:

node1~4.centos.com (node1~4), 192.168.0.31~34; four virtual machines, each with 1 GB of RAM and 1 CPU core

node1: NameNode, ResourceManager;

node2: SecondaryNameNode, Master, HMaster, HistoryServer, JobHistoryServer

node3: oozie-server (Tomcat), DataNode, NodeManager, HRegionServer, Worker, QuorumPeerMain

node4: DataNode, NodeManager, HRegionServer, Worker, Pig client, Hive client, HiveServer2, QuorumPeerMain, MySQL

  1. Compiling Oozie 4.2.0

1.1 Preparing the build environment

We want the build to bundle Tomcat 7 rather than Tomcat 6, which means changing the Tomcat download URL referenced by the build:

1) Download oozie-4.2.0.tar.gz and extract it to /usr/local/oozie

2) Edit the following pom.xml:

/usr/local/oozie/oozie-4.2.0/distro/pom.xml

Install Maven and add it to the environment (in /etc/profile):

export MAVEN_HOME=/opt/maven
export PATH=$PATH:$MAVEN_HOME/bin

Save and exit, then apply the change:

source /etc/profile

Edit Maven's settings.xml to use the OSChina mirror, which speeds up dependency downloads (the mirror is not complete; if some artifacts turn out to be blocked, adjust the mirrorOf entry, though that is not a concern here):

<mirror>
    <id>nexus-osc</id>
    <name>OSChina Central</name>
    <url>http://maven.oschina.net/content/groups/public/</url>
    <mirrorOf>*</mirrorOf>
</mirror>

Download Oozie's master branch (4.3.0-SNAPSHOT source) and unpack it:

cd ~/download
git clone https://github.com/apache/oozie.git
cd oozie

2.4) Edit pom.xml in the main source directory; the places to change include:

1.8
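
With the pom adjusted, the distribution is built with Oozie's mkdistro script from the source root. A sketch of the invocation (the flag values are assumptions matching the cluster above; exact property names can vary between Oozie releases):

    # Build the Oozie distro against Hadoop 2.6.0, skipping tests
    bin/mkdistro.sh -DskipTests -Dhadoop.version=2.6.0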

Add proxy-user entries for the user that will run Oozie to Hadoop's core-site.xml:

<property>
    <name>hadoop.proxyuser.[USER].hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.[USER].groups</name>
    <value>*</value>
</property>

Here [USER] must be replaced with the user that will later start the Oozie Tomcat.

To make this configuration take effect without restarting the Hadoop cluster:

hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration

1.4 Configuring Oozie
(Oozie is deployed on node3, so copy the tarball below over to node3.)

1) Locate the built tarball:

oozie-4.2.0/distro/target/oozie-4.2.0-distro.tar.gz

2) Extract it:

tar -zxf oozie-4.2.0-distro.tar.gz

3) Create a libext directory under oozie-4.2.0 and copy ext-2.2.zip into it, along with the Hadoop jars:

cp $HADOOP_HOME/share/hadoop/*/*.jar libext/
cp $HADOOP_HOME/share/hadoop/*/lib/*.jar libext/

Remove the jars that conflict between Hadoop and Tomcat (rename them inside libext):

mv servlet-api-2.5.jar servlet-api-2.5.jar.bak
mv jsp-api-2.1.jar jsp-api-2.1.jar.bak
mv jasper-compiler-5.5.23.jar jasper-compiler-5.5.23.jar.bak
mv jasper-runtime-5.5.23.jar jasper-runtime-5.5.23.jar.bak

Copy the MySQL JDBC driver into the same directory (we use MySQL instead of the default Derby):

scp mysql-connector-java-5.1.25-bin.jar node3:/usr/oozie/oozie-4.2.0/libext/

4) Configure the database connection in conf/oozie-site.xml:

<property>
    <name>oozie.service.JPAService.create.db.schema</name>
    <value>true</value>
</property>
<property>
    <name>oozie.service.JPAService.jdbc.driver</name>
    <value>com.mysql.jdbc.Driver</value>
</property>
<property>
    <name>oozie.service.JPAService.jdbc.url</name>
    <value>jdbc:mysql://master:3306/oozie?createDatabaseIfNotExist=true</value>
</property>
<property>
    <name>oozie.service.JPAService.jdbc.username</name>
    <value>hadoop</value>
</property>
<property>
    <name>oozie.service.JPAService.jdbc.password</name>
    <value>hadoop</value>
</property>
<property>
    <name>oozie.service.HadoopAccessorService.hadoop.configurations</name>
    <value>*=/usr/hadoop/hadoop-2.6.0/etc/hadoop</value>
</property>

The last property is required; without it, workflows later fail with the error "File /user/root/share/lib does not exist".
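
The createDatabaseIfNotExist=true in the JDBC URL creates the oozie database automatically, but the MySQL account itself must already exist with sufficient privileges. A minimal sketch, assuming the hadoop/hadoop credentials from oozie-site.xml above (run on the MySQL host, node4):

    # Create the account Oozie will connect as (hypothetical setup step)
    mysql -u root -p <<'SQL'
    CREATE USER 'hadoop'@'%' IDENTIFIED BY 'hadoop';
    GRANT ALL PRIVILEGES ON oozie.* TO 'hadoop'@'%';
    FLUSH PRIVILEGES;
    SQL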

5) Initialization before the first start

a. Build the war package:

bin/oozie-setup.sh prepare-war

b. Initialize the database:

bin/ooziedb.sh create -sqlfile oozie.sql -run

c. In oozie-4.2.0/oozie-server/conf/server.xml, comment out the following line:

<!--<Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />-->

d. Upload the sharelib jars to HDFS:

bin/oozie-setup.sh sharelib create -fs hdfs://node1:8020

1.5 Starting the server

bin/oozied.sh start
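
Once the server is up it can be sanity-checked from the command line (a sketch; -status is the standard health check, and -shareliblist, available in Oozie 4.x, verifies the sharelib uploaded in step d):

    bin/oozie admin -oozie http://node3:11000/oozie -status
    bin/oozie admin -oozie http://node3:11000/oozie -shareliblist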

  2. Workflow examples

The data file is bank.csv, already uploaded as hdfs://node1:8020/user/root/bank.csv; it can be downloaded from the page http://zeppelin-project.org/docs/tutorial/tutorial.html.
(When running the Hive and Pig jobs, delete the first row of the data, which is the header.)

All operations below run as user hadoop by default; for a different user, the corresponding directories may need adjusting.

Set the environment variable: export OOZIE_URL=http://node3:11000/oozie

2.1 MapReduce workflow

  1. job.properties:

oozie.wf.application.path=hdfs://node1:8020/user/root/workflow/mr_demo/wf
# Hadoop ResourceManager
jobTracker=node1:8032
# Hadoop fs.default.name
nameNode=hdfs://node1:8020/
# Hadoop mapred.queue.name
queueName=default
  2. workflow.xml

<workflow-app xmlns="uri:oozie:workflow:0.2" name="map-reduce-wf">
    <start to="mr-node"/>
    <action name="mr-node">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="${nameNode}/user/${wf:user()}/workflow/mr_demo/output"/>
            </prepare>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
                <property>
                    <name>mapreduce.mapper.class</name>
                    <value>org.apache.hadoop.examples.WordCount$TokenizerMapper</value>
                </property>
                <property>
                    <name>mapreduce.reducer.class</name>
                    <value>org.apache.hadoop.examples.WordCount$IntSumReducer</value>
                </property>
                <property>
                    <name>mapred.map.tasks</name>
                    <value>1</value>
                </property>
                <property>
                    <name>mapred.input.dir</name>
                    <value>/user/${wf:user()}/bank.csv</value>
                </property>
                <property>
                    <name>mapred.output.dir</name>
                    <value>/user/${wf:user()}/workflow/mr_demo/output</value>
                </property>
            </configuration>
        </map-reduce>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Map/Reduce failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
  3. Running:

1) Copy workflow.xml into the HDFS directory hdfs://node1:8020/user/root/workflow/mr_demo/wf/ (see the shell sketch after this list);

2) On node3 (which serves as both the Oozie server and client), run bin/oozie job -config job.properties -run to submit the workflow; submission returns a jobId, for example:

0000004-160123180442501-oozie-root-W

3) Run bin/oozie job -info 0000004-160123180442501-oozie-root-W to check the workflow's status;

4) When the workflow finishes, check its final status and inspect the output in the corresponding directory.
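
The whole cycle in shell form (a sketch using the paths from job.properties above):

    # Upload the workflow definition
    hdfs dfs -mkdir -p /user/root/workflow/mr_demo/wf
    hdfs dfs -put workflow.xml /user/root/workflow/mr_demo/wf/

    # Submit, then poll until the workflow finishes
    bin/oozie job -config job.properties -run    # prints the jobId
    bin/oozie job -info 0000004-160123180442501-oozie-root-W

    # Inspect the result
    hdfs dfs -cat /user/root/workflow/mr_demo/output/part-*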

2.2 Pig workflow

  1. job.properties

oozie.wf.application.path=hdfs://node1:8020/user/root/workflow/pig_demo/wf
# required for Pig workflows
oozie.use.system.libpath=true
# Hadoop ResourceManager
resourceManager=node1:8032
# Hadoop fs.default.name
nameNode=hdfs://node1:8020/
# Hadoop mapred.queue.name
queueName=default
  2. workflow.xml

<workflow-app xmlns="uri:oozie:workflow:0.2" name="whitehouse-workflow">
    <start to="transform_job"/>
    <action name="transform_job">
        <pig>
            <job-tracker>${resourceManager}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="/user/root/workflow/pig_demo/output"/>
            </prepare>
            <script>transform_job.pig</script>
        </pig>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Job failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>

  3. transform_job.pig, the script used by the Pig job

bank_data = LOAD '/user/root/bank.csv' USING PigStorage(';') AS
    (age:int, job:chararray, marital:chararray, education:chararray,
     default:chararray, balance:int, housing:chararray, loan:chararray,
     contact:chararray, day:int, month:chararray, duration:int, campaign:int,
     pdays:int, previous:int, poutcom:chararray, y:chararray);
age_gt_30 = FILTER bank_data BY age >= 30;
STORE age_gt_30 INTO '/user/root/workflow/pig_demo/output' USING PigStorage(',');
  4. Running

1) Copy transform_job.pig and workflow.xml into the hdfs://node1:8020/user/root/workflow/pig_demo/wf/ directory (see the sketch after this list);
2) Run bin/oozie job -config job.properties -run;
3) Run bin/oozie job -info jobId to check the job's progress, or browse all jobs at node3:11000 in a browser.
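
The upload step in shell form (a sketch under the same path assumptions):

    hdfs dfs -mkdir -p /user/root/workflow/pig_demo/wf
    hdfs dfs -put transform_job.pig workflow.xml /user/root/workflow/pig_demo/wf/
    bin/oozie job -config job.properties -run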

2.3 Hive workflow

Note: after the Hive job completes, bank.csv is gone (LOAD DATA INPATH moves it into Hive's warehouse directory), so re-upload the file before running anything else.

  1. job.properties

nameNode=hdfs://node1:8020
jobTracker=node1:8032
queueName=default
maxAge=30
input=/user/root/bank.csv
output=/user/root/workflow/hive_demo/output
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/${user.name}/workflow/hive_demo/wf
  2. workflow.xml

<workflow-app xmlns="uri:oozie:workflow:0.2" name="hive-wf">
    <start to="hive-node"/>
    <action name="hive-node">
        <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="${output}/hive"/>
                <mkdir path="${output}"/>
            </prepare>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <script>script.hive</script>
            <param>INPUT=${input}</param>
            <param>OUTPUT=${output}/hive</param>
            <param>maxAge=${maxAge}</param>
        </hive>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Hive failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
  3. script.hive, the script used by the Hive job

DROP TABLE IF EXISTS bank;
CREATE TABLE bank(
    age int,
    job string,
    marital string, education string,
    default string, balance int, housing string, loan string,
    contact string, day int, month string, duration int, campaign int,
    pdays int, previous int, poutcom string, y string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\073'
STORED AS TEXTFILE;
LOAD DATA INPATH '${INPUT}' INTO TABLE bank;
INSERT OVERWRITE DIRECTORY '${OUTPUT}' SELECT * FROM bank WHERE age > '${maxAge}';

Note: '\073' is the octal escape for the semicolon (ASCII 59); a literal ';' cannot be used as the delimiter here because Hive would treat it as a statement terminator.

  4. Running: same as above.

2.4 Hive2 workflow

  1. job.properties

nameNode=hdfs://node1:8020
jobTracker=node1:8032
queueName=default
# needed when going through HiveServer2
jdbcURL=jdbc:hive2://node4:10000/default
maxAge=30
input=/user/root/bank.csv
output=/user/root/workflow/hive2_demo/output
oozie.use.system.libpath=true
oozie.wf.application.path=${nameNode}/user/${user.name}/workflow/hive2_demo/wf
  2. workflow.xml

<workflow-app xmlns="uri:oozie:workflow:0.5" name="hive2-wf">
    <start to="hive2-node"/>
    <action name="hive2-node">
        <hive2 xmlns="uri:oozie:hive2-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="${output}/hive"/>
                <mkdir path="${output}"/>
            </prepare>
            <configuration>
                <property>
                    <name>mapred.job.queue.name</name>
                    <value>${queueName}</value>
                </property>
            </configuration>
            <jdbc-url>${jdbcURL}</jdbc-url>
            <script>script2.hive</script>
            <param>INPUT=${input}</param>
            <param>OUTPUT=${output}/hive</param>
            <param>maxAge=${maxAge}</param>
        </hive2>
        <ok to="end"/>
        <error to="fail"/>
    </action>
    <kill name="fail">
        <message>Hive2 failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
  3. script2.hive, the script used by the Hive2 job:

DROP TABLE IF EXISTS bank2;
CREATE TABLE bank2(
    age int,
    job string,
    marital string, education string,
    default string, balance int, housing string, loan string,
    contact string, day int, month string, duration int, campaign int,
    pdays int, previous int, poutcom string, y string
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\073'
STORED AS TEXTFILE;
LOAD DATA INPATH '${INPUT}' INTO TABLE bank2;
INSERT OVERWRITE DIRECTORY '${OUTPUT}' SELECT * FROM bank2 WHERE age > '${maxAge}';
  4. Running: same as above (a connectivity check is sketched below).
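
Since the hive2 action connects to HiveServer2 over JDBC, it helps to verify the jdbcURL from job.properties before submitting; a quick sketch using Beeline (login user assumed to be hadoop):

    # Connect with the same URL the hive2 action will use
    beeline -u jdbc:hive2://node4:10000/default -n hadoop -e 'show tables;'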

2.5 Spark workflow

  1. job.properties:

nameNode=hdfs://node1:8020
jobTracker=node1:8032
#master=spark://node2:7077
master=spark://node2:6066
sparkMode=cluster
queueName=default
oozie.use.system.libpath=true
input=/user/root/bank.csv
output=/user/root/workflow/spark_demo/output
# the jar file must be local
jarPath=${nameNode}/user/root/workflow/spark_demo/lib/oozie-examples.jar
oozie.wf.application.path=${nameNode}/user/${user.name}/workflow/spark_demo/wf

Because sparkMode is cluster, the master URL must use the REST submission port 6066 shown above rather than 7077.

Using client for sparkMode was not tested successfully.

  2. workflow.xml

<workflow-app xmlns='uri:oozie:workflow:0.5' name='SparkFileCopy'>
    <start to='spark-node' />
    <action name='spark-node'>
        <spark xmlns="uri:oozie:spark-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="${output}"/>
            </prepare>
            <master>${master}</master>
            <mode>${sparkMode}</mode>
            <name>Spark-FileCopy</name>
            <class>org.apache.oozie.example.SparkFileCopy</class>
            <jar>${jarPath}</jar>
            <arg>${input}</arg>
            <arg>${output}</arg>
        </spark>
        <ok to="end" />
        <error to="fail" />
    </action>
    <kill name="fail">
        <message>Workflow failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name='end' />
</workflow-app>
  3. Running:

1) The oozie-examples.jar used here comes from the examples/apps/spark/lib directory inside the extracted oozie-examples.tar.gz;

2) Upload oozie-examples.jar to hdfs://node1:8020/user/root/workflow/spark_demo/lib/oozie-examples.jar and workflow.xml to hdfs://node1:8020/user/root/workflow/spark_demo/wf/workflow.xml;

3) Run bin/oozie job -config job.properties -run; a roughly equivalent manual submission is sketched below.
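
For intuition, the spark-node action above amounts to roughly the following spark-submit against the standalone cluster's REST port (a sketch; it assumes a local copy of oozie-examples.jar, whereas the action pulls it from HDFS):

    spark-submit \
      --master spark://node2:6066 \
      --deploy-mode cluster \
      --class org.apache.oozie.example.SparkFileCopy \
      oozie-examples.jar \
      /user/root/bank.csv \
      /user/root/workflow/spark_demo/output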

  4. Related issues:

1) Submitting this way launches the job through YARN, which then hands it to the Spark cluster to run; the Spark cluster does not run it directly. (The original post illustrates the following with screenshots.)

First, the 8088 UI shows the application that YARN launched.

The Spark monitoring UI then shows the job as well.

But the timestamps do not line up, which the logs explain: after connecting to YARN's ResourceManager, the launcher connects straight to the Spark master and submits the job, at which point the YARN application is immediately marked as succeeded and YARN returns.

The Spark logs show matching timestamps.

Finally the output file is saved and the driver shuts down.

2.6 Spark on YARN workflow

Following the hints from the official documentation:

  1. job.properties:

nameNode=hdfs://node1:8020
jobTracker=node1:8032
#master=spark://node2:7077
#master=spark://node2:6066
master=yarn-cluster
#sparkMode=cluster
queueName=default
oozie.use.system.libpath=true
input=/user/root/bank.csv
output=/user/root/workflow/sparkonyarn_demo/output
jarPath=${nameNode}/user/root/workflow/sparkonyarn_demo/lib/oozie-examples.jar
oozie.wf.application.path=${nameNode}/user/${user.name}/workflow/sparkonyarn_demo
  2. workflow.xml:

<workflow-app xmlns='uri:oozie:workflow:0.5' name='SparkFileCopy_on_yarn'>
    <start to='spark-node' />
    <action name='spark-node'>
        <spark xmlns="uri:oozie:spark-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="${output}"/>
            </prepare>
            <master>${master}</master>
            <name>Spark-FileCopy-on-yarn</name>
            <class>org.apache.oozie.example.SparkFileCopy</class>
            <jar>${jarPath}</jar>
            <spark-opts>--conf spark.yarn.historyServer.address=http://node2:18080 --conf spark.eventLog.dir=hdfs://node1:8020/spark-log --conf spark.eventLog.enabled=true</spark-opts>
            <arg>${input}</arg>
            <arg>${output}</arg>
        </spark>
        <ok to="end" />
        <error to="fail" />
    </action>
    <kill name="fail">
        <message>Workflow failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name='end' />
</workflow-app>
  3. Running:

1) Environment preparation: copy workflow.xml to hdfs://node1:8020/user/root/workflow/sparkonyarn_demo/workflow.xml;

2) Copy oozie-examples.jar to hdfs://node1:8020/user/root/workflow/sparkonyarn_demo/lib/oozie-examples.jar;

3) Copy $SPARK_HOME/lib/spark-assembly-1.4.1-hadoop2.6.0.jar to hdfs://node1:8020/user/root/workflow/sparkonyarn_demo/lib/spark-assembly-1.4.1-hadoop2.6.0.jar;

4) bin/oozie job -config job.properties -run

5) Check the job status (commands sketched below):
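
Status can be checked from either the Oozie side or the YARN side (a sketch; <jobId> is whatever -run printed):

    # Oozie's view of the workflow
    bin/oozie job -info <jobId>
    # YARN's view: in yarn-cluster mode both the Oozie launcher and the
    # Spark job itself appear as applications
    yarn application -list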

  4. Related issues

1) The difference between plain Spark submission and Spark on YARN:

Spark on YARN also uses YARN to submit the job, but there is no job on the Spark cluster at all; everything runs on YARN. The difference shows up in the logs and in the 8088 UI. (Screenshots omitted.)

With plain Spark submission, job 0000003-160123180442501-oozie-root-W corresponds to just one YARN application from start to finish, plus one Spark job (visible at node2:8080) whose timestamps match it.

With Spark on YARN, job 0000009-160123180442501-oozie-root-W is actually composed of two YARN applications.

Oozie's own job monitoring confirms this picture.

So with plain Spark submission, YARN launches a task, the Spark cluster runs the actual job, and the task then finishes; the Spark cluster must be up (and the YARN cluster too).

With Spark on YARN, YARN launches task A, which from inside itself starts another YARN task B; when B completes, control returns to A, and A then finishes. No Spark cluster needs to be running (the diagram in the original post makes this clear).

Original post: http://blog.csdn.net/fansy1990/article/details/50570518
