Spark Learning 4: Website Log Analysis Case Study


Tags (space-separated): Spark


  • Spark Learning 4: Website Log Analysis Case Study
    • Part 1: Create the Maven Project
    • Part 2: Create a Code Template
    • Part 3: Log Analysis Case
    • Part 4: Launch Spark on YARN

Part 1: Create the Maven Project

1. Run the Maven archetype command to create the project

mvn archetype:generate \
  -DarchetypeGroupId=org.scala-tools.archetypes \
  -DarchetypeArtifactId=scala-archetype-simple \
  -DremoteRepositories=http://scala-tools.org/repo-releases \
  -DgroupId=com.ibeifeng.bigdata.spark.app \
  -DartifactId=analyzer-logs \
  -Dversion=1.0

2. Import the Maven project into IntelliJ IDEA

Part 2: Create a Code Template


Set up an IntelliJ IDEA file template so that every new Scala class created from it starts with a SparkContext skeleton:

#if ((${PACKAGE_NAME} && ${PACKAGE_NAME} != ""))package ${PACKAGE_NAME} #end
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
#parse("File Header.java")

object ${NAME} {
  def main(args: Array[String]) {
    // create SparkConf
    val sparkConf = new SparkConf().setAppName("Test").setMaster("local[2]")
    // create SparkContext
    val sc = new SparkContext(sparkConf)

    sc.stop()
  }
}

Part 3: Log Analysis Case

1. Prepare the data
2. Write the code

ApacheAccessLog.scala

package com.ibeifeng.bigdata.spark.app

/**
 * Created by hadoop001 on 4/27/16.
 */
case class ApacheAccessLog(
    ipAddress: String,
    clientIdentd: String,
    userId: String,
    dataTime: String,
    method: String,
    endPoint: String,
    protocol: String,
    responseCode: Int,
    contentSize: Long) {
}

object ApacheAccessLog {
  // Regex for one Apache access log line, e.g.:
  // 64.242.88.10 - - [07/Mar/2004:16:05:49 -0800]
  // "GET /twiki/bin/edit/Main/Double_bounce_sender?topicparent=Main.ConfigurationVariables HTTP/1.1"
  // 401 12846
  val PARTTERN = """^(\S+) (\S+) (\S+) \[([\w:/]+\s[+|-]\d{4})\] "(\S+) (\S+) (\S+)" (\d{3}) (\d+)""".r

  // Returns true when the line matches the access log format.
  def isValidatelogLine(log: String): Boolean = {
    val res = PARTTERN.findFirstMatchIn(log)
    if (res.isEmpty) {
      false
    } else {
      true
    }
  }

  // Parses one line into an ApacheAccessLog, failing fast on malformed input.
  def parseLogLine(log: String): ApacheAccessLog = {
    val res = PARTTERN.findFirstMatchIn(log)
    if (res.isEmpty) {
      throw new RuntimeException("Cannot parse log line: " + log)
    }
    val m = res.get
    ApacheAccessLog(
      m.group(1),
      m.group(2),
      m.group(3),
      m.group(4),
      m.group(5),
      m.group(6),
      m.group(7),
      m.group(8).toInt,
      m.group(9).toLong
    )
  }
}
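
As a quick sanity check of the regex (outside of Spark), the parser can be run on the sample line from the comment above. This is a minimal sketch; the test object name is only for illustration:

package com.ibeifeng.bigdata.spark.app

object ApacheAccessLogTest {
  def main(args: Array[String]): Unit = {
    // Sample line from the comment in ApacheAccessLog (whitespace normalized
    // so that it matches PARTTERN).
    val line = "64.242.88.10 - - [07/Mar/2004:16:05:49 -0800] " +
      "\"GET /twiki/bin/edit/Main/Double_bounce_sender?topicparent=Main.ConfigurationVariables HTTP/1.1\" " +
      "401 12846"

    if (ApacheAccessLog.isValidatelogLine(line)) {
      // Prints the parsed case class: ip 64.242.88.10, response code 401,
      // content size 12846.
      println(ApacheAccessLog.parseLogLine(line))
    }
  }
}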

OrderingUtils.scala

package com.ibeifeng.bigdata.spark.app

/**
 * Created by hadoop001 on 4/28/16.
 */
object OrderingUtils {

  // Orders (key, count) pairs by the count so that RDD.top / sortBy can rank
  // keys by their associated value.
  object SecondValueOrdering extends Ordering[(String, Int)] {
    /** Returns an integer whose sign communicates how x compares to y.
      *
      * The result sign has the following meaning:
      *
      *  - negative if x < y
      *  - positive if x > y
      *  - zero otherwise (if x == y)
      */
    def compare(x: (String, Int), y: (String, Int)): Int = {
      x._2.compare(y._2)
    }
  }
}
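
The spark-submit commands in the following steps reference a driver class, com.ibeifeng.bigdata.spark.app.LogAnalyzer, whose source is not listed in the original article. Below is a minimal sketch of what such a driver could look like: it assumes the master URL is supplied via spark-submit's --master option, takes the access-log path as an optional first program argument (the default HDFS path shown is only a placeholder), and ties together ApacheAccessLog and OrderingUtils from above.

LogAnalyzer.scala

package com.ibeifeng.bigdata.spark.app

import org.apache.spark.{SparkConf, SparkContext}

object LogAnalyzer {
  def main(args: Array[String]): Unit = {
    // The master URL is expected to come from spark-submit (--master ...),
    // so it is not hard-coded here.
    val sparkConf = new SparkConf().setAppName("LogAnalyzer")
    val sc = new SparkContext(sparkConf)

    // Placeholder input path; override it with the first program argument.
    val logFile = if (args.length > 0) args(0) else "hdfs://spark.com.cn:8020/user/hadoop/access_log"

    // Keep only lines that match the Apache access log format and parse them.
    val accessLogs = sc.textFile(logFile)
      .filter(ApacheAccessLog.isValidatelogLine)
      .map(ApacheAccessLog.parseLogLine)
      .cache()

    // Average content size over all valid requests.
    val contentSizes = accessLogs.map(_.contentSize)
    println("Avg content size: " + contentSizes.sum / contentSizes.count())

    // Number of responses per HTTP status code.
    val responseCodeCounts = accessLogs
      .map(log => (log.responseCode, 1))
      .reduceByKey(_ + _)
      .collect()
    println("Response code counts: " + responseCodeCounts.mkString(", "))

    // Top 10 client IPs by request count, ordered on the count (the second
    // element of each tuple) via OrderingUtils.SecondValueOrdering.
    val topIpAddresses = accessLogs
      .map(log => (log.ipAddress, 1))
      .reduceByKey(_ + _)
      .top(10)(OrderingUtils.SecondValueOrdering)
    println("Top IP addresses: " + topIpAddresses.mkString(", "))

    sc.stop()
  }
}

With a driver like this in place, the mvn clean package step below produces analyzer-logs-1.0.jar containing the class that the submit commands reference.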

3. Package with Maven

mvn clean package


4. Submit the application

# local mode
bin/spark-submit --class com.ibeifeng.bigdata.spark.app.LogAnalyzer analyzer-logs-1.0.jar


# cluster mode: the standalone master and worker nodes must be running first
bin/spark-submit \
--master spark://spark.com.cn:7077 \
--deploy-mode cluster \
--class com.ibeifeng.bigdata.spark.app.LogAnalyzer \
analyzer-logs-1.0.jar


Part 4: Launch Spark on YARN
1) When building Spark, enable the -Pyarn profile.
2) When submitting the application, point HADOOP_CONF_DIR at the Hadoop configuration directory so Spark can read it, and submit with --master yarn, as in the commands below.

Run mode 1: yarn-client

/opt/modules/spark-1.3.0-bin-2.5.0/bin/spark-submit \
--master yarn-client \
--class com.ibeifeng.bigdata.spark.app.LogAnalyzer \
analyzer-logs-1.0.jar

Run mode 2: yarn-cluster

/opt/modules/spark-1.3.0-bin-2.5.0/bin/spark-submit \
--master yarn-cluster \
--class com.ibeifeng.bigdata.spark.app.LogAnalyzer \
analyzer-logs-1.0.jar