Errors encountered while using Storm to process log messages from a message queue


      Log data is extremely valuable to a system's designers. Taking nginx's access.log as an example: by analyzing it you can find out which pages on your site get the most traffic, and you can count the most frequent client IPs to help judge whether you are being hit by malicious requests. In short, there is a lot of useful information in there. Here I read the log file line by line, push each line into an ActiveMQ message queue, and then have Storm analyze the log records:

     Prerequisites: ActiveMQ is already installed and running on Linux. I use Storm 0.9.3 deployed as a single-node cluster, with a single ZooKeeper node as well. If you only want the error fixes, skip straight to the end of the article.

     Step 1: put the log lines into the message queue:

import org.apache.activemq.ActiveMQConnectionFactory;

import javax.jms.*;
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class ActiveMQ {

    public static void main(String[] args) throws IOException {
        ConnectionFactory connectionFactory = new ActiveMQConnectionFactory(
                ActiveMQConnectionFactory.DEFAULT_USER,
                ActiveMQConnectionFactory.DEFAULT_PASSWORD,
                "tcp://192.168.84.23:61616");
        try {
            Connection connection = connectionFactory.createConnection();
            connection.start();
            // Transacted session: messages become visible to consumers only after commit().
            Session session = connection.createSession(Boolean.TRUE, Session.AUTO_ACKNOWLEDGE);
            Destination destination = session.createQueue("LogQueue");
            MessageProducer producer = session.createProducer(destination);
            producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
            // Read the (simulated) nginx log file line by line and send each line as a TextMessage.
            FileReader fr = new FileReader("C:\\Users\\administrator\\Desktop\\abc.txt");
            BufferedReader br = new BufferedReader(fr);
            String line;
            while ((line = br.readLine()) != null) {
                TextMessage textMessage = session.createTextMessage(line);
                producer.send(textMessage);
            }
            br.close();
            fr.close();
            session.commit();
            connection.close();
        } catch (JMSException e) {
            e.printStackTrace();
        }
    }
}
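Before wiring up Storm, it can help to verify that the messages actually landed on the queue. Below is a minimal sketch of my own (not part of the original post) that counts pending messages with a standard JMS QueueBrowser, reusing the broker URL and queue name from above:

import org.apache.activemq.ActiveMQConnectionFactory;

import javax.jms.*;
import java.util.Enumeration;

public class QueueCheck {

    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://192.168.84.23:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("LogQueue");
        // A QueueBrowser inspects pending messages without consuming them.
        QueueBrowser browser = session.createBrowser(queue);
        int count = 0;
        Enumeration<?> messages = browser.getEnumeration();
        while (messages.hasMoreElements()) {
            messages.nextElement();
            count++;
        }
        System.out.println("pending messages on LogQueue: " + count);
        browser.close();
        connection.close();
    }
}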

    Step 2: Step 1 has pushed the log lines into ActiveMQ. Next we read the log records back out of ActiveMQ, analyze them, and output the result. Here the analysis is a page-view (PV) count per requested URL.

Custom Storm processing classes

(1) A topology gets its input stream from a spout. Here the spout reads the log data line by line from ActiveMQ and emits each line to the downstream bolt. The spout implements the IRichSpout interface and its methods:

import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.IRichSpout;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;
import org.apache.activemq.ActiveMQConnectionFactory;

import javax.jms.*;
import java.util.Map;

public class LogReader implements IRichSpout {

    private static final long serialVersionUID = 1L;
    private TopologyContext context;
    private SpoutOutputCollector collector;
    private ConnectionFactory connectionFactory;
    private Connection connection;
    private Session session;
    private Destination destination;
    private MessageConsumer consumer;

    public void open(Map map, TopologyContext topologyContext, SpoutOutputCollector spoutOutputCollector) {
        this.context = topologyContext;
        this.collector = spoutOutputCollector;
        this.connectionFactory = new ActiveMQConnectionFactory(
                ActiveMQConnectionFactory.DEFAULT_USER,
                ActiveMQConnectionFactory.DEFAULT_PASSWORD,
                "tcp://192.168.84.23:61616");
        try {
            connection = connectionFactory.createConnection();
            connection.start();
            // Non-transacted session; the broker auto-acknowledges consumed messages.
            session = connection.createSession(Boolean.FALSE, Session.AUTO_ACKNOWLEDGE);
            destination = session.createQueue("LogQueue");
            consumer = session.createConsumer(destination);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void close() {
    }

    public void activate() {
    }

    public void deactivate() {
    }

    public void nextTuple() {
        try {
            // Block up to 100 s for the next log line; receive() returns null on timeout.
            TextMessage message = (TextMessage) consumer.receive(100000);
            if (message != null) {
                this.collector.emit(new Values(message.getText()));
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void ack(Object o) {
    }

    public void fail(Object o) {
    }

    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
        outputFieldsDeclarer.declare(new Fields("logline"));
    }

    public Map<String, Object> getComponentConfiguration() {
        return null;
    }
}
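As written, the spout emits unanchored tuples, so Storm cannot replay a line if a downstream bolt fails. A minimal sketch of what a reliable variant could look like (my own addition, not from the original post; it assumes the session is switched to Session.CLIENT_ACKNOWLEDGE and that a field `private Map<String, TextMessage> pending` is added to the class):

// Hypothetical variant of nextTuple()/ack(): emit with a message ID so Storm
// calls ack()/fail() for the tuple, and acknowledge the JMS message only once
// Storm has fully processed it.
public void nextTuple() {
    try {
        TextMessage message = (TextMessage) consumer.receive(1000);
        if (message != null) {
            String msgId = message.getJMSMessageID();
            pending.put(msgId, message);
            // Anchored emit: Storm tracks this tuple under msgId.
            collector.emit(new Values(message.getText()), msgId);
        }
    } catch (JMSException e) {
        e.printStackTrace();
    }
}

public void ack(Object msgId) {
    TextMessage message = pending.remove(msgId);
    try {
        // In CLIENT_ACKNOWLEDGE mode this acknowledges the session's
        // consumed messages up to and including this one.
        if (message != null) message.acknowledge();
    } catch (JMSException e) {
        e.printStackTrace();
    }
}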

(2) Next, define a bolt that receives the logline tuples emitted by the spout and parses them:

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.IRichBolt;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

import java.util.Map;

public class LogAnalysis implements IRichBolt {

    private static final long serialVersionUID = 1L;
    private OutputCollector collector;

    public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
        this.collector = outputCollector;
    }

    public void execute(Tuple tuple) {
        String logLine = tuple.getString(0);
        // Log fields are space-separated; field 3 is the requested URL.
        String[] input_fields = logLine.split(" ");
        collector.emit(new Values(input_fields[3])); // emit the request URL
    }

    public void cleanup() {
    }

    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
        outputFieldsDeclarer.declare(new Fields("page"));
    }

    public Map<String, Object> getComponentConfiguration() {
        return null;
    }
}
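One caveat: the split above will throw an ArrayIndexOutOfBoundsException on any malformed line and kill the worker. A more defensive execute() could look like this (my own sketch, assuming the same field layout):

public void execute(Tuple tuple) {
    String logLine = tuple.getString(0);
    String[] fields = logLine.split(" ");
    // Skip lines with fewer than 4 space-separated fields instead of
    // letting an ArrayIndexOutOfBoundsException kill the worker.
    if (fields.length > 3) {
        collector.emit(new Values(fields[3]));
    }
}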

(3) The downstream bolt receives the page field emitted by LogAnalysis, counts the number of visits per page, and could also write the counts to a storage system:

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.IRichBolt;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.tuple.Tuple;

import java.util.Map;

public class PageViewCounter implements IRichBolt {

    private static final long serialVersionUID = 1L;

    @Override
    public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
    }

    @Override
    public void execute(Tuple tuple) {
        // Count PVs here and persist them; for now just print the page.
        System.out.println(tuple.getValue(0));
    }

    @Override
    public void cleanup() {
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer) {
    }

    @Override
    public Map<String, Object> getComponentConfiguration() {
        return null;
    }
}
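This bolt only prints each page; the counting itself is left as a comment. Below is a minimal sketch of my own of what the per-page counting could look like with an in-memory map (requires a java.util.HashMap import). Note that MainJob uses shuffleGrouping with two PageViewCounter tasks, so for local counts to be correct you would switch the topology to fieldsGrouping("log-analysis", new Fields("page")), which routes all tuples for a given page to the same task:

private Map<String, Long> counts;

@Override
public void prepare(Map map, TopologyContext topologyContext, OutputCollector outputCollector) {
    counts = new HashMap<String, Long>();
}

@Override
public void execute(Tuple tuple) {
    String page = tuple.getString(0);
    Long current = counts.get(page);
    counts.put(page, current == null ? 1L : current + 1);
    System.out.println(page + " -> " + counts.get(page));
}

@Override
public void cleanup() {
    // Called on shutdown in local mode; a real deployment would persist
    // the counts (e.g. to Redis or a database) instead of printing them.
    for (Map.Entry<String, Long> e : counts.entrySet()) {
        System.out.println(e.getKey() + ": " + e.getValue());
    }
}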

(4) With the spout and bolts written, the next step is to wire them together and submit the topology:

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.topology.TopologyBuilder;

public class MainJob {

    public static void main(String[] args) {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("log-reader", new LogReader());
        builder.setBolt("log-analysis", new LogAnalysis()).shuffleGrouping("log-reader");
        builder.setBolt("pageview-counter", new PageViewCounter(), 2).shuffleGrouping("log-analysis");
        Config config = new Config();
        config.setDebug(false);
        config.put(Config.TOPOLOGY_MAX_SPOUT_PENDING, 1);
        // Run the topology in an in-process local cluster.
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("log-process-toplogie", config, builder.createTopology());
    }
}
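Note that LocalCluster runs the topology inside the submitting JVM, which is convenient for testing (and is why the output appears on the console below). To hand the topology over to the single-node Storm cluster set up earlier, the usual pattern is StormSubmitter; a sketch of that alternative main() (my own addition, not from the original post):

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.TopologyBuilder;

public class MainJob {

    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("log-reader", new LogReader());
        builder.setBolt("log-analysis", new LogAnalysis()).shuffleGrouping("log-reader");
        builder.setBolt("pageview-counter", new PageViewCounter(), 2).shuffleGrouping("log-analysis");
        Config config = new Config();
        // Submit to the real cluster instead of an in-process LocalCluster;
        // worker logs then appear under Storm's logs directory, not the console.
        StormSubmitter.submitTopology("log-process-toplogie", config, builder.createTopology());
    }
}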

        Step 3: After the steps above, package the four classes LogReader.java, LogAnalysis.java, PageViewCounter.java, and MainJob.java into a jar (Eclipse can export one directly). Then make sure Storm is running in the background; to launch it in the background, append these arguments at startup:

# ./storm nimbus > /dev/null 2>&1 &

# ./storm supervisor > /dev/null 2>&1 &

# ./storm ui > /dev/null 2>&1 &

       Also note that the Storm UI (the web management console) listens on port 8080 by default, so make sure that port is free and not in conflict (if it is, you can change ui.port in conf/storm.yaml).

       Then run # ./storm jar /home/lvyuan/temp/logprocess.jar MainJob from the console. My jar is in the directory /home/lvyuan/temp and the main class is MainJob. Once it runs, the console prints the request URLs from the log file.

      The abc.txt I pushed onto the queue is shown below. I generated it in bulk with a program that simulates nginx logs, so it differs slightly from the real format. (The fields in each line appear to be: client IP, response time, HTTP method, requested URL, referrer, status code, and response size, which is why input_fields[3] is the request URL.) If you are just testing and have no real log data at hand, use a program like this to generate some; the demo program is in my resources.

125.119.222.39 599 GET www.xxx.com/publish.htm www.taobao.com 301 490
124.109.222.29 21 POST www.xxx.com/publish.htm www.google.com 500 490
125.119.222.39 67 GET www.xxx.com/userinfo.htm www.taobao.com 301 490
125.19.22.29 43 GET www.xxx.com/index.htm www.sina.com 404 80834
125.119.222.39 21 POST www.xxx.com/index.htm www.taobao.com 404 439274
125.119.222.39 760 GET www.xxx.com/index.htm www.google.com 500 432943
126.119.222.29 43 POST www.xxx.com/index.htm www.qq.com 404 48243
174.119.232.29 599 GET www.xxx.com/list.htm www.taobao.com 200 432943
126.119.222.29 760 POST www.xxx.com/list.htm www.sina.com 500 439274
126.119.222.29 21 POST www.xxx.com/index.htm www.google.com 404 490
126.119.222.29 230 GET www.xxx.com/publish.htm www.taobao.com 302 80834
124.119.202.29 12 GET www.xxx.com/detail.htm www.sina.com 500 48243
124.119.202.29 12 POST www.xxx.com/index.htm www.google.com 404 432004

Now for the errors encountered while running the job:

(1) Since the topology reads the logs from ActiveMQ, you must copy the relevant ActiveMQ jars into Storm's lib directory beforehand; activemq-core-5.7.0.jar and javax.jms.jar are both mandatory.

(2) The program threw no errors, and the ActiveMQ web console showed the messages as consumed, yet the console never printed the request URLs from the log file. After some digging I found that I had originally produced the messages with ObjectMessage message = session.createObjectMessage(), while my custom Storm spout received them as TextMessage. The types did not match, hence no output and no error either. Switching the producer back to TextMessage message = session.createTextMessage(...) fixed it; the message types must match on both ends.
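To make such a mismatch visible instead of silent, the spout's nextTuple() could check the runtime type before casting. A hedged sketch of my own:

public void nextTuple() {
    try {
        Message message = consumer.receive(1000);
        if (message == null) {
            return; // timed out, no message available
        }
        if (message instanceof TextMessage) {
            collector.emit(new Values(((TextMessage) message).getText()));
        } else {
            // Surface the type mismatch instead of failing silently.
            System.err.println("unexpected message type: " + message.getClass().getName());
        }
    } catch (JMSException e) {
        e.printStackTrace();
    }
}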

(3) Runtime error: Caused by: java.lang.ClassNotFoundException: javax.management.j2ee.statistics.Stats

 Fix: put management-api-1.1-rev-1.jar into Storm's lib directory. The jar may also be named javax.management.j2ee-api-1.1.jar; in any case I used the former name, which is available in the Maven Central repository under these coordinates:

<dependency>
    <groupId>javax.management.j2ee</groupId>
    <artifactId>management-api</artifactId>
    <version>1.1-rev-1</version>
</dependency>


The custom Storm processing classes follow the Storm data-analysis example from the final chapter of 《大型分布式网站架构设计与实践》 (Large-Scale Distributed Website Architecture: Design and Practice).
