Storm: Combining Multiple Stream Operations


Please credit the source when reprinting: http://blog.csdn.net/l1028386804/article/details/78447264

Storm supports stream aggregation: data emitted by multiple components can be merged into a single component for unified processing. You can aggregate several Spouts into one Bolt (many-to-one and many-to-many from Spout to Bolt), and likewise aggregate several Bolts into another Bolt (many-to-one and many-to-many from Bolt to Bolt). In practice this involves two main operations: one resembles a fork in a workflow, the other resembles a join. Below we build an example to demonstrate both; the real-time processing flow is shown in the figure below:

[Figure: three ProduceRecordSpout instances feed a shared SplitRecordBolt; its output flows to DistributeWordByTypeBolt, which fans out over three named streams to three SaveDataBolt instances]
We expect the real-time flow described in the figure above to be processed as follows:

  • There are 3 kinds of data: numeric strings (NUM), alphabetic strings (STR), and special-symbol strings (SIG)
  • Each ProduceRecordSpout is responsible for producing one of the 3 kinds of data listed above
  • All records are strings containing spaces, and the data emitted by all 3 kinds of ProduceRecordSpout must go through the same logic: splitting the string on whitespace (see the small split example after this list)
  • A word-distributing component, DistributeWordByTypeBolt, receives all of the words (together with their type information) and routes each type of word to a designated storage component
  • SaveDataBolt stores the processed words; each word type has its own storage logic, so 3 kinds of SaveDataBolt must be configured
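To illustrate the whitespace split the whole pipeline relies on, here is a standalone sketch (not part of the topology code) showing that Java's `String.split("\\s+")` collapses runs of spaces into single word boundaries:

```java
public class SplitDemo {
    public static void main(String[] args) {
        // "\\s+" matches one or more whitespace characters, so
        // consecutive spaces never produce empty words.
        for (String word : "80966   31".split("\\s+")) {
            System.out.println(word); // prints "80966", then "31"
        }
    }
}
```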

The Spouts fall into 3 kinds, each emitting a different type of string. A Type constants interface is defined to distinguish the three types:

```java
package com.lyz.storm.batch.type;

/**
 * Constants for the three record types.
 * @author liuyazhuang
 */
public interface Type {
    String NUMBER = "NUMBER";
    String STRING = "STRING";
    String SIGN = "SIGN";
}
```
First, let's look at how our Topology is created; the code is as follows:
```java
package com.lyz.storm.batch.main;

import com.lyz.storm.batch.bolt.DistributeWordByTypeBolt;
import com.lyz.storm.batch.bolt.SaveDataBolt;
import com.lyz.storm.batch.bolt.SplitRecordBolt;
import com.lyz.storm.batch.spout.ProduceRecordSpout;
import com.lyz.storm.batch.type.Type;

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.StormSubmitter;
import backtype.storm.generated.AlreadyAliveException;
import backtype.storm.generated.InvalidTopologyException;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.tuple.Fields;

/**
 * Entry point: builds and submits the topology.
 * @author liuyazhuang
 */
public class MultiStreamsWordDistributionTopology {

    public static void main(String[] args) throws AlreadyAliveException, InvalidTopologyException, InterruptedException {
        // configure & build topology
        TopologyBuilder builder = new TopologyBuilder();

        // configure 3 spouts
        builder.setSpout("spout-number", new ProduceRecordSpout(Type.NUMBER, new String[] {"111 222 333", "80966 31"}), 1);
        builder.setSpout("spout-string", new ProduceRecordSpout(Type.STRING, new String[] {"abc ddd fasko", "hello the word"}), 1);
        builder.setSpout("spout-sign", new ProduceRecordSpout(Type.SIGN, new String[] {"++ -*% *** @@", "{+-} ^#######"}), 1);

        // configure splitter bolt: aggregates all 3 spout streams
        builder.setBolt("bolt-splitter", new SplitRecordBolt(), 2)
                .shuffleGrouping("spout-number")
                .shuffleGrouping("spout-string")
                .shuffleGrouping("spout-sign");

        // configure distributor bolt: partitioned by the "type" field
        builder.setBolt("bolt-distributor", new DistributeWordByTypeBolt(), 6)
                .fieldsGrouping("bolt-splitter", new Fields("type"));

        // configure 3 saver bolts, each subscribing to one named stream
        builder.setBolt("bolt-number-saver", new SaveDataBolt(Type.NUMBER), 3)
                .shuffleGrouping("bolt-distributor", "stream-number-saver");
        builder.setBolt("bolt-string-saver", new SaveDataBolt(Type.STRING), 3)
                .shuffleGrouping("bolt-distributor", "stream-string-saver");
        builder.setBolt("bolt-sign-saver", new SaveDataBolt(Type.SIGN), 3)
                .shuffleGrouping("bolt-distributor", "stream-sign-saver");

        // submit topology
        Config conf = new Config();
        String name = MultiStreamsWordDistributionTopology.class.getSimpleName();
        if (args != null && args.length > 0) {
            String nimbus = args[0];
            conf.put(Config.NIMBUS_HOST, nimbus);
            conf.setNumWorkers(3);
            StormSubmitter.submitTopologyWithProgressBar(name, conf, builder.createTopology());
        } else {
            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology(name, conf, builder.createTopology());
            Thread.sleep(60 * 60 * 1000);
            cluster.shutdown();
        }
    }
}
```

A single SplitRecordBolt layer receives data from the 3 different kinds of ProduceRecordSpout; this is a multi-Spout stream aggregation. SplitRecordBolt sends the processed data on to DistributeWordByTypeBolt, which dispatches each record according to its type. We use a fieldsGrouping here, so the data SplitRecordBolt emits is routed by type: each DistributeWordByTypeBolt task (Task) is guaranteed to receive tuples of only one type. Using shuffleGrouping directly would also work; each Task could then receive data of any type, with the flow still controlled inside DistributeWordByTypeBolt. DistributeWordByTypeBolt declares multiple streams and, grouped by type, sends words to the corresponding kind of SaveDataBolt.
Now let's look at each component's implementation:

  • The ProduceRecordSpout component

From the single ProduceRecordSpout class we define, we can create 3 different ProduceRecordSpout instances, each responsible for producing one specific type of data. The implementation is as follows:

```java
package com.lyz.storm.batch.spout;

import java.util.List;
import java.util.Map;
import java.util.Random;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;
import backtype.storm.utils.Utils;

/**
 * Produces source records and emits them continuously.
 * @author liuyazhuang
 */
public class ProduceRecordSpout extends BaseRichSpout {

    private static final long serialVersionUID = 1L;
    private static final Log LOG = LogFactory.getLog(ProduceRecordSpout.class);

    private SpoutOutputCollector collector;
    private Random rand;
    private String[] recordLines;
    private String type;

    public ProduceRecordSpout(String type, String[] lines) {
        this.type = type;
        recordLines = lines;
    }

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
        rand = new Random();
    }

    @Override
    public void nextTuple() {
        Utils.sleep(500);
        String record = recordLines[rand.nextInt(recordLines.length)];
        List<Object> values = new Values(type, record);
        // the tuple itself doubles as the message id, so acks can be tracked
        collector.emit(values, values);
        LOG.info("Record emitted: type=" + type + ", record=" + record);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("type", "record"));
    }
}
```

This is fairly simple: the constructor arguments determine which of the 3 Spout instances in the figure above is created. Note that each record is emitted with the tuple itself as the message id, so downstream acks can be tracked.

  • The SplitRecordBolt component

Since the data produced by the 3 ProduceRecordSpouts shares the same initial processing logic, the 3 spouts can be aggregated into a single SplitRecordBolt component containing that common logic. The implementation is as follows:

```java
package com.lyz.storm.batch.bolt;

import java.util.Map;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

/**
 * Common-logic SplitRecordBolt: splits records into words on whitespace.
 * @author liuyazhuang
 */
public class SplitRecordBolt extends BaseRichBolt {

    private static final long serialVersionUID = 1L;
    private static final Log LOG = LogFactory.getLog(SplitRecordBolt.class);

    private OutputCollector collector;

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        String type = input.getString(0);
        String line = input.getString(1);
        if (line != null && !line.trim().isEmpty()) {
            for (String word : line.split("\\s+")) {
                // anchor each emitted word to the input tuple
                collector.emit(input, new Values(type, word));
                LOG.info("Word emitted: type=" + type + ", word=" + word);
            }
        }
        // ack exactly once per input tuple, even when the line is empty
        collector.ack(input);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("type", "word"));
    }
}
```

Whatever the type of the received Tuple (STRING, NUMBER, or SIGN), the record is split, and the type information is still passed along to the next Bolt when the words are emitted.

  • The DistributeWordByTypeBolt component

DistributeWordByTypeBolt exists purely to dispatch Tuples: it declares several streams and sends each received Tuple to the appropriate downstream Bolt for processing. Since the Tuples emitted by SplitRecordBolt carry type information, DistributeWordByTypeBolt can dispatch by type. The implementation is as follows:

```java
package com.lyz.storm.batch.bolt;

import java.util.Map;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import com.lyz.storm.batch.type.Type;

import backtype.storm.generated.GlobalStreamId;
import backtype.storm.generated.Grouping;
import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

/**
 * DistributeWordByTypeBolt only dispatches Tuples: it declares several streams
 * and sends each received Tuple to the appropriate downstream Bolt.
 * Tuples emitted by SplitRecordBolt carry type information, so this bolt routes by type.
 * @author liuyazhuang
 */
public class DistributeWordByTypeBolt extends BaseRichBolt {

    private static final long serialVersionUID = 1L;
    private static final Log LOG = LogFactory.getLog(DistributeWordByTypeBolt.class);

    private OutputCollector collector;

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        Map<GlobalStreamId, Grouping> sources = context.getThisSources();
        LOG.info("sources==> " + sources);
    }

    @Override
    public void execute(Tuple input) {
        String type = input.getString(0);
        String word = input.getString(1);
        switch (type) {
            case Type.NUMBER:
                emit("stream-number-saver", type, input, word);
                break;
            case Type.STRING:
                emit("stream-string-saver", type, input, word);
                break;
            case Type.SIGN:
                emit("stream-sign-saver", type, input, word);
                break;
            default:
                // records of unknown type go to 'stream-discarder', which no bolt
                // subscribes to, so they are effectively discarded; as needed, you
                // can define a bolt that subscribes to 'stream-discarder'
                emit("stream-discarder", type, input, word);
        }
        // ack tuple
        collector.ack(input);
    }

    private void emit(String streamId, String type, Tuple input, String word) {
        collector.emit(streamId, input, new Values(type, word));
        LOG.info("Distribution, typed word emitted: type=" + type + ", word=" + word);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declareStream("stream-number-saver", new Fields("type", "word"));
        declarer.declareStream("stream-string-saver", new Fields("type", "word"));
        declarer.declareStream("stream-sign-saver", new Fields("type", "word"));
        declarer.declareStream("stream-discarder", new Fields("type", "word"));
    }
}
```

In fact, the 3 downstream Bolts (SaveDataBolt) all subscribe to the distributing component (DistributeWordByTypeBolt) in the same way; the dispatch logic is simply handed to DistributeWordByTypeBolt for centralized control.
When configuring this Bolt we used the fieldsGrouping strategy, so in practice each DistributeWordByTypeBolt task only receives Tuples of a single type. A shuffleGrouping could be used here as well; with that strategy, Tuples of different types may be emitted to the same DistributeWordByTypeBolt task, as sketched below.
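As a minimal sketch (reusing the component names from the topology above), the alternative wiring would simply drop the `Fields` argument:

```java
// Alternative wiring: shuffle tuples of all types across distributor tasks.
// The per-type routing then happens entirely inside DistributeWordByTypeBolt.
builder.setBolt("bolt-distributor", new DistributeWordByTypeBolt(), 6)
        .shuffleGrouping("bolt-splitter");
```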
In addition, this Bolt declares a stream named stream-discarder. Nothing in the Topology consumes it, so you can decide whether to implement a consumer based on your actual needs; an example subscription is sketched below.
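For example (a sketch only: `DiscardRecordBolt` is a hypothetical class you would write yourself, e.g. to log or audit unrecognized records), subscribing to that stream takes one extra line in the topology builder:

```java
// Hypothetical consumer for records with an unknown type.
builder.setBolt("bolt-discarder", new DiscardRecordBolt(), 1)
        .shuffleGrouping("bolt-distributor", "stream-discarder");
```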

  • The SaveDataBolt component

This final Bolt simulates saving the processed data. The code is as follows:

```java
package com.lyz.storm.batch.bolt;

import java.util.Map;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

/**
 * Simulates persisting the processed data.
 * In a real application you might save the processed data to a database;
 * that storage logic would be implemented in this Bolt.
 * @author liuyazhuang
 */
public class SaveDataBolt extends BaseRichBolt {

    private static final long serialVersionUID = 1L;
    private static final Log LOG = LogFactory.getLog(SaveDataBolt.class);

    private OutputCollector collector;
    private String type;

    public SaveDataBolt(String type) {
        this.type = type;
    }

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    @Override
    public void execute(Tuple input) {
        // just print the received tuple that is waiting to be persisted
        LOG.info("[" + type + "] " +
                "SourceComponent=" + input.getSourceComponent() +
                ", SourceStreamId=" + input.getSourceStreamId() +
                ", type=" + input.getString(0) +
                ", value=" + input.getString(1));
        // ack so the anchored tuple tree can complete
        collector.ack(input);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // sink bolt: emits nothing
    }
}
```

In a real application, you may need to save the processed data to a database; you can implement that storage logic in this Bolt, along the lines of the sketch below.
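As a minimal sketch of what that could look like (everything here is an assumption, not part of the original example: the class name `JdbcSaveDataBolt`, the JDBC URL and credentials, and the table `words(type VARCHAR, word VARCHAR)` are all placeholders), a persistence-oriented variant might insert each word over JDBC and ack or fail the tuple based on the outcome:

```java
package com.lyz.storm.batch.bolt;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Map;

import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Tuple;

/**
 * Hypothetical JDBC-backed variant of SaveDataBolt; the URL, credentials,
 * and table words(type VARCHAR, word VARCHAR) are placeholder assumptions.
 */
public class JdbcSaveDataBolt extends BaseRichBolt {

    private static final long serialVersionUID = 1L;

    private OutputCollector collector;
    private transient Connection conn;

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
        try {
            // one connection per task, opened once in prepare()
            conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/storm_demo", "user", "password");
        } catch (SQLException e) {
            throw new RuntimeException("Failed to open JDBC connection", e);
        }
    }

    @Override
    public void execute(Tuple input) {
        String sql = "INSERT INTO words (type, word) VALUES (?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, input.getString(0));
            ps.setString(2, input.getString(1));
            ps.executeUpdate();
            collector.ack(input);   // success: complete the tuple tree
        } catch (SQLException e) {
            collector.fail(input);  // failure: let Storm replay the tuple
        }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // sink bolt: emits nothing
    }
}
```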

Summary

Storm's core computational abstractions are the Spout, the Bolt, and Stream Groupings. Its higher-level features, such as Trident and DRPC, are built in one of two ways. Some are implemented on top of these basic components and the Stream Grouping distribution strategies, hiding the underlying distribution and processing logic behind a higher-level programming abstraction and sparing developers the underlying complexity. Others are derivative mechanisms added to make Storm's computing services easier to consume, such as batched transactional processing and RPC.

Reference links

  • http://storm.apache.org/documentation/Documentation.html
  • http://storm.apache.org/documentation/Concepts.html
  • http://storm.apache.org/documentation/Tutorial.html
  • http://storm.apache.org/documentation/Understanding-the-parallelism-of-a-Storm-topology.html
