Storm Course Notes


1  Parallelism

I. Overview

- Topologies run on the worker nodes (supervisors).
- The spouts and bolts run inside the supervisors.
- A task is the smallest unit of computation in Storm; a task is one instance of a spout or bolt.
- Tasks run inside processes. A process running on a supervisor is called a worker, and one supervisor can run multiple workers.
- One process (worker) can contain multiple threads (executors).

 

II. worker, executor, and task explained

1. A worker process executes a subset of one topology (note: a worker never serves more than one topology). A worker process starts one or more executor threads to run the components (spouts or bolts) of that topology. A running topology therefore consists of many worker processes on many physical machines across the cluster.

2. An executor is a single thread started by a worker process. Each executor only runs tasks of one component (spout or bolt) of one topology (note: there can be one or more tasks; by default Storm generates one task per component). The executor thread calls all of its task instances sequentially on each loop iteration (note: a single executor thread only ever hosts one or more instances of the same spout or bolt).

3. A task is the unit that ultimately runs the code in a spout or bolt (note: one task is one instance of a spout or bolt; during execution the executor thread calls that task's nextTuple or execute method). Once a topology is launched, the number of tasks of a component (spout or bolt) is fixed, but the number of executor threads used by that component can be adjusted dynamically (e.g., one executor thread can run one or more task instances of the component). This implies the invariant #threads <= #tasks (the thread count is at most the task count). By default the number of tasks equals the number of executor threads, i.e., one executor runs exactly one task.

4. By default a supervisor node can start at most 4 worker processes. A topology uses one worker process by default; each worker process starts one executor per component, and each executor runs one task.

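As a quick sketch of how the three levels add up in code (MySpout and MyBolt are placeholder class names, not from this course's code; the setters are covered in the next section):

    Config conf = new Config();
    conf.setNumWorkers(2);                          // 2 worker processes (JVMs) for this topology

    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("spout_id", new MySpout(), 2); // 2 spout executors
    builder.setBolt("bolt_id", new MyBolt(), 3)     // 3 bolt executors
           .shuffleGrouping("spout_id");
    // Result: 2 workers share 2 + 3 = 5 executors (plus ackers); with no
    // setNumTasks call each executor runs exactly 1 task, so 5 tasks in total.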

III. Increasing parallelism

1. worker (slots)

- By default a worker node can start at most 4 worker processes; the parameter is supervisor.slots.ports. It is already set in the Storm configuration: the defaults ship in defaults.yaml inside storm-core.jar.
  This count is tied to the number of CPUs; as a rule of thumb, configure as many slots as the CPU has cores.

- By default a Storm topology uses only one worker process; more workers can be requested in code.
- Set it with config.setNumWorkers(workers).

(Screenshots omitted: the configuration before and after the change.)

 

- Preferably a topology should use only one worker per machine, mainly because this reduces data transfer between workers.
- If all worker slots are taken, a newly submitted topology will not run; it sits in a waiting state.

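For reference, the relevant entry in defaults.yaml looks like the following (these port values are the stock defaults; each listed port is one worker slot, so four ports means at most four workers per supervisor):

    supervisor.slots.ports:
        - 6700
        - 6701
        - 6702
        - 6703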

2. executor

- By default one executor runs one task; the executor count can be set in code:

    builder.setSpout(id, spout, parallelism_hint);
    builder.setBolt(id, bolt, parallelism_hint);

(1) Screenshot omitted: before the change.
(Note: by default one worker holds 3 executors: one for the program's spout, one for its bolt, and one acker, i.e. the message-acknowledgement mechanism that confirms tuples.)
(2) Screenshot omitted: after the change.
(Note: the acker count is normally not set to 0; running without ackers causes problems. By default each worker gets one acker.)
(3) Screenshot omitted: bolt_id changed to run with two threads.

- The acker tasks can be removed with conf.setNumAckers(0);.
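For example (a sketch; note that with 0 ackers Storm acks every tuple as soon as the spout emits it, so reliability tracking is lost and fail() is never called):

    Config conf = new Config();
    conf.setNumAckers(0); // remove the acker executors; message acknowledgement is disabled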

 

 

3. task

- Set the number of task instances with boltDeclarer.setNumTasks(num);.

(Screenshot omitted: before the change; four threads, one task per thread.)
(Screenshot omitted: after the change; the 3 threads are the spout, the bolt, and the acker, and the bolt has two tasks, so the task count is 4.)

 

The number of executors is always less than or equal to the number of tasks (this headroom is what makes rebalance possible).
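A sketch of leaving that headroom for a later rebalance, reusing the ids from this course's example:

    builder.setBolt("bolt_id", new SumBolt(), 2) // start with 2 executor threads...
           .setNumTasks(4)                       // ...but 4 task instances, so rebalance can later raise the executors up to 4
           .shuffleGrouping("spout_id");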

2  Elastic scaling (adjusting counts dynamically with rebalance)

storm rebalance mytopology -w 10 -n 5 -e blue-spout=3 -e yellow-bolt=10

mytopology: the name of the topology

blue-spout: the id of the spout

yellow-bolt: the id of the bolt

-w: wait time; here the adjustment happens after 10 seconds (-w may be omitted)

-n: the number of worker processes, here 5; the requested count must fit within the free worker slots

-e: the number of executors for the named spout or bolt

(Note 1: -n and -e can be adjusted together or each on its own.)

(Note 2: the precondition for rebalance is that the code has already created multiple task instances via boltDeclarer.setNumTasks(num); raising the thread count only takes effect while the thread count is at most the task count.)
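For instance, each knob can also be changed on its own (commands sketched with the ids from above):

    storm rebalance mytopology -n 6              # change only the worker count
    storm rebalance mytopology -e yellow-bolt=4  # change only one executor count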

3  Messaging inside a worker process

Worker tuning options (changing the queue sizes):

conf.put(Config.TOPOLOGY_RECEIVER_BUFFER_SIZE,8);

conf.put(Config.TOPOLOGY_TRANSFER_BUFFER_SIZE,32);

conf.put(Config.TOPOLOGY_EXECUTOR_RECEIVE_BUFFER_SIZE,16384);

conf.put(Config.TOPOLOGY_EXECUTOR_SEND_BUFFER_SIZE,16384);

(The first two settings tune the worker-level receive and transfer buffering; the last two size each executor's receive and send queues, which are Disruptor ring buffers whose sizes must be powers of 2.)

4  Storm reliability

- A worker process dies
1) When a worker process is killed, it is restarted on another supervisor.
2) Messages sent while the worker process is down are lost.

- The supervisor process dies
1) If the supervisor process dies, the worker processes it previously started on that node are not affected.
2) Topologies launched after the supervisor died will no longer run on that supervisor.
3) If every supervisor dies, topologies can still be submitted but will not run, because no resources are available.

- The nimbus process dies (an HA concern)
1) A dead nimbus does not affect the execution of topologies that were already submitted.
2) After nimbus dies, no new topology can be submitted to the cluster.

- A server (node) goes down
1) Worker processes already running elsewhere are not affected.
2) Otherwise this behaves like nimbus or a supervisor dying.

- The ack/fail message-acknowledgement mechanism
1) If data emitted by the spout has been received by the bolt, it is not sent again.
2) If data emitted by the spout gets no positive acknowledgement (the bolt failed to receive it), the spout re-emits it.
3) By default an ack thread is started inside every worker process to perform message acknowledgement.

 

ack/fail example

package com.fhx.hadoopDeploy.storm.acker;

import java.util.Map;

import org.apache.log4j.Logger;

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.StormSubmitter;
import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

/**
 * Running sum over the data.
 */
public class ClusterStormTopologyAcker {
    private static Logger logger = Logger.getLogger(ClusterStormTopologyAcker.class);

    /**
     * spout
     */
    public static class DataSourceSpout extends BaseRichSpout {

        private Map conf;
        private TopologyContext context;
        private SpoutOutputCollector collector;

        /**
         * Initialization; called exactly once when this instance starts.
         * @param conf configuration parameters; gives access to the topology configuration
         * @param context the topology context (rarely needed)
         * @param collector the emitter (sends data to the next step, so bolts receive what the spout produces)
         */
        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.conf = conf;
            this.context = context;
            this.collector = collector;
        }

        int i = 0;

        /**
         * Called in an endless loop (the heartbeat); produces data here and emits it.
         */
        public void nextTuple() {
            logger.info("spout=" + i);
            this.collector.emit(new Values(i++), "id" + (i - 1)); // the second argument is the msgId, the key of the emitted tuple; the msgId parameter of ack and fail is this value
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }

        /**
         * Declares the output (so downstream bolts know what the spout emits).
         */
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("num")); // the number of declared fields must match the number of emitted values (values are fetched by field name)
        }

        @Override
        public void ack(Object msgId) { // record successfully processed messages; custom logic can go here
            logger.info("ack called: " + msgId);
        }

        @Override
        public void fail(Object msgId) { // record failed messages; custom logic can go here
//          this.collector.emit(new Values(i++), "id" + (i - 1)); // on failure, the data could be re-emitted indefinitely
            logger.info("fail called: " + msgId); // msgId is the key of each record
        }
    }

    /**
     * bolt
     */
    public static class SumBolt extends BaseRichBolt {

        private Map stormConf;
        private TopologyContext context;
        private OutputCollector collector;

        /**
         * Called once for initialization.
         * @param stormConf configuration parameters; gives access to the topology configuration
         * @param context the topology context (rarely needed)
         * @param collector the emitter
         */
        public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
            this.stormConf = stormConf;
            this.context = context;
            this.collector = collector;
        }

        int sum = 0;

        /**
         * Called in an endless loop; reads the data.
         */
        public void execute(Tuple input) {
            Integer value = input.getIntegerByField("num"); // fetch the value by field name

            /* the usual ack/fail handling pattern
            try {
                // TODO
                this.collector.ack(input);  // success: calls back the spout's ack method
            } catch (Exception e) {
                this.collector.fail(input); // failure: calls back the spout's fail method
            } */

            // simulated ack/fail handling
            if (value >= 10 && value <= 20) {
                this.collector.ack(input);  // success: calls back the spout's ack method
            } else {
                this.collector.fail(input); // failure: calls back the spout's fail method
            }
        }

        /**
         * Declares the output (so downstream bolts know what this bolt emits).
         * If execute emits data, declareOutputFields must declare the field names.
         * Since this is the last bolt and emits nothing downstream, no fields are declared here.
         */
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
//          declarer.declare(new Fields("testName"));
        }
    }

    /**
     * Wire the spout and bolt together into a topology.
     * @param args
     */
    public static void main(String[] args) {
        TopologyBuilder topologyBuilder = new TopologyBuilder();
        topologyBuilder.setSpout("spout_id", new DataSourceSpout());
        topologyBuilder.setBolt("bolt_id", new SumBolt(), 1).shuffleGrouping("spout_id"); // shuffleGrouping("spout_id") names the upstream step

        // run Storm locally
//      LocalCluster localCluster = new LocalCluster();
//      localCluster.submitTopology("topology", new Config(), topologyBuilder.createTopology());

        // run Storm on the cluster
        String runname = ClusterStormTopologyAcker.class.getSimpleName();
        try {
            StormSubmitter.submitTopology(runname, new Config(), topologyBuilder.createTopology());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

 

5  Storm UI overview

- Activate: the topology is active; Deactivate: not active (paused).
- emitted: the number of emitted tuples.
- transferred: the number of transferred tuples. The difference from emitted: if one task emits a tuple to 2 downstream tasks, the transferred tuple count is twice the emitted tuple count.
- complete latency: the average time from the spout emitting a tuple until the spout acks that tuple.
- process latency: the average time from a bolt receiving a tuple until the bolt acks it (calls the ack method).
- execute latency: the average time the bolt spends processing a tuple, excluding the ack.

 

6  DRPC

Distributed Remote Procedure Call.

Storm's DRPC exposes the cluster's processing power through an access interface: effectively the cluster publishes a function endpoint that can be invoked from anywhere.

LinearDRPCTopologyBuilder is deprecated; it has been superseded by Trident.

 

Local simulation program:

import java.util.Map;

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.LocalDRPC;
import backtype.storm.drpc.LinearDRPCTopologyBuilder;
import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class LocalStormDrpc {

    public static class MyBolt extends BaseRichBolt {
        private Map stormConf;
        private TopologyContext context;
        private OutputCollector collector;

        public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
            this.stormConf = stormConf;
            this.context = context;
            this.collector = collector;
        }

        /**
         * The tuple carries two values:
         * the first is the request id, the second is the request argument.
         */
        public void execute(Tuple input) {
            String value = input.getString(1);
            value = "hello " + value;

            this.collector.emit(new Values(input.getValue(0), value));
        }

        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("id", "value"));
        }
    }

    public static void main(String[] args) {
        LinearDRPCTopologyBuilder linearDRPCTopologyBuilder = new LinearDRPCTopologyBuilder("hello");
        linearDRPCTopologyBuilder.addBolt(new MyBolt());

        LocalCluster localCluster = new LocalCluster();
        LocalDRPC drpc = new LocalDRPC();
        localCluster.submitTopology("drpc", new Config(), linearDRPCTopologyBuilder.createLocalTopology(drpc));

        String result = drpc.execute("hello", "storm"); // the first argument is the DRPC function name registered above
        System.err.println("client call result: " + result);
    }
}

 

- For remote invocation, edit the configuration file:
  vi /opt/storm/conf/storm.yaml
- Start the DRPC server.
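The required storm.yaml entry and launch command presumably look like the following (a sketch: drpc.servers and the storm drpc command are standard; the host is taken from the client example below):

    # storm.yaml: list the nodes that will run the DRPC server
    drpc.servers:
        - "192.168.1.170"

    # then start the DRPC daemon on each listed node
    bin/storm drpc &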

 

import java.util.Map;

import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.drpc.LinearDRPCTopologyBuilder;
import backtype.storm.generated.AlreadyAliveException;
import backtype.storm.generated.InvalidTopologyException;
import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

public class ClusterStormDrpcServer {

    public static class MyBolt extends BaseRichBolt {
        private Map stormConf;
        private TopologyContext context;
        private OutputCollector collector;

        public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
            this.stormConf = stormConf;
            this.context = context;
            this.collector = collector;
        }

        /**
         * The tuple carries two values:
         * the first is the request id, the second is the request argument.
         */
        public void execute(Tuple input) {
            String value = input.getString(1);
            value = "hello1 " + value;

            this.collector.emit(new Values(input.getValue(0), value));
        }

        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("id", "value"));
        }
    }

    public static void main(String[] args) {
        LinearDRPCTopologyBuilder linearDRPCTopologyBuilder = new LinearDRPCTopologyBuilder("hello");
        linearDRPCTopologyBuilder.addBolt(new MyBolt());

        try {
            StormSubmitter.submitTopology("drpc", new Config(), linearDRPCTopologyBuilder.createRemoteTopology());
        } catch (AlreadyAliveException e) {
            e.printStackTrace();
        } catch (InvalidTopologyException e) {
            e.printStackTrace();
        }
    }
}

 

import org.apache.thrift7.TException;
import backtype.storm.generated.DRPCExecutionException;
import backtype.storm.utils.DRPCClient;

public class ClusterStormDrpcClient {

    public static void main(String[] args) {
        DRPCClient drpcClient = new DRPCClient("192.168.1.170", 3772);
        try {
            String result = drpcClient.execute("hello", "aaaaa");

            System.out.println(result);
        } catch (TException e) {
            e.printStackTrace();
        } catch (DRPCExecutionException e) {
            e.printStackTrace();
        }
    }
}

 

7  Transactions

Topologies are normally created with the TopologyBuilder class; transactional ones use the TransactionalTopologyBuilder class, but this transactional API is deprecated and has been superseded by Trident.

Storm's ack and fail mechanism lets you control the data down to each individual tuple and so guarantees transactional behaviour, but managing the transaction of every single tuple through ack and fail takes a long time. For better performance, use batch-level transaction control.

In the processing phase the batches need not be ordered; in the update (commit) phase they are processed in order of batch number.

 

import backtype.storm.Config;

import backtype.storm.LocalCluster;

import backtype.storm.coordination.BatchOutputCollector;

import backtype.storm.task.TopologyContext;

import backtype.storm.testing.MemoryTransactionalSpout;

import backtype.storm.topology.OutputFieldsDeclarer;

import backtype.storm.topology.base.BaseBatchBolt;

import backtype.storm.topology.base.BaseTransactionalBolt;

import backtype.storm.transactional.ICommitter;

import backtype.storm.transactional.TransactionAttempt;

import backtype.storm.transactional.TransactionalTopologyBuilder;

import backtype.storm.tuple.Fields;

import backtype.storm.tuple.Tuple;

import backtype.storm.tuple.Values;

 

import java.math.BigInteger;

import java.util.ArrayList;

import java.util.HashMap;

import java.util.List;

import java.util.Map;

 

/**

 * This is a basic example of a transactional topology. It keeps a count of the number of tuples seen so far in a

 * database. The source of data and the databases are mocked out as in memory maps for demonstration purposes. This

 * class is defined in depth on the wiki at https://github.com/nathanmarz/storm/wiki/Transactional-topologies

 */

public class TransactionalGlobalCount {

  public static final int PARTITION_TAKE_PER_BATCH = 3;

  public static final Map<Integer, List<List<Object>>> DATA = new HashMap<Integer, List<List<Object>>>() {{

    put(0, new ArrayList<List<Object>>() {{

      add(new Values("cat"));

      add(new Values("dog"));

      add(new Values("chicken"));

      add(new Values("cat"));

      add(new Values("dog"));

      add(new Values("apple"));

    }});

    put(1, new ArrayList<List<Object>>() {{

      add(new Values("cat"));

      add(new Values("dog"));

      add(new Values("apple"));

      add(new Values("banana"));

    }});

    put(2, new ArrayList<List<Object>>() {{

      add(new Values("cat"));

      add(new Values("cat"));

      add(new Values("cat"));

      add(new Values("cat"));

      add(new Values("cat"));

      add(new Values("dog"));

      add(new Values("dog"));

      add(new Values("dog"));

      add(new Values("dog"));

    }});

  }};

 

  public static class Value {

    int count = 0;

    BigInteger txid;

  }

 

  public static Map<String, Value> DATABASE = new HashMap<String, Value>();

  public static final String GLOBAL_COUNT_KEY = "GLOBAL-COUNT";

 

  public static class BatchCount extends BaseBatchBolt {  // BaseBatchBolt: a batch-processing bolt

    Object _id;

    BatchOutputCollector _collector;

 

    int _count = 0;

 

    @Override

    public void prepare(Map conf, TopologyContext context, BatchOutputCollector collector, Object id) {

      _collector = collector;

      _id = id;

    }

 

    @Override

    public void execute(Tuple tuple) {

      _count++;

    }

 

    @Override

    public void finishBatch() {  // the data is only emitted once the whole batch has been processed

      _collector.emit(new Values(_id, _count));

    }

 

    @Override

    public void declareOutputFields(OutputFieldsDeclarer declarer) {

      declarer.declare(new Fields("id", "count"));

    }

  }

 

  public static class UpdateGlobalCount extends BaseTransactionalBolt implements ICommitter {

    TransactionAttempt _attempt;

    BatchOutputCollector _collector;

 

    int _sum = 0;

 

    @Override

    public void prepare(Map conf, TopologyContext context, BatchOutputCollector collector, TransactionAttempt attempt) {

      _collector = collector;

      _attempt = attempt;

    }

 

    @Override

    public void execute(Tuple tuple) {

      _sum += tuple.getInteger(1);

    }

 

    @Override

    public void finishBatch() {  // the commit method

      Value val = DATABASE.get(GLOBAL_COUNT_KEY);

      Value newval;

      if (val == null || !val.txid.equals(_attempt.getTransactionId())) {

        newval = new Value();

        newval.txid = _attempt.getTransactionId();

        if (val == null) {

          newval.count = _sum;

        }

        else {

          newval.count = _sum + val.count;

        }

        DATABASE.put(GLOBAL_COUNT_KEY, newval);

      }

      else {

        newval = val;

      }

      _collector.emit(new Values(_attempt, newval.count));

    }

 

    @Override

    public void declareOutputFields(OutputFieldsDeclarer declarer) {

      declarer.declare(new Fields("id", "sum"));

    }

  }

 

  public static void main(String[] args) throws Exception {

    MemoryTransactionalSpout spout = new MemoryTransactionalSpout(DATA, new Fields("word"), PARTITION_TAKE_PER_BATCH); // arguments: the data source, the field name, and how many tuples each partition contributes per batch (3)

    TransactionalTopologyBuilder builder = new TransactionalTopologyBuilder("global-count", "spout", spout, 3);

    builder.setBolt("partial-count", new BatchCount(), 5).noneGrouping("spout");//5个bolt并行处理数据

    builder.setBolt("sum", new UpdateGlobalCount()).globalGrouping("partial-count");//更新的bolt,只有一个线程

 

    LocalCluster cluster = new LocalCluster();

 

    Config config = new Config();

    config.setDebug(true);

    config.setMaxSpoutPending(3);

 

    cluster.submitTopology("global-count-topology", config, builder.buildTopology());

 

    Thread.sleep(3000);

    cluster.shutdown();

  }

}

 

8  Trident

- Trident is a framework layered on top of Storm.
- If Storm is the servlet, Trident can be thought of as the Struts framework built on it.
- Trident operations:
  - Function

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;
import storm.trident.TridentTopology;
import storm.trident.operation.BaseFunction;
import storm.trident.operation.TridentCollector;
import storm.trident.testing.FixedBatchSpout;
import storm.trident.tuple.TridentTuple;

public class LocalTridentFunc {

    public static class PrintBolt extends BaseFunction {

        @Override
        public void execute(TridentTuple tuple, TridentCollector collector) {
            Integer value = tuple.getInteger(0);
            System.out.println(value);
        }
    }

    public static void main(String[] args) {
        FixedBatchSpout spout = new FixedBatchSpout(new Fields("sentence"), 1, new Values(1)); // field name: sentence, batch size: 1, content: 1
        spout.setCycle(false); // true: emit the spout content in a loop; false: emit it once

        TridentTopology tridentTopology = new TridentTopology();
        tridentTopology.newStream("spout_id", spout) // name the spout
                .each(new Fields("sentence"), // first argument: the input fields, matching what the spout emits
                      new PrintBolt(),        // second argument: the function to run (PrintBolt)
                      new Fields(""));        // third argument: the output fields

        LocalCluster localCluster = new LocalCluster();
        localCluster.submitTopology("tridentTopology", // topology name
                new Config(),             // configuration
                tridentTopology.build()); // the topology
    }
}

 

  - Filter

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;
import storm.trident.TridentTopology;
import storm.trident.operation.BaseFilter;
import storm.trident.operation.BaseFunction;
import storm.trident.operation.TridentCollector;
import storm.trident.testing.FixedBatchSpout;
import storm.trident.tuple.TridentTuple;

public class LocalTridentFilter {

    public static class PrintBolt extends BaseFunction {

        @Override
        public void execute(TridentTuple tuple, TridentCollector collector) {
            Integer value = tuple.getInteger(0);
            System.out.println(value);
        }
    }

    public static class Filter extends BaseFilter {

        @Override
        public boolean isKeep(TridentTuple tuple) {
            Integer value = tuple.getInteger(0);
            return value % 2 == 0; // true passes the tuple on to the next step; false drops it
        }
    }

    public static void main(String[] args) {
        FixedBatchSpout spout = new FixedBatchSpout(new Fields("sentence"), // field name: sentence
                1,                                            // batch size: 1
                new Values(1), new Values(2), new Values(3)); // content
        spout.setCycle(false); // true: emit the spout content in a loop; false: emit it once

        TridentTopology tridentTopology = new TridentTopology();
        tridentTopology.newStream("spout_id", spout) // name the spout
                .each(new Fields("sentence"),
                      new Filter())
                .each(new Fields("sentence"), // first argument: the input fields, matching the previous step's output
                      new PrintBolt(),        // second argument: the function to run (PrintBolt)
                      new Fields(""));        // third argument: the output fields

        LocalCluster localCluster = new LocalCluster();
        localCluster.submitTopology("tridentTopology", new Config(), tridentTopology.build());
    }
}

  - Merge

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;
import storm.trident.Stream;
import storm.trident.TridentTopology;
import storm.trident.operation.BaseFunction;
import storm.trident.operation.TridentCollector;
import storm.trident.testing.FixedBatchSpout;
import storm.trident.tuple.TridentTuple;

public class LocalTridentMerger {

    public static class PrintBolt extends BaseFunction {

        @Override
        public void execute(TridentTuple tuple, TridentCollector collector) {
            Integer value = tuple.getInteger(0);
            System.out.println(value);
        }
    }

    public static void main(String[] args) {
        FixedBatchSpout spout = new FixedBatchSpout(new Fields("sentence"), 1, new Values(1));
        spout.setCycle(false);

        TridentTopology tridentTopology = new TridentTopology();
        Stream newStream = tridentTopology.newStream("spout_id", spout);
        Stream newStream1 = tridentTopology.newStream("spout_id1", spout);

        tridentTopology.merge(newStream, newStream1) // merges multiple streams into one
                .each(new Fields("sentence"), new PrintBolt(), new Fields(""));

        LocalCluster localCluster = new LocalCluster();
        localCluster.submitTopology("tridentTopology", new Config(), tridentTopology.build());
    }
}

 

  - Stream grouping (group by)
  - Aggregation (aggregate)
  - Number summation (see the sketch after this list)
  - Word count (covered by the official example below)
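A minimal sketch of the number-summation case, reusing the FixedBatchSpout pattern from the examples above (Sum and Debug are Trident built-ins from storm.trident.operation.builtin; aggregate here sums each batch):

    FixedBatchSpout spout = new FixedBatchSpout(new Fields("num"), 3, new Values(1), new Values(2), new Values(3)); // one batch holding three numbers
    spout.setCycle(false);

    TridentTopology tridentTopology = new TridentTopology();
    tridentTopology.newStream("spout_id", spout)
            .aggregate(new Fields("num"),  // input field
                       new Sum(),          // built-in aggregator: adds up the values in a batch
                       new Fields("sum"))  // output field
            .each(new Fields("sum"), new Debug()); // built-in filter that prints every tuple

    LocalCluster localCluster = new LocalCluster();
    localCluster.submitTopology("tridentTopology", new Config(), tridentTopology.build());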

- Official example

package cn.crxy.storm_example;

 

import backtype.storm.Config;

import backtype.storm.LocalCluster;

import backtype.storm.LocalDRPC;

import backtype.storm.StormSubmitter;

import backtype.storm.generated.StormTopology;

import backtype.storm.tuple.Fields;

import backtype.storm.tuple.Values;

import storm.trident.TridentState;

import storm.trident.TridentTopology;

import storm.trident.operation.BaseFunction;

import storm.trident.operation.TridentCollector;

import storm.trident.operation.builtin.Count;

import storm.trident.operation.builtin.FilterNull;

import storm.trident.operation.builtin.MapGet;

import storm.trident.operation.builtin.Sum;

import storm.trident.testing.FixedBatchSpout;

import storm.trident.testing.MemoryMapState;

import storm.trident.tuple.TridentTuple;

 

 

public class TridentWordCount {

  public static class Split extends BaseFunction { // a BaseFunction is Trident's equivalent of implementing a bolt (BaseRichBolt)

    @Override

    public void execute(TridentTuple tuple, TridentCollector collector) {

      String sentence = tuple.getString(0);

      for (String word : sentence.split(" ")) {

        collector.emit(new Values(word));

      }

    }

  }

 

  public static StormTopology buildTopology(LocalDRPC drpc) {

      

    // Fields declares the output field; 3 means each batch holds 3 tuples; each Values is one tuple

    FixedBatchSpout spout = new FixedBatchSpout(new Fields("sentence"), 3, new Values("the cow jumped over the moon"),

        new Values("the man went to the store and bought some candy"), new Values("four score and seven years ago"),

        new Values("how many apples can you eat"), new Values("to be or not to be the person"));

    spout.setCycle(true); // true makes the spout emit the data above in a loop

 

    TridentTopology topology = new TridentTopology();

    TridentState wordCounts = topology.newStream("spout1", spout) // obtain a stream
        .parallelismHint(16)           // set the parallelism
        .each(new Fields("sentence"),  // each is the bolt-like step, applied to every tuple from the spout
              new Split(),             // the concrete processing code
              new Fields("word"))      // declare an output field
        .groupBy(new Fields("word"))   // group by the emitted word field
        .persistentAggregate(new MemoryMapState.Factory(), new Count(), new Fields("count")) // aggregate: count per word, emitted as a count field
        .parallelismHint(16);

 

    topology.newDRPCStream("words", drpc)  // publish a DRPC function
        .each(new Fields("args"),          // iterate over the arguments passed in
              new Split(),                 // run the splitting function
              new Fields("word"))
        .groupBy(new Fields("word"))       // group
        .stateQuery(wordCounts,            // look the words up in the wordCounts state
                    new Fields("word"),
                    new MapGet(),
                    new Fields("count"))
        .each(new Fields("count"),         // iterate again
              new FilterNull())
        .aggregate(new Fields("count"),
                   new Sum(),
                   new Fields("sum"));     // aggregate: the grand total

    return topology.build();

  }

 

  public static void main(String[] args) throws Exception {

    Config conf = new Config();

    conf.setMaxSpoutPending(20);

    if (args.length == 0) {

      LocalDRPC drpc = new LocalDRPC();

      LocalCluster cluster = new LocalCluster();

      cluster.submitTopology("wordCounter", conf, buildTopology(drpc)); //提交一个drpc的客户端

      for (int i = 0; i < 100; i++) {

        System.out.println("DRPC RESULT: " + drpc.execute("words", "cat the dog jumped")); //进行调用

        Thread.sleep(1000);

      }

    }

    else {

      conf.setNumWorkers(3);

      StormSubmitter.submitTopologyWithProgressBar(args[0], conf, buildTopology(null));

    }

  }

}
