Summary of Getting Started with Storm (《Storm入门》)


1. What is its essence?

 1.1 Stream processing: the source is a spout and everything downstream is a bolt; a message stream is processed by one spout and n bolts chained in series;

 1.2 Topology structure: the topology defines how the data flows between components (see the sketch below).
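
    As an illustration (the component names come from the book's word-count example; WordReader stands in for its spout class), one spout feeding a chain of two bolts looks like this:

    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("word-reader", new WordReader());
    builder.setBolt("word-normalizer", new WordNormalizer())
           .shuffleGrouping("word-reader");
    builder.setBolt("word-counter", new WordCounter())
           .fieldsGrouping("word-normalizer", new Fields("word"));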

2. What is the first principle?

   

3. What does the knowledge structure look like?

 3.1 topology

  3.1.1 Shuffle grouping

 builder.setBolt("word-normalizer", new WordNormalizer()).shuffleGrouping("word-reader");

  3.1.2 Fields grouping

    builder.setBolt("word-counter", new WordCounter(),2)           .fieldsGrouping("word-normalizer", new Fields("word"));

  3.1.3 All grouping

    Sends a copy of every tuple to each receiving instance; typically used to send signals to bolts.

    builder.setBolt("word-counter", new WordCounter(),2)           .fieldsGroupint("word-normalizer",new Fields("word"))           .allGrouping("signals-spout","signals");
//bolts.java
    public void execute(Tuple input) {
        String str = null;
        try{
            if(input.getSourceStreamId().equals("signals")){
                str = input.getStringByField("action");
                if("refreshCache".equals(str))
                    counters.clear();
            }
        }catch (IllegalArgumentException e){
            // do nothing
        }
        // ...
    }

   3.1.4 Custom grouping

//ModuleGrouping.java
    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;
    import backtype.storm.grouping.CustomStreamGrouping;
    import backtype.storm.task.TopologyContext;
    import backtype.storm.tuple.Fields;

    public class ModuleGrouping implements CustomStreamGrouping, Serializable{

        int numTasks = 0;

        @Override
        public List<Integer> chooseTasks(List<Object> values) {
            List<Integer> boltIds = new ArrayList<Integer>();
            if(values.size() > 0){
                String str = values.get(0).toString();
                if(str.isEmpty()){
                    boltIds.add(0);
                }else{
                    // Route by the tuple's first character, modulo the task count
                    boltIds.add(str.charAt(0) % numTasks);
                }
            }
            return boltIds;
        }

        @Override
        public void prepare(TopologyContext context, Fields outFields, List<Integer> targetTasks) {
            numTasks = targetTasks.size();
        }
    }

    //TopologyMain.java
    builder.setBolt("word-normalizer", new WordNormalizer())
           .customGrouping("word-reader", new ModuleGrouping());

   3.1.5 Direct grouping

    The data source decides which component task receives the tuple, by calling the emitDirect method instead of emit (see the sketch below).
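
    A minimal sketch of the idea (not from the book; the class name, the routing rule, and the "word-normalizer" target are illustrative), using the old backtype.storm API that the rest of these notes use:

    //DirectEmitSpout.java (sketch)
    import java.util.List;
    import java.util.Map;
    import backtype.storm.spout.SpoutOutputCollector;
    import backtype.storm.task.TopologyContext;
    import backtype.storm.topology.OutputFieldsDeclarer;
    import backtype.storm.topology.base.BaseRichSpout;
    import backtype.storm.tuple.Fields;
    import backtype.storm.tuple.Values;

    public class DirectEmitSpout extends BaseRichSpout {
        private SpoutOutputCollector collector;
        private List<Integer> targetTasks;

        public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
            this.collector = collector;
            // Task ids of the bolt that will receive tuples directly
            this.targetTasks = context.getComponentTasks("word-normalizer");
        }

        public void nextTuple() {
            String word = "storm"; // stand-in for a real source
            // The spout itself picks the destination task, then emits to it
            int task = targetTasks.get(Math.abs(word.hashCode()) % targetTasks.size());
            collector.emitDirect(task, new Values(word));
        }

        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            // The stream must be declared as direct to be used with emitDirect
            declarer.declare(true, new Fields("word"));
        }
    }

    The receiving bolt must subscribe to that stream with .directGrouping("spout-id") in the topology definition.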

   3.1.6 Global grouping

    Sends the tuples created by every instance of the data source to a single target instance (the task with the lowest id).
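
    A one-line sketch, reusing the word-count components from above:

    builder.setBolt("word-counter", new WordCounter())
           .globalGrouping("word-normalizer");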

   3.1.7 None grouping (currently equivalent to shuffle grouping)
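
    Declared through the corresponding InputDeclarer method (a one-line sketch):

    builder.setBolt("word-normalizer", new WordNormalizer())
           .noneGrouping("word-reader");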

   3.1.8 LocalCluster VS StormSubmitter

    //LocalCluster cluster = new LocalCluster();
    //cluster.submitTopology("Count-Word-Topology-With-Refresh-Cache", conf,
    //        builder.createTopology());
    StormSubmitter.submitTopology("Count-Word-Topology-With_Refresh-Cache", conf,
            builder.createTopology());
    //Thread.sleep(1000);
    //cluster.shutdown();

   Package the jar

mvn package

   Submit the topology

storm jar allmycode.jar org.me.MyTopology arg1 arg2 arg3

   3.1.9 DRPC (Distributed Remote Procedure Call)

   Tool 1: the DRPC server

    Serves as the spout data source of the topology and acts as the connector between clients and the Storm topology. The server receives a function to execute together with its arguments, and assigns a request ID to each block of data the function operates on in order to identify the RPC request. When execution reaches the last bolt of the topology, that bolt must emit the RPC request ID together with the result, so that the DRPC server can return the result to the correct client.
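
    From the client side a call looks like this (a sketch; the host is illustrative and 3772 is Storm's default DRPC port; the class is backtype.storm.utils.DRPCClient, already imported in the listing below):

    DRPCClient client = new DRPCClient("drpc-server-host", 3772);
    String result = client.execute("add", "1+1+5+10");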


   Tool 2: LinearDRPCTopologyBuilder

    The topology it generates creates DRPCSpouts, which connect to the DRPC server and distribute data to the rest of the topology, and it wraps the bolts so that the result is returned from the last bolt. All bolts added to the LinearDRPCTopologyBuilder object are executed in sequence.

//DRPCTopologyMain.java
package drpc;

import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.LocalDRPC;
import backtype.storm.drpc.LinearDRPCTopologyBuilder;
import backtype.storm.utils.DRPCClient;

public class DRPCTopologyMain {

    public static void main(String[] args) {
        //Create the local drpc client/server
        LocalDRPC drpc = new LocalDRPC();

        //Create the drpc topology
        LinearDRPCTopologyBuilder builder = new LinearDRPCTopologyBuilder("add");
        builder.addBolt(new AdderBolt(), 2);

        Config conf = new Config();
        conf.setDebug(true);

        //Create cluster and submit the topology
        LocalCluster cluster = new LocalCluster();
        cluster.submitTopology("drpc-adder-topology", conf, builder.createLocalTopology(drpc));

        //Test the topology
        String result = drpc.execute("add", "1+-1");
        checkResult(result, 0);
        result = drpc.execute("add", "1+1+5+10");
        checkResult(result, 17);

        //Finish and shutdown
        cluster.shutdown();
        drpc.shutdown();
    }

    private static boolean checkResult(String result, int expected) {
        if(result != null && !result.equals("NULL")){
            if(Integer.parseInt(result) == expected){
                System.out.println("Add valid [result: " + result + "]");
                return true;
            }else{
                System.err.print("Invalid result [" + result + "]");
            }
        }else{
            System.err.println("There was an error running the drpc call");
        }
        return false;
    }
}


 3.2 Spout

  3.2.1 Reliability

    Emit each tuple with a message ID and implement the ack() and fail() callbacks, which Storm invokes as tuples succeed or fail downstream (a bolt acknowledges with collector.ack()). BaseRichSpout provides empty default implementations of ack() and fail() that you override, as in the example below.

package banktransactions;

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

import org.apache.log4j.Logger;

import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;

public class TransactionsSpouts extends BaseRichSpout{

    private static final Integer MAX_FAILS = 2;
    Map<Integer,String> messages;
    Map<Integer,Integer> transactionFailureCount;
    Map<Integer,String> toSend;
    private SpoutOutputCollector collector;
    static Logger LOG = Logger.getLogger(TransactionsSpouts.class);

    public void ack(Object msgId) {
        messages.remove(msgId);
        LOG.info("Message fully processed ["+msgId+"]");
    }

    public void close() {}

    public void fail(Object msgId) {
        if(!transactionFailureCount.containsKey(msgId))
            throw new RuntimeException("Error, transaction id not found ["+msgId+"]");
        Integer transactionId = (Integer) msgId;
        // Get the transaction's failure count
        Integer failures = transactionFailureCount.get(transactionId) + 1;
        if(failures >= MAX_FAILS){
            // If it exceeds the max fails, bring down the topology
            throw new RuntimeException("Error, transaction id ["+transactionId+"] has had many errors ["+failures+"]");
        }
        // Otherwise save the new failure count and re-send the message
        transactionFailureCount.put(transactionId, failures);
        toSend.put(transactionId, messages.get(transactionId));
        LOG.info("Re-sending message ["+msgId+"]");
    }

    public void nextTuple() {
        if(!toSend.isEmpty()){
            for(Map.Entry<Integer, String> transactionEntry : toSend.entrySet()){
                Integer transactionId = transactionEntry.getKey();
                String transactionMessage = transactionEntry.getValue();
                collector.emit(new Values(transactionMessage), transactionId);
            }
            /*
             * The nextTuple, ack and fail methods all run in the same loop,
             * so we can consider the clear method atomic.
             */
            toSend.clear();
        }
        try {
            Thread.sleep(1);
        } catch (InterruptedException e) {}
    }

    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        Random random = new Random();
        messages = new HashMap<Integer, String>();
        toSend = new HashMap<Integer, String>();
        transactionFailureCount = new HashMap<Integer, Integer>();
        for(int i = 0; i < 100; i++){
            messages.put(i, "transaction_" + random.nextInt());
            transactionFailureCount.put(i, 0);
        }
        toSend.putAll(messages);
        this.collector = collector;
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("transactionMessage"));
    }
}


 3.2.2 Getting data: direct connection

  3.2.2.1 Direct connection

    3.2.2.2 Direct connection with hashing

    Multiple spouts read different parts of the same stream. Spouts (and bolts) can obtain information about their own component instance through the TopologyContext and act differently depending on which instance they are (see the sketch below).
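
    A minimal sketch of the partitioning idea (not from the book; the shard fields and the hashing rule are illustrative):

    // Fields and open() of a spout that claims one shard of a shared stream
    private int shard;
    private int shardCount;

    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        // Index of this instance among all tasks of the same component (0..n-1)
        shard = context.getThisTaskIndex();
        shardCount = context.getComponentTasks(context.getThisComponentId()).size();
        // Hypothetical rule: this instance only consumes items for which
        // Math.abs(item.hashCode()) % shardCount == shard
    }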


   3.2.2.3 Coordinated direct connection

   To connect to an unknown set of devices, use a coordination system that maintains the device list, detects changes to it, and creates and destroys connections in response. For example, when collecting log files from web servers, the list of web servers may change over time: when a web server is added, the coordination system detects the change and creates a new spout for it (a sketch follows).
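
    One possible sketch of such a coordination system (Apache ZooKeeper is this example's assumption, not something the notes prescribe; the znode path and host are made up):

    //ServerListWatcher.java (sketch)
    import java.util.List;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class ServerListWatcher implements Watcher {
        private final ZooKeeper zk;

        public ServerListWatcher() throws Exception {
            zk = new ZooKeeper("zookeeper-host:2181", 3000, this);
            refresh();
        }

        private void refresh() throws Exception {
            // Read the current web-server list and re-arm the watch
            List<String> servers = zk.getChildren("/log-servers", this);
            // ... diff against the previous list: create a connection (or a
            // new spout) for added servers, destroy connections for removed ones
        }

        @Override
        public void process(WatchedEvent event) {
            // Called by ZooKeeper whenever the watched znode's children change
            try {
                refresh();
            } catch (Exception e) {
                // log and retry
            }
        }
    }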



 3.2.3 Getting data: message queues

  Advantages: the replay capability of many queue systems strengthens reliability; the spout does not need to know anything about the message emitters; adding and removing emitters is much simpler than with direct connections.

  Disadvantage: it adds another point of failure.

  Recommendation: do not create many threads inside a spout, because each spout already runs in its own thread. A better alternative is to increase the topology's parallelism, that is, to create more threads in the distributed environment through the Storm cluster (see the one-liner below).
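
    For instance (a sketch; RedisQueueSpout is a hypothetical name for the queue-reading spout), the parallelism hint asks Storm to run several spout instances across the cluster instead of spawning threads by hand:

    builder.setSpout("redis-reader", new RedisQueueSpout(), 4); // 4 executors
    conf.setNumWorkers(2); // spread them over 2 worker processes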


    // Create a thread in the spout's open method to fetch messages from Redis
    new Thread(new Runnable() {
        @Override
        public void run() {
            while(true){
                try{
                    Jedis client = new Jedis(redisHost, redisPort);
                    List<String> res = client.blpop(Integer.MAX_VALUE, queues);
                    messages.offer(res.get(1));
                }catch(Exception e){
                    LOG.error("Error reading queues from redis", e);
                    try {
                        Thread.sleep(100);
                    }catch(InterruptedException e1){}
                }
            }
        }
    }).start();

    // In nextTuple, the only thing to do is take the messages from the
    // internal queue and emit them again.
    public void nextTuple(){
        while(!messages.isEmpty()){
            collector.emit(new Values(messages.poll()));
        }
    }

   3.2.4 DRPC

    A DRPCSpout receives a function invocation from the DRPC server and executes it. For the most common cases, backtype.storm.drpc.DRPCSpout is good enough, though it is still possible to create your own implementation using the DRPC classes inside the Storm package (a wiring sketch follows).
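
    As a sketch of the do-it-yourself route (not from the book; the "exclaim" function and the ExclamationBolt stand-in are hypothetical), DRPCSpout can be wired manually and paired with Storm's ReturnResults bolt, which hands the final result back to the DRPC server:

    TopologyBuilder builder = new TopologyBuilder();
    // DRPCSpout emits [args, return-info] for each invocation of "exclaim"
    builder.setSpout("drpc", new DRPCSpout("exclaim"));
    builder.setBolt("exclaim", new ExclamationBolt(), 3)
           .shuffleGrouping("drpc");
    // ReturnResults expects [result, return-info] and replies via the DRPC server
    builder.setBolt("return", new ReturnResults(), 3)
           .shuffleGrouping("exclaim");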

