Storm Trident notes


To understand how Trident works and how to use it, you first need to understand Trident state.

Trident state

 

  Trident provides a fully fledged batch processing API to process those small batches. The API is very similar to what you see in high level abstractions for Hadoop like Pig or Cascading: you can do group by's, joins, aggregations, run functions, run filters, and so on. Of course, processing each small batch in isolation isn't that interesting, so Trident provides functions for doing aggregations across batches and persistently storing those aggregations - whether in memory, in Memcached, in Cassandra, or some other store. Finally, Trident has first-class functions for querying sources of realtime state. That state could be updated by Trident (like in this example), or it could be an independent source of state.
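The paragraph above describes processing a stream as small batches and persistently aggregating across them. Here is a minimal Python simulation of that idea (not Trident API code): a plain dict stands in for the persistent store (Memcached, Cassandra, etc.), and each small batch is counted locally and merged into it. All names here (`store`, `process_batch`, `stream_in_batches`) are illustrative, not part of any real API.

```python
from collections import Counter
from itertools import islice

# A dict stands in for the persistent store (Memcached, Cassandra, ...).
store = {}

def process_batch(batch):
    """Aggregate one small batch, then merge it into the persistent store."""
    partial = Counter(batch)  # per-batch group-by + count
    for word, count in partial.items():
        store[word] = store.get(word, 0) + count

def stream_in_batches(stream, batch_size):
    """Chop an incoming stream into small batches, as Trident does."""
    it = iter(stream)
    while batch := list(islice(it, batch_size)):
        yield batch

words = ["the", "cow", "jumped", "over", "the", "moon"]
for batch in stream_in_batches(words, 2):
    process_batch(batch)

print(store["the"])  # 2
```

The point is that each batch is aggregated in isolation, but the interesting result (the running word counts) lives in the store that survives across batches.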

    Swapping the topology to store results in Memcached is as simple as replacing the persistentAggregate line with this (using trident-memcached), where the "serverLocations" variable is a list of host/ports for the Memcached cluster:

     .persistentAggregate(MemcachedState.transactional(serverLocations), new Count(), new Fields("result"))

   

    One of the cool things about Trident is that it has fully fault-tolerant, exactly-once processing semantics. This makes it easy to reason about your realtime processing. Trident persists state in a way so that if failures occur and retries are necessary, it won't perform multiple updates to the database for the same source data.
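One way Trident's transactional state achieves this (per the Storm documentation) is by storing the batch transaction id alongside each value: if a retried batch arrives with a txid the store has already seen for that key, the update is skipped. The sketch below is an assumed, simplified illustration of that mechanism in Python, not the actual Trident implementation; `apply_count` is a hypothetical name.

```python
# Sketch of Trident-style transactional state: the store keeps the batch
# txid alongside each value, so a retried batch is applied at most once.
store = {}  # key -> (txid, value)

def apply_count(key, txid, delta):
    prev_txid, value = store.get(key, (None, 0))
    if prev_txid == txid:
        return value  # this batch was already applied; skip the update
    store[key] = (txid, value + delta)
    return value + delta

apply_count("the", txid=1, delta=2)
apply_count("the", txid=1, delta=2)  # retry of batch 1: no double counting
apply_count("the", txid=2, delta=1)
print(store["the"])  # (2, 3)
```

Note that this only works because Trident guarantees a replayed batch carries the same txid and the same tuples as the original attempt.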

   

    The persistentAggregate method transforms a Stream into a TridentState object. In this case the TridentState object represents all the word counts. We will use this TridentState object to implement the distributed query portion of the computation.
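To make the "distributed query portion" concrete, here is a hypothetical Python sketch of what a DRPC-style word-count query does against that state: split the query arguments into words, look each one up (a stateQuery-like step, with missing words counting as zero), and sum the results. The `state` contents and `count_query` name are invented for illustration.

```python
# Hypothetical query side: the dict plays the role of a TridentState
# holding word counts built up by persistentAggregate.
state = {"cat": 4, "dog": 3, "apple": 1}

def count_query(args):
    words = args.split()                       # split the query arguments
    counts = [state.get(w, 0) for w in words]  # stateQuery-like lookup
    return sum(counts)                         # final aggregation

print(count_query("cat dog the man"))  # 7
```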

 

    Trident is intelligent about how it executes a topology to maximize performance. There are two interesting things happening automatically in this topology:

  1. Operations that read from or write to state (like persistentAggregate and stateQuery) automatically batch operations to that state. So if there are 20 updates that need to be made to the database for the current batch of processing, rather than doing 20 read requests and 20 write requests to the database, Trident will automatically batch up the reads and writes, doing only 1 read request and 1 write request (and in many cases, you can use caching in your State implementation to eliminate the read request). So you get the best of both worlds: convenience - being able to express your computation in terms of what should be done with each tuple - and performance.

  2. Trident aggregators are heavily optimized. Rather than transfer all tuples for a group to the same machine and then run the aggregator, Trident will do partial aggregations when possible before sending tuples over the network. For example, the Count aggregator computes the count on each partition, sends the partial count over the network, and then sums together all the partial counts to get the total count. This technique is similar to the use of combiners in MapReduce.
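Point 2 above can be sketched in a few lines of Python. Each partition computes a partial count locally; only those small partials would cross the network before being merged into the total, which mirrors how the Count aggregator (and MapReduce combiners) reduce data transfer. The partition contents are made up for the example.

```python
from collections import Counter

# Tuples are spread across three partitions (made-up data).
partitions = [
    ["the", "cow", "the"],
    ["jumped", "over", "the"],
    ["moon"],
]

# Step 1: each partition computes its partial counts locally.
partials = [Counter(p) for p in partitions]

# Step 2: only the small partial counts are "sent over the network"
# and summed into the final result.
total = sum(partials, Counter())
print(total["the"])  # 3
```

Without the partial step, all seven tuples for the group would have to move to one machine; with it, only three small count maps do.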

 
