Storm Kafka Plugin Usage Example
1. POM dependencies
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>jiankunking</groupId>
    <artifactId>kafkastorm</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>
    <url>http://blog.csdn.net/jiankunking</url>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>3.8.1</version>
            <!--<scope>test</scope>-->
        </dependency>
        <dependency>
            <groupId>org.apache.storm</groupId>
            <artifactId>storm-core</artifactId>
            <version>1.1.0</version>
            <!-- Keep the scope commented out for local debugging;
                 restore it when packaging for deployment -->
            <!--<scope>provided</scope>-->
        </dependency>
        <dependency>
            <groupId>org.apache.kafka</groupId>
            <artifactId>kafka_2.11</artifactId>
            <version>0.10.1.1</version>
            <exclusions>
                <exclusion>
                    <groupId>org.apache.zookeeper</groupId>
                    <artifactId>zookeeper</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>log4j</groupId>
                    <artifactId>log4j</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>org.slf4j</groupId>
                    <artifactId>slf4j-log4j12</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>org.apache.storm</groupId>
            <artifactId>storm-kafka</artifactId>
            <version>1.1.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.httpcomponents</groupId>
            <artifactId>httpclient</artifactId>
            <version>4.3.3</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.6</source>
                    <target>1.6</target>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
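With the assembly plugin configured as above, a typical build-and-submit flow looks roughly like the following. This is a sketch, assuming a Storm client is installed on the PATH; the jar name follows from the artifactId and version in the POM.

```shell
# Build the fat jar: target/kafkastorm-1.0-SNAPSHOT-jar-with-dependencies.jar
mvn clean package

# Submit to a Storm cluster. Passing any argument makes main() (section 4)
# choose cluster mode instead of local mode.
storm jar target/kafkastorm-1.0-SNAPSHOT-jar-with-dependencies.jar \
    com.jiankunking.stormkafka.topologies.CustomCounterTopology cluster
```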
2. Custom bolt
package com.jiankunking.stormkafka.bolts;

import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Tuple;

/**
 * Created by jiankunking on 2017/4/29 11:15.
 */
public class CustomBolt extends BaseBasicBolt {

    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        String sentence = input.getString(0);
        System.out.println(sentence);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // This bolt emits nothing downstream, so no output fields are declared.
        System.out.println("declareOutputFields");
    }
}
3. Custom Scheme
package com.jiankunking.stormkafka.schemes;

import org.apache.storm.spout.Scheme;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.util.List;

/**
 * Created by jiankunking on 2017/4/22 10:52.
 */
public class MessageScheme implements Scheme {

    private static final Logger LOGGER = LoggerFactory.getLogger(MessageScheme.class);

    @Override
    public List<Object> deserialize(ByteBuffer byteBuffer) {
        String msg = this.getString(byteBuffer);
        return new Values(msg);
    }

    @Override
    public Fields getOutputFields() {
        return new Fields("msg");
    }

    private String getString(ByteBuffer buffer) {
        try {
            Charset charset = Charset.forName("UTF-8");
            CharsetDecoder decoder = charset.newDecoder();
            // decoder.decode(buffer) would consume the buffer: only the first
            // call returns content, later calls come back empty.
            // Decoding a read-only copy leaves the original position untouched.
            CharBuffer charBuffer = decoder.decode(buffer.asReadOnlyBuffer());
            return charBuffer.toString();
        } catch (Exception ex) {
            LOGGER.error("Cannot parse the provided message! " + ex.toString());
            return "error";
        }
    }
}
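The `asReadOnlyBuffer()` trick above is worth isolating: `CharsetDecoder.decode` advances the buffer's position to its limit, so decoding the buffer directly works only once. A minimal, Storm-free sketch (class and method names are mine, not from the original post):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class ByteBufferDecodeDemo {

    // Decodes a ByteBuffer without disturbing its position, mirroring the
    // buffer.asReadOnlyBuffer() approach used in MessageScheme. The read-only
    // copy has its own independent position, so the original is untouched.
    static String decode(ByteBuffer buffer) {
        return StandardCharsets.UTF_8.decode(buffer.asReadOnlyBuffer()).toString();
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8));

        // Decoding the read-only copy is repeatable:
        System.out.println(decode(buf)); // hello
        System.out.println(decode(buf)); // hello

        // Decoding the buffer directly consumes it:
        System.out.println(StandardCharsets.UTF_8.decode(buf)); // hello
        System.out.println(StandardCharsets.UTF_8.decode(buf)); // (empty)
    }
}
```

This is why the commented-out `decoder.decode(buffer)` in the scheme printed a result only the first time.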
4. Topology entry class
package com.jiankunking.stormkafka.topologies;

import com.jiankunking.stormkafka.bolts.CustomBolt;
import com.jiankunking.stormkafka.schemes.MessageScheme;
import com.jiankunking.stormkafka.util.PropertiesUtil;
import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.generated.AlreadyAliveException;
import org.apache.storm.generated.AuthorizationException;
import org.apache.storm.generated.InvalidTopologyException;
import org.apache.storm.kafka.BrokerHosts;
import org.apache.storm.kafka.KafkaSpout;
import org.apache.storm.kafka.SpoutConfig;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.spout.SchemeAsMultiScheme;
import org.apache.storm.topology.TopologyBuilder;

import java.util.Arrays;
import java.util.Map;

/**
 * Created by jiankunking on 2017/4/19 16:27.
 */
public class CustomCounterTopology {

    /**
     * Entry point: the class that submits the topology.
     */
    public static void main(String[] args) throws AlreadyAliveException, InvalidTopologyException {
        PropertiesUtil propertiesUtil = new PropertiesUtil("/application.properties", false);
        Map propsMap = propertiesUtil.getAllProperty();

        String zks = propsMap.get("zk_hosts").toString();
        String topic = propsMap.get("kafka.topic").toString();
        String zkRoot = propsMap.get("zk_root").toString();
        String zkPort = propsMap.get("zk_port").toString();
        String zkId = propsMap.get("zk_id").toString();

        BrokerHosts brokerHosts = new ZkHosts(zks);
        SpoutConfig spoutConfig = new SpoutConfig(brokerHosts, topic, zkRoot, zkId);
        spoutConfig.zkServers = Arrays.asList(zks.split(","));
        if (zkPort != null && zkPort.length() > 0) {
            spoutConfig.zkPort = Integer.parseInt(zkPort);
        } else {
            spoutConfig.zkPort = 2181;
        }
        spoutConfig.scheme = new SchemeAsMultiScheme(new MessageScheme());

        TopologyBuilder builder = new TopologyBuilder();
        builder.setSpout("kafkaSpout", new KafkaSpout(spoutConfig));
        builder.setBolt("customCounterBolt", new CustomBolt(), 1).shuffleGrouping("kafkaSpout");

        // Configuration
        Config conf = new Config();
        conf.setDebug(false);

        if (args != null && args.length > 0) {
            // Submit to the cluster
            try {
                StormSubmitter.submitTopologyWithProgressBar("customCounterTopology", conf, builder.createTopology());
            } catch (AlreadyAliveException e) {
                e.printStackTrace();
            } catch (InvalidTopologyException e) {
                e.printStackTrace();
            } catch (AuthorizationException e) {
                e.printStackTrace();
            }
        } else {
            // Run in local mode
            conf.setMaxTaskParallelism(3);
            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology("CustomCounterTopology", conf, builder.createTopology());
        }
    }
}
5. Configuration file: application.properties
kafka.topic=test_one
# zookeeper
zk_hosts=10.10.10.10
zk_root=/kafka
zk_port=2181
# kafka consumer group
zk_id="kafkaspout"
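The `PropertiesUtil` class used by the topology's `main()` is not shown in the post. Based on its call site (`new PropertiesUtil("/application.properties", false)` and `getAllProperty()` returning a `Map`), a minimal reconstruction might look like this; the whole class, including the meaning of the unused boolean flag, is my assumption, not the author's code:

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

// Hypothetical sketch of com.jiankunking.stormkafka.util.PropertiesUtil:
// loads a properties file from the classpath and exposes it as a Map.
public class PropertiesUtil {

    private final Properties props = new Properties();

    // path is a classpath resource such as "/application.properties".
    // The boolean flag from the original call site is assumed to be unused.
    public PropertiesUtil(String path, boolean ignored) {
        try (InputStream in = PropertiesUtil.class.getResourceAsStream(path)) {
            if (in == null) {
                throw new IOException("Resource not found: " + path);
            }
            props.load(in);
        } catch (IOException e) {
            throw new RuntimeException("Cannot load " + path, e);
        }
    }

    // Convenience constructor for loading from an arbitrary stream.
    public PropertiesUtil(InputStream in) throws IOException {
        props.load(in);
    }

    // Returns all entries as a Map, matching the topology's usage:
    // propsMap.get("zk_hosts").toString(), etc.
    public Map<String, Object> getAllProperty() {
        Map<String, Object> map = new HashMap<>();
        for (String name : props.stringPropertyNames()) {
            map.put(name, props.getProperty(name));
        }
        return map;
    }
}
```

Note that with a plain `Properties` loader, the quotes in `zk_id="kafkaspout"` become part of the value; `.properties` files do not need quoting.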
Demo download: http://download.csdn.net/detail/xunzaosiyecao/9829058
https://github.com/JianKunKing/storm-kafka-plugin-demo
Author: jiankunking — Source: http://blog.csdn.net/jiankunking