Kafka Java demo
Source: Internet · Editor: 程序博客网 · Date: 2024/06/06 05:38
Kafka Study 8: a Kafka Java producer/consumer demo
Kafka is a very high-throughput messaging system written in Scala, and producing and consuming messages with it differs somewhat from ordinary message queues. The demo program below is offered for reference. For installing Kafka itself, see the official documentation.
First, create a Maven project and add the Kafka client to the pom with the following dependency:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>0.8.2.1</version>
</dependency>
We are using the 0.8 client (0.8.2.1 above). First, the producer code:
package com.telewave.kafka.util;

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

/**
 * A simple producer built on the Kafka 0.8 Scala client.
 */
public class KafkaProducer {

    private final Producer<String, String> producer;

    public final static String TOPIC = "TestTopic";

    private KafkaProducer() {
        Properties props = new Properties();

        // Broker list as host:port pairs (not just the port)
        props.put("metadata.broker.list", "192.168.168.200:9092");

        // Serializer class for message values
        props.put("serializer.class", "kafka.serializer.StringEncoder");

        // Serializer class for message keys
        props.put("key.serializer.class", "kafka.serializer.StringEncoder");

        // request.required.acks:
        // 0  - the producer never waits for an acknowledgement from the
        //      broker (the same behavior as 0.7). Lowest latency but the
        //      weakest durability guarantee (some data will be lost when
        //      a server fails).
        // 1  - the producer gets an acknowledgement after the leader
        //      replica has received the data. Better durability; only
        //      messages written to a now-dead leader but not yet
        //      replicated will be lost.
        // -1 - the producer gets an acknowledgement after all in-sync
        //      replicas have received the data. Best durability; no
        //      messages are lost as long as at least one in-sync replica
        //      remains.
        props.put("request.required.acks", "-1");

        producer = new Producer<String, String>(new ProducerConfig(props));
    }

    void produce() {
        int messageNo = 1000;
        final int COUNT = 10000;

        while (messageNo < COUNT) {
            String key = String.valueOf(messageNo);
            String data = "hello kafka message " + key;
            producer.send(new KeyedMessage<String, String>(TOPIC, key, data));
            System.out.println(data);
            messageNo++;
        }

        // Release the producer's network resources when done
        producer.close();
    }

    public static void main(String[] args) {
        new KafkaProducer().produce();
    }
}
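Because every message above is sent with a key, the 0.8 producer's default partitioner (`kafka.producer.DefaultPartitioner`) chooses the partition as the non-negative key hash modulo the partition count. A minimal pure-Java sketch of that rule follows; the class and method names here are ours, not Kafka's:

```java
// Sketch of the key-based partitioning rule used by Kafka 0.8's
// DefaultPartitioner: partition = abs(key.hashCode()) % numPartitions,
// where abs masks the sign bit (so Integer.MIN_VALUE stays non-negative).
public class Partitioning {

    static int partitionFor(String key, int numPartitions) {
        // Mask the sign bit instead of Math.abs: Math.abs(Integer.MIN_VALUE)
        // is still negative, the mask never is.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // The demo's keys are "1000".."9999"; with 3 partitions each key
        // maps deterministically to one partition:
        for (String key : new String[] { "1000", "1001", "1002" }) {
            System.out.println(key + " -> partition " + partitionFor(key, 3));
        }
    }
}
```

This is why re-sending a message with the same key lands it on the same partition, which in turn preserves per-key ordering.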
Next, the consumer implementation:
package com.telewave.kafka.util;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.serializer.StringDecoder;
import kafka.utils.VerifiableProperties;

public class KafkaConsumer {

    private final ConsumerConnector consumer;

    public KafkaConsumer() {
        Properties props = new Properties();

        // ZooKeeper connection string
        props.put("zookeeper.connect", "192.168.168.200:2181");

        // group.id identifies the consumer group this consumer belongs to
        props.put("group.id", "jd-group");

        // ZooKeeper session timeout and sync time
        props.put("zookeeper.session.timeout.ms", "4000");
        props.put("zookeeper.sync.time.ms", "200");

        // How often consumed offsets are committed to ZooKeeper
        props.put("auto.commit.interval.ms", "1000");

        // Where to start when the group has no committed offset
        props.put("auto.offset.reset", "largest");

        ConsumerConfig config = new ConsumerConfig(props);
        consumer = kafka.consumer.Consumer.createJavaConsumerConnector(config);
    }

    public void consume() {
        // Request one stream (one consuming thread) for the topic
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put("TestTopic", 1);

        StringDecoder keyDecoder = new StringDecoder(new VerifiableProperties());
        StringDecoder valueDecoder = new StringDecoder(new VerifiableProperties());

        Map<String, List<KafkaStream<String, String>>> consumerMap =
                consumer.createMessageStreams(topicCountMap, keyDecoder, valueDecoder);
        KafkaStream<String, String> stream = consumerMap.get("TestTopic").get(0);

        // Blocks waiting for new messages until the connector is shut down
        ConsumerIterator<String, String> it = stream.iterator();
        while (it.hasNext()) {
            System.out.println(it.next().message());
        }
    }

    public static void main(String[] args) {
        new KafkaConsumer().consume();
    }
}
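The `group.id` above is what ties consumers together: the partitions of a topic are divided among the consumers of one group, so each message is delivered to only one consumer in the group. The 0.8 high-level consumer divides partitions by range: contiguous runs of partition ids, with the first few consumers taking one extra when the counts don't divide evenly. A pure-Java sketch of that idea (class and method names are ours, for illustration only):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of range-style partition assignment as done by the 0.8
// high-level consumer: contiguous partition ranges, one per consumer,
// with the first (numPartitions % numConsumers) consumers getting one extra.
public class RangeAssignment {

    static List<List<Integer>> assign(int numPartitions, int numConsumers) {
        List<List<Integer>> result = new ArrayList<List<Integer>>();
        int perConsumer = numPartitions / numConsumers;
        int extra = numPartitions % numConsumers;
        int next = 0;
        for (int c = 0; c < numConsumers; c++) {
            int count = perConsumer + (c < extra ? 1 : 0);
            List<Integer> mine = new ArrayList<Integer>();
            for (int i = 0; i < count; i++) {
                mine.add(next++);
            }
            result.add(mine);
        }
        return result;
    }

    public static void main(String[] args) {
        // 5 partitions over 2 consumers: first gets [0, 1, 2], second [3, 4]
        System.out.println(assign(5, 2));
    }
}
```

Starting a second `KafkaConsumer` with the same `group.id` would trigger a rebalance along these lines; with only one partition, as in this demo, the second consumer would simply sit idle.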