Kafka Exactly-Once
Kafka is a high-performance message broker that supports real-time, batch, and stream processing, and it is now used in web-scale applications at many companies. This article shows how to build Kafka client programs with the Kafka API, covering clients with three delivery semantics: at-most-once, at-least-once, and exactly-once.
First, install Kafka on your machine (see the quick start guide). The rest of this article assumes Kafka is installed and running, with ZooKeeper on its default port 2181 and Kafka on its default port 9092. Once Kafka is up, create a topic named "normal-topic" with 2 partitions:
bin/kafka-topics --zookeeper localhost:2181 --create --topic normal-topic --partitions 2 --replication-factor 1
Check that the topic was created:
bin/kafka-topics --list --topic normal-topic --zookeeper localhost:2181
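To also inspect the partition and replica assignment of the new topic, kafka-topics supports a --describe option (an optional extra check, not part of the original steps):

bin/kafka-topics --describe --topic normal-topic --zookeeper localhost:2181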
With the prerequisites in place, the next step is to build the Kafka clients.
1. Producer
The producer sends messages to a topic; the consumer reads messages from the topic and processes them. The producer code:
import java.io.IOException;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerExample {

    public static void main(String[] str) throws InterruptedException, IOException {
        System.out.println("Starting ProducerExample ...");
        sendMessages();
    }

    private static void sendMessages() throws InterruptedException, IOException {
        Producer<String, String> producer = createProducer();
        sendMessages(producer);
        // Allow the producer to complete sending of the messages before program exit.
        Thread.sleep(20);
        // Flush any buffered records and release resources.
        producer.close();
    }

    private static Producer<String, String> createProducer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "all");
        props.put("retries", 0);
        // Number of records to batch before sending.
        props.put("batch.size", 10);
        props.put("linger.ms", 1);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return new KafkaProducer<>(props);
    }

    private static void sendMessages(Producer<String, String> producer) {
        String topic = "normal-topic";
        int partition = 0;
        long record = 1;
        for (int i = 1; i <= 10; i++) {
            producer.send(new ProducerRecord<String, String>(
                    topic, partition, Long.toString(record), Long.toString(record++)));
        }
    }
}
2. Consumer
A consumer can register with Kafka in several ways:
>1. subscribe. When a consumer registers via subscribe, Kafka rebalances partitions across the group's consumers (adding or removing topics also triggers a rebalance). This method has two forms:
(a) Two arguments: a topic and a listener. When registered this way, Kafka notifies the consumer whenever a rebalance happens, and the listener can be used to manage offsets manually (exactly-once requires manual offset management, at least in current versions).
(b) A single topic argument.
>2. The assign method. With assign, Kafka performs no rebalancing.
The at-most-once and at-least-once examples below both use method 1(b); exactly-once can be implemented in two ways, via 1(a) or via 2. A minimal sketch of the three registration calls is shown next.
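The sketch below shows the three registration styles side by side; they are mutually exclusive on a single consumer instance. It assumes the imports used in the full examples below, and MyConsumerRebalancerListener is the listener implemented in the exactly-once section:

// 1(a): subscribe with a rebalance listener; Kafka notifies this consumer on rebalances.
consumer.subscribe(Arrays.asList("normal-topic"), new MyConsumerRebalancerListener(consumer));
// 1(b): subscribe with only the topic; Kafka still rebalances, but without callbacks.
consumer.subscribe(Arrays.asList("normal-topic"));
// 2: assign explicit partitions; Kafka performs no group rebalancing.
consumer.assign(Arrays.asList(new TopicPartition("normal-topic", 0)));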
At-Most-Once (0 or 1 time)
Kafka consumers are at-most-once by default. Configure the consumer as follows:
>1. Set 'enable.auto.commit' to true.
>2. Set 'auto.commit.interval.ms' to a small value.
>3. Do not call consumer.commitSync() from the consumer. Kafka then auto-commits offsets at the configured interval, so an offset can be committed before its record has actually been processed; if the consumer crashes in between, that record is lost, which is what makes this at-most-once.
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AtMostOnceConsumer {

    public static void main(String[] str) throws InterruptedException {
        System.out.println("Starting AtMostOnceConsumer ...");
        execute();
    }

    private static void execute() throws InterruptedException {
        KafkaConsumer<String, String> consumer = createConsumer();
        // Subscribe to all partitions in the topic. 'assign' could be used here
        // instead of 'subscribe' to subscribe to specific partitions.
        consumer.subscribe(Arrays.asList("normal-topic"));
        processRecords(consumer);
    }

    private static KafkaConsumer<String, String> createConsumer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        String consumeGroup = "cg1";
        props.put("group.id", consumeGroup);
        // Set this property, if auto commit should happen.
        props.put("enable.auto.commit", "true");
        // Auto commit interval; Kafka commits offsets at this interval.
        props.put("auto.commit.interval.ms", "101");
        // This limits the amount of data fetched per partition in each poll.
        props.put("max.partition.fetch.bytes", "135");
        // Set this if you want to always read from the beginning.
        // props.put("auto.offset.reset", "earliest");
        props.put("heartbeat.interval.ms", "3000");
        props.put("session.timeout.ms", "6001");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return new KafkaConsumer<String, String>(props);
    }

    private static void processRecords(KafkaConsumer<String, String> consumer) throws InterruptedException {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            long lastOffset = 0;
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("\n\roffset = %d, key = %s, value = %s",
                        record.offset(), record.key(), record.value());
                lastOffset = record.offset();
            }
            System.out.println("lastOffset read: " + lastOffset);
            process();
        }
    }

    private static void process() throws InterruptedException {
        // Create some delay to simulate processing of the message.
        Thread.sleep(20);
    }
}
At-Least-Once (1 or more times)
>1. Set 'enable.auto.commit' to false, or
set 'enable.auto.commit' to true with a very large 'auto.commit.interval.ms'.
>2. Call consumer.commitSync() after processing the records. Since the offset is committed only after processing succeeds, a crash between processing and commit makes the consumer re-read the uncommitted records on restart, so every record is delivered at least once (possibly more than once).
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AtLeastOnceConsumer {

    public static void main(String[] str) throws InterruptedException {
        System.out.println("Starting AutoOffsetGuranteedAtLeastOnceConsumer ...");
        execute();
    }

    private static void execute() throws InterruptedException {
        KafkaConsumer<String, String> consumer = createConsumer();
        // Subscribe to all partitions in the topic. 'assign' could be used here
        // instead of 'subscribe' to subscribe to specific partitions.
        consumer.subscribe(Arrays.asList("normal-topic"));
        processRecords(consumer);
    }

    private static KafkaConsumer<String, String> createConsumer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        String consumeGroup = "cg1";
        props.put("group.id", consumeGroup);
        // Set this property, if auto commit should happen.
        props.put("enable.auto.commit", "true");
        // Make the auto commit interval large (this config is an int, so it must stay
        // within the int range) so that auto commit effectively never happens; we
        // control the offset commit via consumer.commitSync() after processing.
        props.put("auto.commit.interval.ms", "999999999");
        // This limits the amount of data fetched per partition in each poll.
        props.put("max.partition.fetch.bytes", "135");
        props.put("heartbeat.interval.ms", "3000");
        props.put("session.timeout.ms", "6001");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return new KafkaConsumer<String, String>(props);
    }

    private static void processRecords(KafkaConsumer<String, String> consumer) throws InterruptedException {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            long lastOffset = 0;
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("\n\roffset = %d, key = %s, value = %s",
                        record.offset(), record.key(), record.value());
                lastOffset = record.offset();
            }
            System.out.println("lastOffset read: " + lastOffset);
            process();
            // The call below is what controls the offset commit. Make it only after
            // the business processing has finished.
            consumer.commitSync();
        }
    }

    private static void process() throws InterruptedException {
        // Create some delay to simulate processing of the record.
        Thread.sleep(20);
    }
}
Exactly-Once
The example below demonstrates Kafka's exactly-once semantics with the consumer registered via subscribe. Exactly-once requires manual offset management; set it up as follows:
1. Set enable.auto.commit = false.
2. Do not manually commit the offset after processing a message; the offset is kept in external storage instead.
3. Register the consumer to the topic via subscribe.
4. Implement the ConsumerRebalanceListener interface and use consumer.seek(topicPartition, offset) so that reading starts from the stored offset of each topic partition.
5. Store the offset together with the processed message atomically; a transactional mechanism is recommended.
import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ExactlyOnceDynamicConsumer {

    private static OffsetManager offsetManager = new OffsetManager("storage2");

    public static void main(String[] str) throws InterruptedException {
        System.out.println("Starting ManualOffsetGuaranteedExactlyOnceReadingDynamicallyBalancedPartitionConsumer ...");
        readMessages();
    }

    private static void readMessages() throws InterruptedException {
        KafkaConsumer<String, String> consumer = createConsumer();
        // Manually control the offset, but register the consumer with the topic to get
        // dynamically assigned partitions. Inside MyConsumerRebalancerListener,
        // consumer.seek(topicPartition, offset) is used to control the offset.
        consumer.subscribe(Arrays.asList("normal-topic"),
                new MyConsumerRebalancerListener(consumer));
        processRecords(consumer);
    }

    private static KafkaConsumer<String, String> createConsumer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        String consumeGroup = "cg3";
        props.put("group.id", consumeGroup);
        // Below is the key setting to turn off the auto commit.
        props.put("enable.auto.commit", "false");
        props.put("heartbeat.interval.ms", "2000");
        props.put("session.timeout.ms", "6001");
        // Control the maximum data returned on each poll; make sure this value is
        // bigger than the maximum size of a single record.
        props.put("max.partition.fetch.bytes", "140");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return new KafkaConsumer<String, String>(props);
    }

    private static void processRecords(KafkaConsumer<String, String> consumer) {
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s\n",
                        record.offset(), record.key(), record.value());
                offsetManager.saveOffsetInExternalStore(record.topic(), record.partition(),
                        record.offset());
            }
        }
    }
}

import java.util.Collection;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.common.TopicPartition;

public class MyConsumerRebalancerListener implements ConsumerRebalanceListener {

    private OffsetManager offsetManager = new OffsetManager("storage2");
    private Consumer<String, String> consumer;

    public MyConsumerRebalancerListener(Consumer<String, String> consumer) {
        this.consumer = consumer;
    }

    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
        // Persist the current position of every revoked partition to the external store.
        for (TopicPartition partition : partitions) {
            offsetManager.saveOffsetInExternalStore(partition.topic(), partition.partition(),
                    consumer.position(partition));
        }
    }

    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // Seek every newly assigned partition to the offset saved in the external store.
        for (TopicPartition partition : partitions) {
            consumer.seek(partition,
                    offsetManager.readOffsetFromExternalStore(partition.topic(), partition.partition()));
        }
    }
}

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class OffsetManager {

    private String storagePrefix;

    public OffsetManager(String storagePrefix) {
        this.storagePrefix = storagePrefix;
    }

    /**
     * Overwrite the offset for the topic in an external storage.
     *
     * @param topic     - Topic name.
     * @param partition - Partition of the topic.
     * @param offset    - Offset to be stored.
     */
    void saveOffsetInExternalStore(String topic, int partition, long offset) {
        try {
            FileWriter writer = new FileWriter(storageName(topic, partition), false);
            BufferedWriter bufferedWriter = new BufferedWriter(writer);
            bufferedWriter.write(offset + "");
            bufferedWriter.flush();
            bufferedWriter.close();
        } catch (Exception e) {
            e.printStackTrace();
            throw new RuntimeException(e);
        }
    }

    /**
     * @return The last offset + 1 for the provided topic and partition.
     */
    long readOffsetFromExternalStore(String topic, int partition) {
        try (Stream<String> stream = Files.lines(Paths.get(storageName(topic, partition)))) {
            return Long.parseLong(stream.collect(Collectors.toList()).get(0)) + 1;
        } catch (Exception e) {
            e.printStackTrace();
        }
        return 0;
    }

    private String storageName(String topic, int partition) {
        return storagePrefix + "-" + topic + "-" + partition;
    }
}
The example above stores offsets in files. To store them in a database instead, modify the OffsetManager class to read and write the offsets through the database; a sketch of such a variant follows.
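Below is a minimal sketch of a database-backed variant, assuming MySQL over JDBC; the DbOffsetManager class, the consumer_offsets table, and the JDBC URL are illustrative assumptions, not part of the original example:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hypothetical database-backed offset store. Assumes a table created as:
//   CREATE TABLE consumer_offsets (topic VARCHAR(255), part INT, offs BIGINT,
//                                  PRIMARY KEY (topic, part));
public class DbOffsetManager {

    private final String jdbcUrl; // e.g. "jdbc:mysql://localhost:3306/kafka_offsets" (illustrative)

    public DbOffsetManager(String jdbcUrl) {
        this.jdbcUrl = jdbcUrl;
    }

    void saveOffsetInExternalStore(String topic, int partition, long offset) {
        // Upsert the latest offset for this topic partition (REPLACE INTO is MySQL syntax).
        String sql = "REPLACE INTO consumer_offsets (topic, part, offs) VALUES (?, ?, ?)";
        try (Connection conn = DriverManager.getConnection(jdbcUrl);
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, topic);
            ps.setInt(2, partition);
            ps.setLong(3, offset);
            ps.executeUpdate();
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }

    long readOffsetFromExternalStore(String topic, int partition) {
        String sql = "SELECT offs FROM consumer_offsets WHERE topic = ? AND part = ?";
        try (Connection conn = DriverManager.getConnection(jdbcUrl);
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, topic);
            ps.setInt(2, partition);
            try (ResultSet rs = ps.executeQuery()) {
                // Resume from the record after the last one processed; start at 0 if unknown.
                return rs.next() ? rs.getLong(1) + 1 : 0L;
            }
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }
}

To actually get exactly-once with a database, write the processed result and the offset in one database transaction, as step 5 above recommends; a plain file store cannot provide that atomicity.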