Specifying Partitions with a Kafka Producer


When we use a Kafka producer, the system assigns partitions by default, but we can also control the message key so that messages are routed to a specific partition.

First, we create a SimplePartitioner:

package com.teamsun.kafka.m001;

import kafka.producer.Partitioner;
import kafka.utils.VerifiableProperties;

public class SimplePartitioner implements Partitioner {

    // The old (0.8.x) producer API requires this constructor signature.
    public SimplePartitioner(VerifiableProperties props) {
    }

    @Override
    public int partition(Object key, int numPartitions) {
        String k = (String) key;
        // Map the key's hash code onto one of the available partitions.
        int partition = Math.abs(k.hashCode()) % numPartitions;
        System.out.println(partition);
        return partition;
    }
}

In this class, the target partition is determined by the key; here we take the key's hash code modulo the number of partitions.
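The key-to-partition mapping can be checked without a running broker. The sketch below is plain Java with a hypothetical `partitionFor` helper that mirrors the partitioner's logic; it also shows a safer variant, because `Math.abs(Integer.MIN_VALUE)` is still negative, so a pathological hash code could yield a negative partition index:

```java
public class PartitionDemo {

    // Mirrors SimplePartitioner.partition(): hash the key, then take modulo.
    static int partitionFor(String key, int numPartitions) {
        return Math.abs(key.hashCode()) % numPartitions;
    }

    // Safer variant: Math.abs(Integer.MIN_VALUE) == Integer.MIN_VALUE (negative),
    // so clearing the sign bit avoids a negative result for any hash code.
    static int safePartitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // Show where a few sample keys land across 4 partitions.
        for (int i = 0; i < 5; i++) {
            String k = "key" + i;
            System.out.println(k + " -> partition " + partitionFor(k, 4));
        }
    }
}
```

Because the mapping depends only on the key, the same key always lands in the same partition, which is what preserves per-key ordering.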

Next, create an interface holding the Kafka configuration constants:

package com.teamsun.kafka.m001;

public interface KafkaProperties {

    final static String zkConnect = "hadoop0:42182,hadoop1:42182,hadoop2:42182,hadoop3:42182";
    final static String groupId1 = "group1";
    final static String topic = "test3";
    final static String kafkaServerURL = "hadoop0,hadoop1,hadoop2,hadoop3";
    final static int kafkaServerPort = 9092;
    final static int kafkaProducerBufferSize = 64 * 1024;
    final static int connectionTimeOut = 20000;
    final static int reconnectInterval = 10000;
    final static String clientId = "SimpleConsumerDemoClient";
}

Create the producer:

package com.teamsun.kafka.m001;

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class PartitionerProducer {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("metadata.broker.list",
                "hadoop0:9092,hadoop1:9092,hadoop2:9092,hadoop3:9092");
        props.put("partitioner.class", "com.teamsun.kafka.m001.SimplePartitioner");
        props.put("request.required.acks", "1");
        Producer<String, String> producer = new Producer<String, String>(
                new ProducerConfig(props));
        String topic = "test3";
        for (int i = 0; i <= 1000000; i++) {
            String k = "key" + i;
            String v = k + "--value" + i;
            producer.send(new KeyedMessage<String, String>(topic, k, v));
            System.out.println(k + v);
        }
        producer.close();
    }
}

Note that the `partitioner.class` property references the SimplePartitioner created earlier.

Create the consumer:

package com.teamsun.kafka.m001;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;

public class KafkaConsumer1 extends Thread {

    private final ConsumerConnector consumer;
    private final String topic;

    public KafkaConsumer1(String topic) {
        consumer = kafka.consumer.Consumer
                .createJavaConsumerConnector(createConsumerConfig());
        this.topic = topic;
    }

    private static ConsumerConfig createConsumerConfig() {
        Properties props = new Properties();
        props.put("zookeeper.connect", KafkaProperties.zkConnect);
        props.put("group.id", KafkaProperties.groupId1);
        props.put("zookeeper.session.timeout.ms", "40000");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000");
        return new ConsumerConfig(props);
    }

    @Override
    public void run() {
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, new Integer(1));
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap = consumer
                .createMessageStreams(topicCountMap);
        KafkaStream<byte[], byte[]> stream = consumerMap.get(topic).get(0);
        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        while (it.hasNext()) {
            System.out.println("1receive:" + new String(it.next().message()));
            // try {
            //     sleep(300); // delay 300 ms per message
            // } catch (InterruptedException e) {
            //     e.printStackTrace();
            // }
        }
    }
}

To have several consumers consume the topic cooperatively, start them with the same group.id; Kafka then divides the topic's partitions among the members of that group.
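The group semantics can be illustrated with a toy assignment simulation. This is only a sketch: the real partition assignment is performed by Kafka's rebalancing protocol, and `assignRoundRobin` is a hypothetical helper, not a Kafka API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class GroupAssignmentDemo {

    // Toy round-robin assignment: partition p goes to consumer (p % consumerCount),
    // so every partition is owned by exactly one consumer in the group.
    static Map<String, List<Integer>> assignRoundRobin(int partitionCount, int consumerCount) {
        Map<String, List<Integer>> assignment = new HashMap<String, List<Integer>>();
        for (int c = 0; c < consumerCount; c++) {
            assignment.put("consumer-" + c, new ArrayList<Integer>());
        }
        for (int p = 0; p < partitionCount; p++) {
            assignment.get("consumer-" + (p % consumerCount)).add(p);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // 4 partitions shared by 2 consumers in the same group.
        System.out.println(assignRoundRobin(4, 2));
    }
}
```

The key property this illustrates: within one group each partition is consumed by at most one consumer, so adding consumers (up to the partition count) spreads the load rather than duplicating messages.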
