Setting up a local Kafka environment, with a local Java test client


1. Download Kafka

https://www.apache.org/dyn/closer.cgi?path=/kafka/0.8.2.1/kafka_2.9.2-0.8.2.1.tgz

2. Extract the archive

tar -zxf kafka_2.9.2-0.8.2.1.tgz


3. Edit the configuration files under $KAFKA_HOME/config:

(1) Broker configuration

vim server.properties

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
# Set this to the host's IP address; otherwise the broker hands clients its hostname, which they may not be able to resolve
host.name=192.168.52.129

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
# Directory where Kafka keeps its topic data (the commit log)
log.dirs=/var/log/kafka

# ZooKeeper address and port
zookeeper.connect=192.168.52.129:2181

(2) ZooKeeper configuration

vim zookeeper.properties
# the directory where the snapshot is stored.
dataDir=/var/zookeeper
dataLogDir=/var/log/zookeeper
# the port at which the clients will connect
clientPort=2181
# disable the per-ip limit on the number of connections since this is a non-production config
maxClientCnxns=100

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
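
Both configurations point at directories under /var. Depending on version and permissions, the servers may not be able to create these directories themselves, so it is safest to create them (and make them writable) up front; the paths below are taken from the two files above:

mkdir -p /var/log/kafka /var/zookeeper /var/log/zookeeper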

4. Start ZooKeeper and Kafka. I put the two commands in a small script, startall.sh:

#!/bin/sh
# run from $KAFKA_HOME/bin (the paths below are relative to it)
# make sure the log directories exist
mkdir -p /tmp/log/zookeeper /tmp/log/kafka

# start zookeeper
sh zookeeper-server-start.sh ../config/zookeeper.properties > /tmp/log/zookeeper/zk-server-start.log &

# give zookeeper a few seconds to come up before starting the broker
sleep 3

# start kafka
sh kafka-server-start.sh ../config/server.properties > /tmp/log/kafka/kafka-server-start.log &

5. Using Kafka

(1) Create a topic named test

Let's create a topic named "test" with a single partition and only one replica:
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
We can now see that topic if we run the list topic command:
> bin/kafka-topics.sh --list --zookeeper localhost:2181
test
Alternatively, instead of manually creating topics you can also configure your brokers to auto-create topics when a non-existent topic is published to.
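Auto-creation is controlled by broker settings in server.properties; a minimal sketch (in 0.8.x, auto.create.topics.enable already defaults to true):

# create topics on first use, with these defaults
auto.create.topics.enable=true
num.partitions=1
default.replication.factor=1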
(2) Send some messages

Kafka comes with a command line client that will take input from a file or from standard input and send it out as messages to the Kafka cluster. By default each line will be sent as a separate message.
Run the producer and then type a few messages into the console to send to the server.

> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
This is a message
This is another message
(3) Start a consumer

Kafka also has a command line consumer that will dump out messages to standard output.
> bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
This is a message
This is another message
Takeaways:
1. The producer is started with the Kafka broker's port (9092), while the consumer is started with the ZooKeeper port (2181).
2. A topic must be created before it can be used (unless auto-creation is enabled, as described above).
3. A topic's metadata is stored as nodes in ZooKeeper; the message data itself lives in log files on the broker (see the example below).
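
For example, you can inspect those nodes with the ZooKeeper shell bundled with Kafka (assuming the broker and ZooKeeper from the setup above are running):

> bin/zookeeper-shell.sh localhost:2181
ls /brokers/topics
[test]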


II. Testing with Java code

1. Required dependencies

http://download.csdn.net/detail/alexander_zhou/9192011
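
If that download link goes stale, the same client library should also be resolvable from Maven Central; a sketch of the dependency, assuming the kafka_2.9.2 0.8.2.1 build used above:

<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka_2.9.2</artifactId>
  <version>0.8.2.1</version>
</dependency>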


2. Producer:

package cn.com.kafka.test;

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class KafkaProducer {
 private final Producer<String, String> producer;
 public final static String TOPIC = "test";

 private KafkaProducer() {
  Properties props = new Properties();
  // the Kafka broker list: note this is the broker port (9092), not ZooKeeper's
  props.put("metadata.broker.list", "192.168.52.129:9092");

  // serializer class for message values
  props.put("serializer.class", "kafka.serializer.StringEncoder");
  // serializer class for message keys
  props.put("key.serializer.class", "kafka.serializer.StringEncoder");

  // request.required.acks
  // 0, which means that the producer never waits for an acknowledgement
  // from the broker (the same behavior as 0.7). This option provides the
  // lowest latency but the weakest durability guarantees (some data will
  // be lost when a server fails).
  // 1, which means that the producer gets an acknowledgement after the
  // leader replica has received the data. This option provides better
  // durability as the client waits until the server acknowledges the
  // request as successful (only messages that were written to the
  // now-dead leader but not yet replicated will be lost).
  // -1, which means that the producer gets an acknowledgement after all
  // in-sync replicas have received the data. This option provides the
  // best durability, we guarantee that no messages will be lost as long
  // as at least one in sync replica remains.
  props.put("request.required.acks", "-1");

  producer = new Producer<String, String>(new ProducerConfig(props));
 }

 void produce() {
  int messageNo = 1000;
  final int COUNT = 2000;

  while (messageNo < COUNT) {
   String key = String.valueOf(messageNo);
   String data = "hello kafka message " + key;
   producer.send(new KeyedMessage<String, String>(TOPIC, key, data));
   System.out.println(data);
   messageNo++;
  }
  // flush and release the producer's resources when done
  producer.close();
 }

 public static void main(String[] args) {
  new KafkaProducer().produce();
 }
}


3. Consumer

package cn.com.kafka.test;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.serializer.StringDecoder;
import kafka.utils.VerifiableProperties;

public class KafkaConsumer {
 private final ConsumerConnector consumer;

 private KafkaConsumer() {
  Properties props = new Properties();
  // ZooKeeper address and port (the high-level consumer connects to ZooKeeper, not to the broker)
  props.put("zookeeper.connect", "192.168.52.129:2181");
  // consumers sharing a group.id form one consumer group
  props.put("group.id", "group1");
  // use a fairly large session timeout; connecting to ZooKeeper can take a while
  props.put("zookeeper.session.timeout.ms", "10000");
  props.put("zookeeper.sync.time.ms", "200");
  props.put("auto.commit.interval.ms", "1000");
  // add props.put("auto.offset.reset", "smallest") to consume a topic from the beginning

  ConsumerConfig config = new ConsumerConfig(props);

  consumer = kafka.consumer.Consumer.createJavaConsumerConnector(config);
 }

 void consume() {
  Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
  // consume the topic with a single stream (one thread)
  topicCountMap.put(KafkaProducer.TOPIC, Integer.valueOf(1));

  StringDecoder keyDecoder = new StringDecoder(new VerifiableProperties());
  StringDecoder valueDecoder = new StringDecoder(
    new VerifiableProperties());

  Map<String, List<KafkaStream<String, String>>> consumerMap = consumer
    .createMessageStreams(topicCountMap, keyDecoder, valueDecoder);
  KafkaStream<String, String> stream = consumerMap.get(
    KafkaProducer.TOPIC).get(0);
  ConsumerIterator<String, String> it = stream.iterator();
  while (it.hasNext())
   System.out.println("consumer is:"+it.next().message());
 }

 public static void main(String[] args) {
  new KafkaConsumer().consume();
 }

}
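
Note that the consume() loop blocks indefinitely. A minimal sketch for stopping it cleanly (the shutdown hook is not in the code above; it would go inside KafkaConsumer after the connector is created) is to call shutdown() on the connector:

 // commit offsets and close the ZooKeeper connection on JVM exit,
 // which also ends the blocking iterator
 Runtime.getRuntime().addShutdownHook(new Thread() {
  public void run() {
   consumer.shutdown();
  }
 });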


Note: connecting to ZooKeeper can take a surprisingly long time, hence the larger session timeout above.






