Kafka Environment Setup and Spring Integration Test


【Preface】 Kafka is a distributed message queue built on the publish-subscribe model. Producers publish messages to topics stored on brokers, and consumers subscribe to whichever topics interest them. Each consumer keeps track of its own offset; when it fetches from Kafka, it pulls only the messages after its current offset.
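To make this pull model concrete, here is a minimal sketch using the plain kafka-clients 0.10 consumer API, independent of the Spring wiring built below (the group id and topic name are placeholders):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PullDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "127.0.0.1:9092");
        props.put("group.id", "demo");            // offsets are tracked per consumer group
        props.put("enable.auto.commit", "true");  // periodically commit the current offset
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("testTopic"));
        while (true) {
            // poll returns only records after the group's current offset
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}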

Advantages of Kafka over other MQs (ActiveMQ, RabbitMQ):

1. High performance and high throughput. A Kafka cluster scales transparently: new servers can simply be added to the cluster. Kafka's throughput far exceeds that of traditional MQ implementations such as ActiveMQ and RabbitMQ, in particular because Kafka supports batch operations.

2. Fault tolerance. Kafka replicates each partition's data across several servers; when a broker fails, the ZooKeeper service notifies producers and consumers, which then switch to the remaining broker nodes.

【Environment Setup】 On Linux (CentOS), install zookeeper-3.3.6 and start the service from its bin directory:

 ./zkServer.sh start 

Then install kafka_2.10-0.10.0.1 and start the broker from its bin directory. [Note: the Kafka client JAR version must match the server version; e.g. a kafka-clients 0.10 client goes with the 0.10.0.1 server installed here.]
./kafka-server-start.sh  ../config/server.properties 
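
This demo publishes to a topic named testTopic. If automatic topic creation is disabled on the broker, create the topic first from the same bin directory (a single partition with replication factor 1 is assumed here, matching the one-broker test setup):

 ./kafka-topics.sh --create --zookeeper 127.0.0.1:2181 --replication-factor 1 --partitions 1 --topic testTopic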
【Program Test】 Project layout:


pom.xml

<?xml version="1.0" encoding="UTF-8"?><project xmlns="http://maven.apache.org/POM/4.0.0"         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">    <modelVersion>4.0.0</modelVersion>    <groupId>org.kafka</groupId>    <artifactId>kafkaDemo</artifactId>    <version>1.0-SNAPSHOT</version>    <properties>        <spring.version>4.3.3.RELEASE</spring.version>        <slf4j.version>1.7.5</slf4j.version>    </properties>    <dependencies>        <dependency>            <groupId>org.springframework</groupId>            <artifactId>spring-beans</artifactId>            <version>${spring.version}</version>        </dependency>        <dependency>            <groupId>org.springframework</groupId>            <artifactId>spring-context</artifactId>            <version>${spring.version}</version>        </dependency>        <dependency>            <groupId>org.springframework</groupId>            <artifactId>spring-core</artifactId>            <version>${spring.version}</version>        </dependency>        <dependency>            <groupId>org.springframework</groupId>            <artifactId>spring-tx</artifactId>            <version>${spring.version}</version>        </dependency>        <dependency>            <groupId>org.slf4j</groupId>            <artifactId>slf4j-log4j12</artifactId>            <version>${slf4j.version}</version>        </dependency>        <dependency>            <groupId>org.springframework.kafka</groupId>            <artifactId>spring-kafka</artifactId>            <version>1.1.1.RELEASE</version>        </dependency>        <dependency>            <groupId>org.springframework.integration</groupId>            <artifactId>spring-integration-kafka</artifactId>            <version>2.1.0.RELEASE</version>        </dependency>        <dependency>            <groupId>log4j</groupId>            <artifactId>log4j</artifactId>            <version>1.2.15</version>            <exclusions>                <exclusion>                    <artifactId>jmxtools</artifactId>                    <groupId>com.sun.jdmk</groupId>                </exclusion>                <exclusion>                    <artifactId>jmxri</artifactId>                    <groupId>com.sun.jmx</groupId>                </exclusion>                <exclusion>                    <artifactId>jms</artifactId>                    <groupId>javax.jms</groupId>                </exclusion>                <exclusion>                    <artifactId>mail</artifactId>                    <groupId>javax.mail</groupId>                </exclusion>            </exclusions>        </dependency>        <dependency>            <groupId>junit</groupId>            <artifactId>junit</artifactId>            <version>4.12</version>            <scope>test</scope>        </dependency>        <dependency>            <groupId>org.springframework</groupId>            <artifactId>spring-test</artifactId>            <version>${spring.version}</version>            <scope>test</scope>        </dependency>        <dependency>            <groupId>com.101tec</groupId>            <artifactId>zkclient</artifactId>            <version>0.4</version>        </dependency>        <dependency>            <groupId>org.apache.zookeeper</groupId>            <artifactId>zookeeper</artifactId>            <version>3.3.2</version>            <exclusions>                <exclusion>                    <artifactId>jmxri</artifactId>                    
<groupId>com.sun.jmx</groupId>                </exclusion>                <exclusion>                    <artifactId>jmxtools</artifactId>                    <groupId>com.sun.jdmk</groupId>                </exclusion>            </exclusions>        </dependency>        <dependency>            <groupId>org.springframework</groupId>            <artifactId>spring-context-support</artifactId>            <version>3.2.6.RELEASE</version>        </dependency>        <dependency>            <groupId>com.alibaba</groupId>            <artifactId>fastjson</artifactId>            <version>1.2.12</version>        </dependency>    </dependencies>    <build>        <resources>            <resource>                <directory>src/main/resources</directory>                <excludes>                    <exclude>*.xml</exclude>                    <exclude>spring/*.xml</exclude>                    <exclude>conf*/*</exclude>                </excludes>            </resource>        </resources>        <plugins>            <plugin>                <artifactId>maven-assembly-plugin</artifactId>                <configuration><!--描述执行的文件路径-->                    <descriptor>src/main/assembly/assembly.xml</descriptor>                    <appendAssemblyId>false</appendAssemblyId>                    <finalName>kafkaDemo</finalName>                </configuration>                <executions><!--执行器,执行器的名称,绑定到package的生命周期上,single表示只执行一次-->                    <execution>                        <id>make-assembly</id>                        <phase>package</phase>                        <goals>                            <goal>single</goal>                        </goals>                    </execution>                </executions>            </plugin>        </plugins>    </build></project>

The ProducerService class (wraps KafkaTemplate):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;

public class ProducerService {

    protected Logger logger = LoggerFactory.getLogger(this.getClass());

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    public void sendMessage(String msg) {
        logger.info("---------------- entering sendMessage ------------");
        kafkaTemplate.sendDefault(msg);  // sends to the template's defaultTopic
        logger.info("---------------- sendMessage done ------------");
    }

    public KafkaTemplate<String, String> getKafkaTemplate() {
        return kafkaTemplate;
    }

    public void setKafkaTemplate(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }
}
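In spring-kafka 1.1.x, sendDefault returns a ListenableFuture, so the fire-and-forget sendMessage above could optionally confirm delivery with a callback. A minimal sketch of such a variant (not part of the original demo):

import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

public void sendMessage(final String msg) {
    ListenableFuture<SendResult<String, String>> future = kafkaTemplate.sendDefault(msg);
    future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
        @Override
        public void onSuccess(SendResult<String, String> result) {
            // log where the broker stored the record
            logger.info("sent {} to partition {} offset {}", msg,
                    result.getRecordMetadata().partition(),
                    result.getRecordMetadata().offset());
        }

        @Override
        public void onFailure(Throwable ex) {
            logger.error("send failed", ex);
        }
    });
}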
MsgProducer, the actual producer:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;

public class MsgProducer {

    Logger logger = LoggerFactory.getLogger(this.getClass());

    @Autowired
    private ProducerService producerService;

    public void sendMsg(String msg) {
        logger.info("---------- producer sending a message -----------");
        logger.info("---------- message content: " + msg);
        producerService.sendMessage(msg);
    }

    public ProducerService getProducerService() {
        return producerService;
    }

    public void setProducerService(ProducerService producerService) {
        this.producerService = producerService;
    }
}

The POJO, Person (package pojo):

import java.io.Serializable;

public class Person implements Serializable {

    public String name;
    public int age;

    public Person() {}

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public int getAge() { return age; }
    public void setAge(int age) { this.age = age; }

    @Override
    public String toString() {
        return "Person{" +
                "name='" + name + '\'' +
                ", age=" + age +
                '}';
    }
}
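The @type field that appears in the logged JSON comes from fastjson's WriteClassName feature. A quick standalone round trip (illustrative only) of what travels through Kafka in this demo:

import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.serializer.SerializerFeature;

public class JsonRoundTrip {
    public static void main(String[] args) {
        Person p = new Person("dh", 1);
        // WriteClassName embeds "@type":"pojo.Person" into the JSON
        String json = JSON.toJSONString(p, SerializerFeature.WriteClassName);
        System.out.println(json);   // {"@type":"pojo.Person","age":1,"name":"dh"}
        Person back = JSON.parseObject(json, Person.class);
        System.out.println(back);   // Person{name='dh', age=1}
    }
}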
The consumer:

import com.alibaba.fastjson.JSON;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.listener.MessageListener;
import pojo.Person;

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ConSumerService implements MessageListener<String, String> {

    ExecutorService executorService = Executors.newFixedThreadPool(2);
    Logger logger = LoggerFactory.getLogger(this.getClass());

    @Override
    public void onMessage(final ConsumerRecord<String, String> record) {
        if (record == null) {
            logger.info("record is null");
            return;
        }
        // hand the record to the pool so the poll thread is not blocked by processing
        executorService.execute(new Runnable() {
            @Override
            public void run() {
                try {
                    logger.info("received a message " + record.toString());
                    Person person = JSON.parseObject(record.value(), Person.class);
                    System.out.println(person);
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        });
    }
}
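Note that handing each record to a two-thread pool means records are processed concurrently, so processing order is not guaranteed; this is visible in the test output below, where Person{name='dh', age=2} is printed after age=10.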
Spring: applicationContext.xml
<?xml version="1.0" encoding="UTF-8"?><!-- - Application context definition for JPetStore's business layer. - Contains bean references to the transaction manager   and to the DAOs in - dataAccessContext-local/jta.xml (see web.xml's "contextConfigLocation"). --><beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"  xmlns:util="http://www.springframework.org/schema/util" xmlns:context="http://www.springframework.org/schema/context"  xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.2.xsd   http://www.springframework.org/schema/context           http://www.springframework.org/schema/context/spring-context-3.2.xsd">  <!-- ========================= GENERAL DEFINITIONS ========================= -->  <context:annotation-config />  <context:component-scan base-package="Producer"/>  <context:component-scan base-package="Consumer"/>  <context:property-placeholder location="classpath:conf/*.properties"/>  <import resource="classpath:spring/kafka-*.xml" /></beans>

kafka-producer.xml:

<?xml version="1.0" encoding="UTF-8"?><beans xmlns="http://www.springframework.org/schema/beans"       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"       xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"       xmlns:int="http://www.springframework.org/schema/integration"       xsi:schemaLocation="       http://www.springframework.org/schema/beans       http://www.springframework.org/schema/beans/spring-beans.xsd       http://www.springframework.org/schema/integration       http://www.springframework.org/schema/integration/spring-integration.xsd       http://www.springframework.org/schema/integration/kafka       http://www.springframework.org/schema/integration/kafka/spring-integration-kafka.xsd       ">    <bean id="kafkaTemplate" class="org.springframework.kafka.core.KafkaTemplate">        <constructor-arg>            <bean class="org.springframework.kafka.core.DefaultKafkaProducerFactory">                <constructor-arg>                    <map>                        <entry key="bootstrap.servers" value="${kafka.producer.bootstrap.servers}"/>                        <entry key="producer.type" value="${producer.type}"/>                        <entry key="group.id" value="${kafka.producer.group.id}"/>                        <entry key="key.serializer" value="org.apache.kafka.common.serialization.StringSerializer"/>                        <entry key="value.serializer" value="org.apache.kafka.common.serialization.StringSerializer"/>                    </map>                </constructor-arg>            </bean>        </constructor-arg>        <constructor-arg name="autoFlush" value="true"/>        <property name="defaultTopic" value="testTopic"/>    </bean>    <bean id="kafkaProducerService" class="Producer.ProducerService"/></beans>

kafka-consumer.xml — in the original post this file was an exact copy of the producer configuration above. A minimal listener-container definition consistent with consumer.properties, the ConSumerService listener, and the test log (spring-kafka 1.1.x API) is sketched here:

<?xml version="1.0" encoding="UTF-8"?><beans xmlns="http://www.springframework.org/schema/beans"       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"       xmlns:int-kafka="http://www.springframework.org/schema/integration/kafka"       xmlns:int="http://www.springframework.org/schema/integration"       xsi:schemaLocation="       http://www.springframework.org/schema/beans       http://www.springframework.org/schema/beans/spring-beans.xsd       http://www.springframework.org/schema/integration       http://www.springframework.org/schema/integration/spring-integration.xsd       http://www.springframework.org/schema/integration/kafka       http://www.springframework.org/schema/integration/kafka/spring-integration-kafka.xsd       ">    <bean id="kafkaTemplate" class="org.springframework.kafka.core.KafkaTemplate">        <constructor-arg>            <bean class="org.springframework.kafka.core.DefaultKafkaProducerFactory">                <constructor-arg>                    <map>                        <entry key="bootstrap.servers" value="${kafka.producer.bootstrap.servers}"/>                        <entry key="producer.type" value="${producer.type}"/>                        <entry key="group.id" value="${kafka.producer.group.id}"/>                        <entry key="key.serializer" value="org.apache.kafka.common.serialization.StringSerializer"/>                        <entry key="value.serializer" value="org.apache.kafka.common.serialization.StringSerializer"/>                    </map>                </constructor-arg>            </bean>        </constructor-arg>        <constructor-arg name="autoFlush" value="true"/>        <property name="defaultTopic" value="testTopic"/>    </bean>    <bean id="kafkaProducerService" class="Producer.ProducerService"/></beans>

kafka-service.xml:

<?xml version="1.0" encoding="UTF-8"?><beans xmlns="http://www.springframework.org/schema/beans" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"       xmlns:context="http://www.springframework.org/schema/context" xmlns:util="http://www.springframework.org/schema/util"       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.2.xsdhttp://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.2.xsd        http://www.springframework.org/schema/util http://www.springframework.org/schema/util/spring-util.xsd">    <bean id="msgProducer" class="Producer.MsgProducer"/></beans>


Property configuration files:

consumer.properties:

zookeeper.connect=127.0.0.1:2181
kafka.consumer.bootstrap.servers=127.0.0.1:9092
##,127.0.0.1:2182,127.0.0.1:2183
# timeout in ms for connecting to zookeeper
zookeeper.connectiontimeout.ms=1000000
kafka.consumer.group.id=0
kafka.consumer.key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
kafka.consumer.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
auto.commit.enable=true
auto.commit.interval.ms=60000
log4j.properties:

log4j.rootLogger=INFO,stdout
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%d %p [%c] - %m%n

producer.properties:

## The broker list may be a subset of the Kafka servers, since the producer only needs one broker to fetch metadata from.
## Although any broker can serve metadata, it is still recommended to list all brokers here.
kafka.producer.bootstrap.servers=127.0.0.1:9092
##,127.0.0.1:9093
## async
producer.type=async
compression.codec=0
kafka.producer.group.id=0
cronExpression=0 0 1 * * ?
## effective only when producer.type=async
#batch.num.messages=100

The test class KafkaTest:

import Producer.MsgProducer;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.serializer.SerializerFeature;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import pojo.Person;

public class KafkaTest {

    public static void main(String[] args) {
        ClassPathXmlApplicationContext applicationContext =
                new ClassPathXmlApplicationContext("classpath:spring/applicationContext.xml");
        MsgProducer msgProducer = applicationContext.getBean("msgProducer", MsgProducer.class);
        for (int i = 0; i < 10; i++) {
            Person person = new Person("dh", i + 1);
            msgProducer.sendMsg(JSON.toJSONString(person,
                    SerializerFeature.BrowserCompatible, SerializerFeature.WriteClassName));
        }
    }
}
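Note that main never closes the application context, so the message listener container started by Spring (phase 0 in the log) keeps its consumer thread alive; that is what allows the same JVM to consume the ten messages it just produced.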


【Test Results】 (log output, abridged; the full client config dumps are trimmed)

2016-12-14 10:07:27,809 INFO [org.springframework.context.support.ClassPathXmlApplicationContext] - Refreshing org.springframework.context.support.ClassPathXmlApplicationContext@37ada1e0: startup date [Wed Dec 14 10:07:27 CST 2016]; root of context hierarchy
2016-12-14 10:07:27,903 INFO [org.springframework.beans.factory.xml.XmlBeanDefinitionReader] - Loading XML bean definitions from class path resource [applicationContext.xml]
2016-12-14 10:07:28,750 INFO [org.springframework.beans.factory.xml.XmlBeanDefinitionReader] - Loading XML bean definitions from file [/home/hd/kafkaDemo/test/kafka-consumer.xml]
2016-12-14 10:07:28,909 INFO [org.springframework.beans.factory.xml.XmlBeanDefinitionReader] - Loading XML bean definitions from file [/home/hd/kafkaDemo/test/kafka-producer.xml]
2016-12-14 10:07:29,016 INFO [org.springframework.beans.factory.xml.XmlBeanDefinitionReader] - Loading XML bean definitions from file [/home/hd/kafkaDemo/test/kafka-service.xml]
2016-12-14 10:07:29,244 INFO [org.springframework.context.support.PropertySourcesPlaceholderConfigurer] - Loading properties file from file [/home/hd/kafkaDemo/test/consumer.properties]
2016-12-14 10:07:29,244 INFO [org.springframework.context.support.PropertySourcesPlaceholderConfigurer] - Loading properties file from file [/home/hd/kafkaDemo/test/log4j.properties]
2016-12-14 10:07:29,245 INFO [org.springframework.context.support.PropertySourcesPlaceholderConfigurer] - Loading properties file from file [/home/hd/kafkaDemo/test/producer.properties]
2016-12-14 10:07:29,701 INFO [org.apache.kafka.clients.consumer.ConsumerConfig] - ConsumerConfig values:
        bootstrap.servers = [localhost:9092]
        group.id = 0
        enable.auto.commit = true
        auto.commit.interval.ms = 60000
        auto.offset.reset = latest
        key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
        value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
        ... (remaining consumer defaults omitted)
2016-12-14 10:07:30,150 INFO [org.apache.kafka.common.utils.AppInfoParser] - Kafka version : 0.10.0.1
2016-12-14 10:07:30,150 INFO [org.apache.kafka.common.utils.AppInfoParser] - Kafka commitId : a7a17cdec9eaa6c5
2016-12-14 10:07:30,405 INFO [org.springframework.context.support.DefaultLifecycleProcessor] - Starting beans in phase 0
2016-12-14 10:07:30,628 INFO [Producer.MsgProducer] - ---------- producer sending a message -----------
2016-12-14 10:07:30,628 INFO [Producer.MsgProducer] - ---------- message content: {"@type":"pojo.Person","age":1,"name":"dh"}
2016-12-14 10:07:30,628 INFO [Producer.ProducerService] - ---------------- entering sendMessage ------------
2016-12-14 10:07:30,634 INFO [org.apache.kafka.clients.producer.ProducerConfig] - ProducerConfig values:
        bootstrap.servers = [localhost:9092]
        acks = 1
        key.serializer = class org.apache.kafka.common.serialization.StringSerializer
        value.serializer = class org.apache.kafka.common.serialization.StringSerializer
        ... (remaining producer defaults omitted)
2016-12-14 10:07:30,695 WARN [org.apache.kafka.clients.producer.ProducerConfig] - The configuration group.id = 0 was supplied but isn't a known config.
2016-12-14 10:07:30,695 WARN [org.apache.kafka.clients.producer.ProducerConfig] - The configuration producer.type = async was supplied but isn't a known config.
2016-12-14 10:07:30,873 INFO [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] - Discovered coordinator localhost:9092 (id: 2147483647 rack: null) for group 0.
2016-12-14 10:07:30,873 INFO [org.springframework.kafka.listener.KafkaMessageListenerContainer] - partitions revoked:[]
2016-12-14 10:07:30,873 INFO [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] - (Re-)joining group 0
2016-12-14 10:07:31,299 INFO [Producer.ProducerService] - ---------------- sendMessage done ------------
... (messages 2 through 10 are sent the same way)
2016-12-14 10:07:31,377 INFO [Producer.ProducerService] - ---------------- sendMessage done ------------
2016-12-14 10:07:32,049 INFO [org.apache.kafka.clients.consumer.internals.AbstractCoordinator] - Successfully joined group 0 with generation 1
2016-12-14 10:07:32,051 INFO [org.apache.kafka.clients.consumer.internals.ConsumerCoordinator] - Setting newly assigned partitions [testTopic-0] for group 0
2016-12-14 10:07:32,051 INFO [org.springframework.kafka.listener.KafkaMessageListenerContainer] - partitions assigned:[testTopic-0]
2016-12-14 10:07:32,501 INFO [Consumer.ConSumerService] - received a message ConsumerRecord(topic = testTopic, partition = 0, offset = 11, CreateTime = 1481681250787, checksum = 1610398819, serialized key size = -1, serialized value size = 43, key = null, value = {"@type":"pojo.Person","age":1,"name":"dh"})
Person{name='dh', age=1}
2016-12-14 10:07:32,516 INFO [Consumer.ConSumerService] - received a message ConsumerRecord(topic = testTopic, partition = 0, offset = 12, CreateTime = 1481681251299, checksum = 3727489338, serialized key size = -1, serialized value size = 43, key = null, value = {"@type":"pojo.Person","age":2,"name":"dh"})
... (offsets 13 through 19 are received the same way)
2016-12-14 10:07:32,525 INFO [Consumer.ConSumerService] - received a message ConsumerRecord(topic = testTopic, partition = 0, offset = 20, CreateTime = 1481681251373, checksum = 795242757, serialized key size = -1, serialized value size = 44, key = null, value = {"@type":"pojo.Person","age":10,"name":"dh"})
Person{name='dh', age=10}
Person{name='dh', age=2}

All ten messages land on partition testTopic-0 (offsets 11-20, since the topic evidently already held records from earlier runs) and are deserialized back into Person objects. The two WARN lines confirm that producer.type and group.id are ignored by the 0.10 producer client, and the out-of-order Person printouts (age=2 appears last) reflect the consumer's thread-pool hand-off.

【Summary】 Under a sudden surge of traffic, an application must keep functioning, yet such bursts are rare; provisioning enough standby resources to handle peak load at all times would be a huge waste. A message queue lets the application absorb sudden access pressure instead of collapsing under an overload of requests: high-concurrency requests are placed in the queue, flattening the peak and improving the system's overall throughput.

【Appendix】 Further reading: an in-depth analysis of Kafka, and the principles of ZooKeeper (linked in the original post).
