What exceptions do the Client and Broker each raise when a Kafka producer sends a message larger than the broker's size limit?
A few days ago I hit a bug: the send log showed java.io.IOException: Broken pipe. Digging in, I found that this exception appears whenever the Kafka producer sends a message body larger than the broker's configured limit. Notably, a single send does not surface the exception; it only shows up on continuous sends. (A plausible reason, given the "Ack=0" in the broker log below: with acks disabled the first write appears to succeed, the broker then closes the connection, and the next write hits the dead socket.)
This post records what exceptions the client and the broker each raise in this situation.
Kafka's broker configuration has a parameter, message.max.bytes, which caps the size of a single message; the broker in this test uses a limit of 1000012 bytes, as the broker log below shows.
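For reference, this cap and its related knobs live in the broker's server.properties. A minimal sketch (values are illustrative, taken from this test, not recommendations); note that replica.fetch.max.bytes on the brokers, and fetch.message.max.bytes on consumers, must be at least as large as message.max.bytes, or replication and consumption of large messages will stall:

```properties
# Maximum size of a single message the broker will accept
message.max.bytes=1000012
# Replica fetchers must be able to fetch the largest accepted message,
# otherwise oversized messages can never be replicated
replica.fetch.max.bytes=1048576
```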
So: what happens on each side when a producer sends the broker a message larger than that threshold?
Producer-side test code:
```java
import java.util.Properties;
import java.util.concurrent.TimeUnit;

import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class Producer {
    public static final String brokerList = "10.198.197.59:9092";
    public static final String topic = "versionTopic";

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put("serializer.class", "kafka.serializer.StringEncoder");
        properties.put("metadata.broker.list", brokerList);
        ProducerConfig config = new ProducerConfig(properties);
        kafka.javaapi.producer.Producer<Integer, String> producer =
                new kafka.javaapi.producer.Producer<Integer, String>(config);
        // 1 MiB payload: larger than the broker's message.max.bytes
        String message = getMessage(1 * 1024 * 1024);
        for (int i = 0; i < 3; i++) {
            KeyedMessage<Integer, String> keyedMessage =
                    new KeyedMessage<Integer, String>(topic, message);
            producer.send(keyedMessage);
            System.out.println("=============================");
        }
        try {
            TimeUnit.SECONDS.sleep(50);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static String getMessage(int msgSize) {
        StringBuilder stringBuilder = new StringBuilder();
        for (int i = 0; i < msgSize; i++) {
            stringBuilder.append("x");
        }
        return stringBuilder.toString();
    }
}
```
Producer-side output:
```
2017-02-28 16:19:31 -[INFO] - [Verifying properties] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:31 -[INFO] - [Property metadata.broker.list is overridden to 10.198.197.59:9092] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:31 -[INFO] - [Property serializer.class is overridden to kafka.serializer.StringEncoder] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:31 -[INFO] - [Fetching metadata from broker id:0,host:10.198.197.59,port:9092 with correlation id 0 for 1 topic(s) Set(versionTopic)] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:31 -[INFO] - [Connected to 10.198.197.59:9092 for producing] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:31 -[INFO] - [Disconnecting from 10.198.197.59:9092] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:31 -[INFO] - [Connected to 10.198.197.59:9092 for producing] - [kafka.utils.Logging$class:68]
=============================
2017-02-28 16:19:34 -[INFO] - [Disconnecting from 10.198.197.59:9092] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:34 -[WARN] - [Failed to send producer request with correlation id 4 to broker 0 with data for partitions [versionTopic,0]] - [kafka.utils.Logging$class:89]
java.io.IOException: An established connection was aborted by the software in your host machine. (Note: on an English-locale JVM this appears as "java.io.IOException: Broken pipe".)
	at sun.nio.ch.SocketDispatcher.writev0(Native Method)
	at sun.nio.ch.SocketDispatcher.writev(SocketDispatcher.java:55)
	at sun.nio.ch.IOUtil.write(IOUtil.java:148)
	at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:504)
	at java.nio.channels.SocketChannel.write(SocketChannel.java:502)
	at kafka.network.BoundedByteBufferSend.writeTo(BoundedByteBufferSend.scala:56)
	at kafka.network.Send$class.writeCompletely(Transmission.scala:75)
	at kafka.network.BoundedByteBufferSend.writeCompletely(BoundedByteBufferSend.scala:26)
	at kafka.network.BlockingChannel.send(BlockingChannel.scala:103)
	at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
	at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
	at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(SyncProducer.scala:103)
	at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
	at kafka.producer.SyncProducer$$anonfun$send$1$$anonfun$apply$mcV$sp$1.apply(SyncProducer.scala:103)
	at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
	at kafka.producer.SyncProducer$$anonfun$send$1.apply$mcV$sp(SyncProducer.scala:102)
	at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
	at kafka.producer.SyncProducer$$anonfun$send$1.apply(SyncProducer.scala:102)
	at kafka.metrics.KafkaTimer.time(KafkaTimer.scala:33)
	at kafka.producer.SyncProducer.send(SyncProducer.scala:101)
	at kafka.producer.async.DefaultEventHandler.kafka$producer$async$DefaultEventHandler$$send(DefaultEventHandler.scala:255)
	at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:106)
	at kafka.producer.async.DefaultEventHandler$$anonfun$dispatchSerializedData$2.apply(DefaultEventHandler.scala:100)
	at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:772)
	at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
	at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
	at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
	at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
	at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
	at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:771)
	at kafka.producer.async.DefaultEventHandler.dispatchSerializedData(DefaultEventHandler.scala:100)
	at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:72)
	at kafka.producer.Producer.send(Producer.scala:77)
	at kafka.javaapi.producer.Producer.send(Producer.scala:33)
	at com.kafka.Producer.main(Producer.java:30)
2017-02-28 16:19:34 -[INFO] - [Back off for 100 ms before retrying send. Remaining retries = 3] - [kafka.utils.Logging$class:68]
2017-02-28 16:19:34 -[INFO] - [Fetching metadata from broker id:0,host:10.198.197.59,port:9092 with correlation id 5 for 1 topic(s) Set(versionTopic)] - [kafka.utils.Logging$class:68]
```

The same WARN and stack trace then repeat identically for the second and third sends (correlation ids 9 and up), each followed by the same metadata fetch, reconnect, and backoff cycle.
Note the line in the output: java.io.IOException: An established connection was aborted by the software in your host machine (on an English-locale JVM this is reported as "java.io.IOException: Broken pipe").
Meanwhile, the broker side logs an error:
```
[2017-02-28 16:04:03,384] INFO Closing socket connection to /10.101.48.240. (kafka.network.Processor)
[2017-02-28 16:04:06,466] ERROR [KafkaApi-0] Error processing ProducerRequest with correlation id 2 from client  on partition [versionTopic,0] (kafka.server.KafkaApis)
kafka.common.MessageSizeTooLargeException: Message size is 1048602 bytes which exceeds the maximum configured message size of 1000012.
	at kafka.log.Log$$anonfun$analyzeAndValidateMessageSet$1.apply(Log.scala:378)
	at kafka.log.Log$$anonfun$analyzeAndValidateMessageSet$1.apply(Log.scala:361)
	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
	at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:32)
	at kafka.log.Log.analyzeAndValidateMessageSet(Log.scala:361)
	at kafka.log.Log.append(Log.scala:257)
	at kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:379)
	at kafka.cluster.Partition$$anonfun$appendMessagesToLeader$1.apply(Partition.scala:365)
	at kafka.utils.Utils$.inLock(Utils.scala:535)
	at kafka.utils.Utils$.inReadLock(Utils.scala:541)
	at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:365)
	at kafka.server.KafkaApis$$anonfun$appendToLocalLog$2.apply(KafkaApis.scala:291)
	at kafka.server.KafkaApis$$anonfun$appendToLocalLog$2.apply(KafkaApis.scala:282)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
	at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
	at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
	at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:98)
	at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:226)
	at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:39)
	at scala.collection.mutable.HashMap.foreach(HashMap.scala:98)
	at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
	at scala.collection.AbstractTraversable.map(Traversable.scala:105)
	at kafka.server.KafkaApis.appendToLocalLog(KafkaApis.scala:282)
	at kafka.server.KafkaApis.handleProducerOrOffsetCommitRequest(KafkaApis.scala:204)
	at kafka.server.KafkaApis.handle(KafkaApis.scala:59)
	at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:59)
	at java.lang.Thread.run(Thread.java:745)
[2017-02-28 16:04:06,467] INFO [KafkaApi-0] Send the close connection response due to error handling produce request [clientId = , correlationId = 2, topicAndPartition = [versionTopic,0]] with Ack=0 (kafka.server.KafkaApis)
[2017-02-28 16:04:06,629] INFO Closing socket connection to /10.101.48.240. (kafka.network.Processor)
```

The identical ERROR and stack trace then repeat for correlation ids 7 and 12 (one per retried send), each followed by the same "Send the close connection response ... with Ack=0" and "Closing socket connection" INFO lines.
Note this line in the output: kafka.common.MessageSizeTooLargeException: Message size is 1048602 bytes which exceeds the maximum configured message size of 1000012.
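The two numbers in that error line up exactly with the test. The payload was 1 * 1024 * 1024 = 1048576 bytes of 'x', and (assuming the old v0 message format used by this Kafka version, with no key set) each message carries a fixed 26 bytes of overhead: 12 bytes of log overhead (offset plus size field) and 14 bytes of message header (CRC, magic, attributes, key length, value length). A quick sanity check:

```java
public class OverheadCheck {
    public static void main(String[] args) {
        int payload = 1 * 1024 * 1024;          // the 1 MiB of 'x' sent by the test producer
        int logOverhead = 8 + 4;                // offset (8 bytes) + message size field (4 bytes)
        int header = 4 + 1 + 1 + 4 + 4;         // crc + magic + attributes + key length + value length
        // Total on-disk size the broker validates against message.max.bytes
        System.out.println(payload + logOverhead + header);  // prints 1048602
    }
}
```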
Note: even when everything is working normally, the producer prints INFO lines like these:
```
2017-03-07 20:06:03 -[INFO] - [Verifying properties] - [kafka.utils.Logging$class:68]
2017-03-07 20:06:04 -[INFO] - [Property metadata.broker.list is overridden to 10.198.197.59:9092] - [kafka.utils.Logging$class:68]
2017-03-07 20:06:04 -[INFO] - [Property serializer.class is overridden to kafka.serializer.StringEncoder] - [kafka.utils.Logging$class:68]
2017-03-07 20:06:04 -[INFO] - [Fetching metadata from broker id:0,host:10.198.197.59,port:9092 with correlation id 0 for 1 topic(s) Set(testTopic)] - [kafka.utils.Logging$class:68]
2017-03-07 20:06:04 -[INFO] - [Connected to 10.198.197.59:9092 for producing] - [kafka.utils.Logging$class:68]
2017-03-07 20:06:04 -[INFO] - [Disconnecting from 10.198.197.59:9092] - [kafka.utils.Logging$class:68]
2017-03-07 20:06:04 -[INFO] - [Connected to 10.198.197.59:9092 for producing] - [kafka.utils.Logging$class:68]
```

(The producer then starts sending data.)
Look at the last three lines: at first glance they read like a problem, but they are in fact normal INFO output. Why the producer connects, disconnects, and then connects again I can't say for certain; a likely explanation is that the first connection serves the metadata request and is then dropped in favor of a fresh connection to the partition leader for producing. I'll confirm this once I've dug through the Kafka source.
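As an aside, the confusing broken-pipe symptom is specific to this old Scala client. The newer Java client (org.apache.kafka.clients.producer.KafkaProducer) validates each record against its max.request.size setting before anything goes on the wire, and fails fast with a RecordTooLargeException on the client side instead. A sketch of the relevant settings (broker address and values are illustrative):

```java
import java.util.Properties;

public class NewProducerConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.198.197.59:9092");  // placeholder broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // Records larger than this are rejected client-side with RecordTooLargeException,
        // rather than surfacing as a broken pipe after the broker drops the connection.
        props.put("max.request.size", "1000012");
        System.out.println(props.getProperty("max.request.size"));
        // new KafkaProducer<String, String>(props) would then enforce this limit on send()
    }
}
```

Keeping max.request.size at or below the broker's message.max.bytes turns the failure into a clear, immediate client-side exception.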