Kafka documentation (13) ---- 0.10.1 Document (5): Consumer configs


3.3 Consumer Configs

In 0.9.0.0 we introduced the new Java consumer as a replacement for the older Scala-based simple and high-level consumers. The configs for both new and old consumers are described below.



3.3.1 New Consumer Configs

Below is the configuration for the new consumer:



Each entry below lists the configuration name and description, followed by its type, default, valid values, and importance.

bootstrap.servers

A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping—this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).

A list of host/port pairs used to establish the initial connection to the Kafka cluster. The client will use all servers, not only the ones listed here; the list only affects initialization, and the client uses it to discover the full set of servers. The format is host1:port1,host2:port2,.... Because these servers are only used for the initial discovery of the complete server list (which may change over time, e.g. when machines fail or are migrated), the list need not contain every server's ip and port, but it is best to list more than one so that the client can still connect on the next startup if one server is down.

Type: list | Importance: high

key.deserializer

Deserializer class for key that implements the Deserializer interface.


Type: class | Importance: high

value.deserializer

Deserializer class for value that implements the Deserializer interface.


Type: class | Importance: high

fetch.min.bytes

The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request. The default setting of 1 byte means that fetch requests are answered as soon as a single byte of data is available or the fetch request times out waiting for data to arrive. Setting this to something greater than 1 will cause the server to wait for larger amounts of data to accumulate which can improve server throughput a bit at the cost of some additional latency.


The minimum amount of data the server should return for a fetch request. If less data than this is available, the request waits. The default is 1, so a fetch is answered as soon as a single byte of data is available rather than waiting for the timeout. Increasing this value can improve throughput somewhat, at the cost of some additional latency.

Type: int | Default: 1 | Valid Values: [0,...] | Importance: high

group.id

A unique string that identifies the consumer group this consumer belongs to. This property is required if the consumer uses either the group management functionality by using subscribe(topic) or the Kafka-based offset management strategy.


A unique string identifying the consumer group this consumer belongs to. group.id is required when the consumer uses the group management functionality via subscribe(topic) or the Kafka-based offset management strategy.

Type: string | Default: "" | Importance: high
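To make the required settings above concrete, here is a minimal sketch of a new (Java) consumer wired up with bootstrap.servers, the two deserializers and group.id; the broker addresses, group name and topic name are placeholders, not values from this document.

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class MinimalConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Initial brokers, used only to discover the full cluster (placeholder hosts).
            props.put("bootstrap.servers", "host1:9092,host2:9092");
            // Required: how to turn raw bytes back into keys and values.
            props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
            // Required for subscribe()/group management and Kafka-based offsets.
            props.put("group.id", "my-example-group");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("my-topic"));
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(1000);
                    for (ConsumerRecord<String, String> record : records)
                        System.out.printf("offset=%d key=%s value=%s%n",
                                record.offset(), record.key(), record.value());
                }
            }
        }
    }

The snippets later in this section extend the same Properties object (props) and consumer.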
heartbeat.interval.ms
The expected time between heartbeats to the consumer coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the consumer's session stays active and to facilitate rebalancing when new consumers join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.


When Kafka's group management facility is used, this is the interval between two heartbeats sent to the consumer coordinator. Heartbeats keep the consumer's session alive and allow rebalancing when new consumers join or leave the group. The value must be lower than session.timeout.ms, and typically no higher than 1/3 of that value; it can be lowered further to control the expected time of normal rebalances.

Type: int | Default: 3000 | Importance: high

max.partition.fetch.bytes

The maximum amount of data per-partition the server will return. If the first message in the first non-empty partition of the fetch is larger than this limit, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). See fetch.max.bytes for limiting the consumer request size


The maximum amount of data the server returns per partition. This is not absolute: if the first message in the first non-empty partition of the fetch is larger than this limit, it is still returned so the consumer can make progress. The maximum message size the broker accepts is set via message.max.bytes (broker config) or max.message.bytes (topic config); see fetch.max.bytes for limiting the overall consumer request size.

Type: int | Default: 1048576 | Valid Values: [0,...] | Importance: high

session.timeout.ms

The timeout used to detect consumer failures when using Kafka's group management facility. The consumer sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove this consumer from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms.


When Kafka's group management facility is used, this timeout is used to detect consumer failures. The consumer sends periodic heartbeats to the broker to show it is alive; if the broker receives no heartbeat within this timeout, it removes the consumer from the group and initiates a rebalance. Note that the value must be within the range allowed by the broker configuration, i.e. between group.min.session.timeout.ms and group.max.session.timeout.ms.

Type: int | Default: 10000 | Importance: high
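As a rough illustration of how the two timeouts relate (the numbers are arbitrary examples, not recommendations), continuing the Properties object from the sketch above: heartbeat.interval.ms stays at or below one third of session.timeout.ms, and session.timeout.ms itself has to fall inside the broker's group.min.session.timeout.ms/group.max.session.timeout.ms bounds.

    // Example values only: a 30 s session with heartbeats every 10 s (<= 1/3 of the session timeout).
    // session.timeout.ms must also lie between the broker's group.min.session.timeout.ms
    // and group.max.session.timeout.ms.
    props.put("session.timeout.ms", "30000");
    props.put("heartbeat.interval.ms", "10000");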
ssl.key.password
The password of the private key in the key store file. This is optional for client.


The password of the private key in the key store file. This is optional for the client.

Type: password | Default: null | Importance: high

ssl.keystore.location

The location of the key store file. This is optional for client and can be used for two-way authentication for client.

Type: string | Default: null | Importance: high

ssl.keystore.password

The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured.


The store password for the key store file. This is optional for the client and only needed if ssl.keystore.location is configured.

Type: password | Default: null | Importance: high

ssl.truststore.location

The location of the trust store file.


Type: string | Default: null | Importance: high

ssl.truststore.password

The password for the trust store file.


Type: password | Default: null | Importance: high

auto.offset.reset

What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted), i.e. where the consumer should start fetching when it runs into this situation:

  • earliest: automatically reset the offset to the earliest offset
  • latest: automatically reset the offset to the latest offset
  • none: throw an exception to the consumer if no previous offset is found for the consumer's group
  • anything else: throw an exception to the consumer
Type: string | Default: latest | Valid Values: [latest, earliest, none] | Importance: medium
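For illustration, a small sketch (again extending the earlier Properties object) that makes a consumer whose group has no committed offsets start from the beginning of the log; the group id is a placeholder.

    // With no committed offset for this group, "earliest" starts from the beginning of each
    // partition; the default "latest" would start at the end of the log, and "none" would
    // make the consumer throw because no previous offset exists for the group.
    props.put("group.id", "fresh-group-with-no-committed-offsets");
    props.put("auto.offset.reset", "earliest");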
connections.max.idle.ms
Close idle connections after the number of milliseconds specified by this config.


Idle connection timeout: connections that have been idle for longer than the number of milliseconds specified here are closed.

Type: long | Default: 540000 | Importance: medium

enable.auto.commit

If true the consumer's offset will be periodically committed in the background.


If set to true, the consumer's offsets are committed periodically in the background.

Type: boolean | Default: true | Importance: medium
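When auto-commit is turned off, the application commits offsets itself. A minimal sketch, reusing the consumer from the first example (the print statement stands in for real processing):

    props.put("enable.auto.commit", "false");

    consumer.subscribe(Collections.singletonList("my-topic"));
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(1000);
        for (ConsumerRecord<String, String> record : records) {
            System.out.println(record.value());      // stand-in for real processing
        }
        // Commit the offsets returned by the last poll() only after processing succeeded.
        consumer.commitSync();
    }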
exclude.internal.topics
Whether records from internal topics (such as offsets) should be exposed to the consumer. If set to true the only way to receive records from an internal topic is subscribing to it.


Whether internal topics (such as offsets) should be exposed to the consumer. If set to true, the only way to receive records from an internal topic is to subscribe to it.

Type: boolean | Default: true | Importance: medium

fetch.max.bytes

The maximum amount of data the server should return for a fetch request. This is not an absolute maximum, if the first message in the first non-empty partition of the fetch is larger than this value, the message will still be returned to ensure that the consumer can make progress. The maximum message size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel.


The maximum number of bytes the server should return for a fetch request. This is not absolute: if the first message in the first non-empty partition of the fetch is larger than this value, it is still returned to the client so the consumer can make progress. The maximum message size the broker accepts is set via message.max.bytes (broker config) or max.message.bytes (topic config). Note that the consumer performs multiple fetches in parallel.

Type: int | Default: 52428800 | Valid Values: [0,...] | Importance: medium

max.poll.interval.ms

The maximum delay between invocations of poll() when using consumer group management. This places an upper bound on the amount of time that the consumer can be idle before fetching more records. If poll() is not called before expiration of this timeout, then the consumer is considered failed and the group will rebalance in order to reassign the partitions to another member.


When consumer group management is used, this is the maximum delay between two calls to poll(). It puts an upper bound on how long the consumer may be idle before fetching more records. If poll() is not called before this timeout expires, the consumer is considered failed and the group rebalances, reassigning the failed consumer's partitions to other members.

Type: int | Default: 300000 | Valid Values: [1,...] | Importance: medium

max.poll.records

The maximum number of records returned in a single call to poll().


The maximum number of records returned by a single call to poll().

Type: int | Default: 500 | Valid Values: [1,...] | Importance: medium
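The two settings above work together when records are slow to process: either shrink the batch a single poll() returns or widen the allowed gap between polls. An illustrative sketch (example numbers only), continuing the earlier Properties object:

    // If each record may take up to ~1 second to handle, 100 records per poll() keeps the
    // time between polls well under max.poll.interval.ms, which is raised here to 10 minutes.
    props.put("max.poll.records", "100");
    props.put("max.poll.interval.ms", "600000");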
partition.assignment.strategy
The class name of the partition assignment strategy that the client will use to distribute partition ownership amongst consumer instances when group management is used.


The class name of the partition assignment strategy the client uses to distribute partition ownership among the consumer instances in the group when group management is used.

Type: list | Default: [class org.apache.kafka.clients.consumer.RangeAssignor] | Importance: medium
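The default is the range assignor; switching the group to round-robin assignment only requires naming a different implementation, for example:

    // Use round-robin assignment instead of the default RangeAssignor.
    props.put("partition.assignment.strategy",
              "org.apache.kafka.clients.consumer.RoundRobinAssignor");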
receive.buffer.bytes
The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.


Type: int | Default: 65536 | Valid Values: [-1,...] | Importance: medium

request.timeout.ms

The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted.


The maximum time the client waits for the response to a request. If no response is received before the timeout, the client resends the request if necessary, or fails the request once retries are exhausted.

Type: int | Default: 305000 | Valid Values: [0,...] | Importance: medium

sasl.kerberos.service.name

The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.


The Kerberos principal name that Kafka runs as. It can be defined either in Kafka's JAAS config or in Kafka's config.

Type: string | Default: null | Importance: medium

sasl.mechanism

SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.


The SASL mechanism used for client connections. It may be any mechanism for which a security provider is available; GSSAPI is the default.

Type: string | Default: GSSAPI | Importance: medium

security.protocol

Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.


The protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.

Type: string | Default: PLAINTEXT | Importance: medium
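Pulling the SSL-related options of this table together, a sketch of a consumer talking to an SSL listener; every path and password below is a placeholder.

    props.put("security.protocol", "SSL");
    props.put("ssl.truststore.location", "/var/private/ssl/client.truststore.jks"); // placeholder
    props.put("ssl.truststore.password", "changeit");                               // placeholder
    // Only needed if the broker requires client (two-way) authentication:
    props.put("ssl.keystore.location", "/var/private/ssl/client.keystore.jks");     // placeholder
    props.put("ssl.keystore.password", "changeit");                                 // placeholder
    props.put("ssl.key.password", "changeit");                                      // placeholder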
send.buffer.bytes
The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.


Type: int | Default: 131072 | Valid Values: [-1,...] | Importance: medium

ssl.enabled.protocols

The list of protocols enabled for SSL connections.


Type: list | Default: [TLSv1.2, TLSv1.1, TLSv1] | Importance: medium

ssl.keystore.type

The file format of the key store file. This is optional for client.


Type: string | Default: JKS | Importance: medium

ssl.protocol

The SSL protocol used to generate the SSLContext. Default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.


The SSL protocol used to generate the SSLContext. The default is TLS, which is fine for most cases. Recent JVMs support TLS, TLSv1.1 and TLSv1.2; SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their use is discouraged because of known security vulnerabilities.

Type: string | Default: TLS | Importance: medium

ssl.provider

The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.


The name of the security provider used for SSL connections. The default is the JVM's default security provider.

Type: string | Default: null | Importance: medium

ssl.truststore.type

The file format of the trust store file.


Type: string | Default: JKS | Importance: medium

auto.commit.interval.ms

The frequency in milliseconds that the consumer offsets are auto-committed to Kafka if enable.auto.commit is set to true.


The frequency, in milliseconds, at which consumer offsets are auto-committed to Kafka when enable.auto.commit is set to true.

Type: int | Default: 5000 | Valid Values: [0,...] | Importance: low

check.crcs

Automatically check the CRC32 of the records consumed. This ensures no on-the-wire or on-disk corruption to the messages occurred. This check adds some overhead, so it may be disabled in cases seeking extreme performance.


Automatically check the CRC32 of consumed records. This guards against on-the-wire or on-disk corruption of messages. The check adds some overhead, so it may be disabled when seeking extreme performance.

Type: boolean | Default: true | Importance: low

client.id

An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.


An id string passed to the server with each request to identify the client. Its purpose is to let a logical application name appear in server-side request logs, so the source of requests can be tracked beyond just the ip/port.

Type: string | Default: "" | Importance: low

fetch.max.wait.ms

The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by fetch.min.bytes.


The maximum time the server blocks before answering a fetch request when there is not enough data to satisfy fetch.min.bytes.

Type: int | Default: 500 | Valid Values: [0,...] | Importance: low

interceptor.classes

A list of classes to use as interceptors. Implementing the ConsumerInterceptor interface allows you to intercept (and possibly mutate) records received by the consumer. By default, there are no interceptors.


A list of classes used as interceptors. Implementing the ConsumerInterceptor interface lets you intercept (and possibly mutate) records received by the consumer. By default there are no interceptors.

Type: list | Default: null | Importance: low
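A minimal sketch of such an interceptor: it only logs how many records each poll() returned and passes them through unchanged. The class name is made up for illustration; it would be enabled by listing its fully-qualified name in interceptor.classes.

    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerInterceptor;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;

    // Hypothetical example class; enable it with
    //   props.put("interceptor.classes", "com.example.CountingInterceptor");
    public class CountingInterceptor implements ConsumerInterceptor<String, String> {

        @Override
        public ConsumerRecords<String, String> onConsume(ConsumerRecords<String, String> records) {
            System.out.println("poll() returned " + records.count() + " records");
            return records;               // pass the records through unmodified
        }

        @Override
        public void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets) {
            // called when offsets are committed; nothing to do in this sketch
        }

        @Override
        public void close() { }

        @Override
        public void configure(Map<String, ?> configs) { }
    }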
metadata.max.age.ms
The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.


The interval, in milliseconds, after which metadata is forcibly refreshed even if no partition leadership changes have been seen, so that new brokers or partitions are discovered proactively.

Type: long | Default: 300000 | Valid Values: [0,...] | Importance: low

metric.reporters

A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.


A list of classes used as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that are notified when new metrics are created. The JmxReporter is always included to register JMX statistics.

Type: list | Default: [] | Importance: low

metrics.num.samples

The number of samples maintained to compute metrics.


Type: int | Default: 2 | Valid Values: [1,...] | Importance: low

metrics.sample.window.ms

The window of time a metrics sample is computed over.


Type: long | Default: 30000 | Valid Values: [0,...] | Importance: low

reconnect.backoff.ms

The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker.


The time to wait before attempting to reconnect to a given host, to avoid reconnecting to it in a tight loop. This backoff applies to all requests sent by the consumer to the broker.

Type: long | Default: 50 | Valid Values: [0,...] | Importance: low

retry.backoff.ms

The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.


The time to wait before retrying a failed request to a given topic partition, which avoids repeatedly sending requests in a tight loop under some failure scenarios.

Type: long | Default: 100 | Valid Values: [0,...] | Importance: low

sasl.kerberos.kinit.cmd

Kerberos kinit command path.


Type: string | Default: /usr/bin/kinit | Importance: low

sasl.kerberos.min.time.before.relogin

Login thread sleep time between refresh attempts.


Type: long | Default: 60000 | Importance: low

sasl.kerberos.ticket.renew.jitter

Percentage of random jitter added to the renewal time.


Type: double | Default: 0.05 | Importance: low

sasl.kerberos.ticket.renew.window.factor

Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.


The login thread sleeps until this fraction of the time from the last refresh to the ticket's expiry has passed, at which point it tries to renew the ticket.

Type: double | Default: 0.8 | Importance: low

ssl.cipher.suites

A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.


Type: list | Default: null | Importance: low

ssl.endpoint.identification.algorithm

The endpoint identification algorithm to validate server hostname using server certificate.


Type: string | Default: null | Importance: low

ssl.keymanager.algorithm

The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.


Type: string | Default: SunX509 | Importance: low

ssl.secure.random.implementation

The SecureRandom PRNG implementation to use for SSL cryptography operations.


Type: string | Default: null | Importance: low

ssl.trustmanager.algorithm

The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.


Type: string | Default: PKIX | Importance: low

3.3.2 Old Consumer Configs

The essential old consumer configurations are the following:


  • group.id
  • zookeeper.connect
Each entry below lists the property name, its default value, and its description.

group.id

A string that uniquely identifies the group of consumer processes to which this consumer belongs. By setting the same group id, multiple processes indicate that they are all part of the same consumer group.

A unique string identifying the consumer group this consumer belongs to; all processes that set the same group id are part of the same consumer group.

zookeeper.connect

Specifies the ZooKeeper connection string in the form hostname:port, where host and port are the host and port of a ZooKeeper server. To allow connecting through other ZooKeeper nodes when that ZooKeeper machine is down you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3.

The server may also have a ZooKeeper chroot path as part of its ZooKeeper connection string which puts its data under some path in the global ZooKeeper namespace. If so the consumer should use the same chroot path in its connection string. For example to give a chroot path of /chroot/path you would give the connection string as hostname1:port1,hostname2:port2,hostname3:port3/chroot/path.

The ZooKeeper connection string, in the form hostname:port, where host and port belong to a ZooKeeper server. To avoid losing connectivity when one ZooKeeper machine goes down, multiple hostname:port pairs can be listed, separated by commas: hostname1:port1,hostname2:port2,hostname3:port3. A ZooKeeper chroot path may also be appended to the connection string; it is the path under which the data is stored in the global ZooKeeper namespace, and the consumer must then use the same chroot, e.g. hostname1:port1,hostname2:port2,hostname3:port3/chroot/path.
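For context, a rough sketch of how these properties were typically handed to the old high-level consumer (old kafka.consumer.* API; the ZooKeeper ensemble, group id and topic are placeholders):

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class OldConsumerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "zk1:2181,zk2:2181,zk3:2181");  // placeholder ensemble
            props.put("group.id", "my-example-group");
            props.put("auto.offset.reset", "smallest");

            ConsumerConnector connector =
                    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

            // One stream for the topic; each message arrives as raw bytes.
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                    connector.createMessageStreams(Collections.singletonMap("my-topic", 1));
            // Iterate streams.get("my-topic").get(0) to consume messages...
        }
    }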

consumer.id (default: null)

Generated automatically if not set.

socket.timeout.ms (default: 30 * 1000)

The socket timeout for network requests. The actual timeout set will be max.fetch.wait + socket.timeout.ms.

The timeout for network requests; the effective timeout is max.fetch.wait + socket.timeout.ms.

socket.receive.buffer.bytes (default: 64 * 1024)

The socket receive buffer for network requests.

fetch.message.max.bytes (default: 1024 * 1024)

The number of bytes of messages to attempt to fetch for each topic-partition in each fetch request. These bytes will be read into memory for each partition, so this helps control the memory used by the consumer. The fetch request size must be at least as large as the maximum message size the server allows or else it is possible for the producer to send messages larger than the consumer can fetch.

The maximum number of bytes to fetch per topic-partition in each fetch request. These bytes are read into memory for each partition, so this setting helps control the memory the consumer uses. The fetch request size must be at least as large as the maximum message size the server allows, otherwise the producer could send messages larger than the consumer can fetch.

num.consumer.fetchers (default: 1)

The number of fetcher threads used to fetch data.

auto.commit.enable (default: true)

If true, periodically commit to ZooKeeper the offset of messages already fetched by the consumer. This committed offset will be used when the process fails as the position from which the new consumer will begin.

If true, the offsets of messages already fetched by the consumer are periodically committed to ZooKeeper. When the process fails, the committed offset is the position from which the replacement consumer starts.

auto.commit.interval.ms (default: 60 * 1000)

The frequency in ms that the consumer offsets are committed to ZooKeeper.

The frequency, in milliseconds, at which the consumer commits offsets to ZooKeeper.

queued.max.message.chunks (default: 2)

Max number of message chunks buffered for consumption. Each chunk can be up to fetch.message.max.bytes.

The maximum number of message chunks buffered for consumption; each chunk can be up to fetch.message.max.bytes in size.

rebalance.max.retries (default: 4)

When a new consumer joins a consumer group the set of consumers attempt to "rebalance" the load to assign partitions to each consumer. If the set of consumers changes while this assignment is taking place the rebalance will fail and retry. This setting controls the maximum number of attempts before giving up.

When a new consumer joins a consumer group, the consumers try to "rebalance" the load and reassign partitions to each consumer. If the set of consumers changes while this assignment is in progress, the rebalance fails and is retried. This setting controls the maximum number of attempts before giving up.

fetch.min.bytes (default: 1)

The minimum amount of data the server should return for a fetch request. If insufficient data is available the request will wait for that much data to accumulate before answering the request.

The minimum number of bytes the server should return for a fetch request. If not enough data is available, the request waits for that much data to accumulate before answering.

fetch.wait.max.ms (default: 100)

The maximum amount of time the server will block before answering the fetch request if there isn't sufficient data to immediately satisfy fetch.min.bytes.

The maximum time the server blocks before answering a fetch request when there is not enough data to immediately satisfy fetch.min.bytes.

rebalance.backoff.ms (default: 2000)

Backoff time between retries during rebalance. If not set explicitly, the value in zookeeper.sync.time.ms is used.

The backoff time between rebalance retries. If not set explicitly, the value of zookeeper.sync.time.ms is used.

refresh.leader.backoff.ms (default: 200)

Backoff time to wait before trying to determine the leader of a partition that has just lost its leader.

The backoff time to wait before trying to determine the new leader of a partition that has just lost its leader.

auto.offset.reset (default: largest)

What to do when there is no initial offset in ZooKeeper or if an offset is out of range:
* smallest : automatically reset the offset to the smallest offset
* largest : automatically reset the offset to the largest offset
* anything else: throw exception to the consumer

When ZooKeeper has no initial offset, or the offset is out of range: smallest resets to the smallest offset, largest resets to the largest offset, and anything else throws an exception to the consumer.

consumer.timeout.ms (default: -1)

Throw a timeout exception to the consumer if no message is available for consumption after the specified interval.

If no message becomes available for consumption within the specified interval, a timeout exception is thrown to the consumer.

exclude.internal.topics (default: true)

Whether messages from internal topics (such as offsets) should be exposed to the consumer.

client.id (default: group id value)

The client id is a user-specified string sent in each request to help trace calls. It should logically identify the application making the request.

A user-specified string sent with each request to help trace calls; it should logically identify the application that issued the request.

zookeeper.session.timeout.ms (default: 6000)

ZooKeeper session timeout. If the consumer fails to heartbeat to ZooKeeper for this period of time it is considered dead and a rebalance will occur.

The ZooKeeper session timeout. If the consumer fails to send a heartbeat to ZooKeeper within this period, it is considered dead and a rebalance occurs.

zookeeper.connection.timeout.ms (default: 6000)

The max time that the client waits while establishing a connection to ZooKeeper.

zookeeper.sync.time.ms (default: 2000)

How far a ZK follower can be behind a ZK leader.

offsets.storage (default: zookeeper)

Select where offsets should be stored (zookeeper or kafka).

offsets.channel.backoff.ms (default: 1000)

The backoff period when reconnecting the offsets channel or retrying failed offset fetch/commit requests.

The backoff period used when reconnecting the offsets channel or when retrying failed offset fetch/commit requests.

offsets.channel.socket.timeout.ms (default: 10000)

Socket timeout when reading responses for offset fetch/commit requests. This timeout is also used for ConsumerMetadata requests that are used to query for the offset manager.

The socket timeout used when reading responses to offset fetch/commit requests. The same timeout is used for the ConsumerMetadata requests that locate the offset manager.

offsets.commit.max.retries (default: 5)

Retry the offset commit up to this many times on failure. This retry count only applies to offset commits during shut-down. It does not apply to commits originating from the auto-commit thread. It also does not apply to attempts to query for the offset coordinator before committing offsets. i.e., if a consumer metadata request fails for any reason, it will be retried and that retry does not count toward this limit.

The maximum number of times a failed offset commit is retried. This retry count only applies to offset commits during shutdown; it does not apply to commits from the auto-commit thread, nor to the attempts to query for the offset coordinator before committing offsets. In other words, if a consumer metadata request fails for any reason it is retried, and that retry does not count toward this limit.

dual.commit.enabled (default: true)

If you are using "kafka" as offsets.storage, you can dual commit offsets to ZooKeeper (in addition to Kafka). This is required during migration from zookeeper-based offset storage to kafka-based offset storage. With respect to any given consumer group, it is safe to turn this off after all instances within that group have been migrated to the new version that commits offsets to the broker (instead of directly to ZooKeeper).

If "kafka" is used as offsets.storage, offsets can additionally be committed to ZooKeeper. This is required while migrating from ZooKeeper-based offset storage to Kafka-based offset storage. For any given consumer group, it is safe to turn this off once every instance in the group has been migrated to the new version that commits offsets to the broker instead of directly to ZooKeeper.
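A sketch of the migration setup described above, with Kafka-based offset storage and dual commits left on until every instance in the group has been upgraded (property values only; the rest of the setup is as in the old-consumer sketch earlier):

    props.put("offsets.storage", "kafka");
    // Keep committing to ZooKeeper as well while some instances in the group
    // still read offsets from ZooKeeper; turn this off once migration is complete.
    props.put("dual.commit.enabled", "true");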

partition.assignment.strategy (default: range)

Select between the "range" or "roundrobin" strategy for assigning partitions to consumer streams.

The round-robin partition assignor lays out all the available partitions and all the available consumer threads. It then proceeds to do a round-robin assignment from partition to consumer thread. If the subscriptions of all consumer instances are identical, then the partitions will be uniformly distributed. (i.e., the partition ownership counts will be within a delta of exactly one across all consumer threads.) Round-robin assignment is permitted only if: (a) every topic has the same number of streams within a consumer instance and (b) the set of subscribed topics is identical for every consumer instance within the group.

Range partitioning works on a per-topic basis. For each topic, we lay out the available partitions in numeric order and the consumer threads in lexicographic order. We then divide the number of partitions by the total number of consumer streams (threads) to determine the number of partitions to assign to each consumer. If it does not evenly divide, then the first few consumers will have one extra partition.

Choose either the "range" or the "roundrobin" strategy for assigning partitions to consumer streams. The round-robin assignor lays out all available partitions and all available consumer threads, then assigns partitions to consumer threads in round-robin fashion; if every consumer instance has an identical subscription, the partitions are distributed uniformly (ownership counts differ by at most one across consumer threads). Round-robin assignment is permitted only if (1) every topic has the same number of streams within a consumer instance and (2) the set of subscribed topics is identical for every consumer instance in the group. A worked example of both strategies follows below.
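As a worked example (a made-up layout, not from the original document): with one topic of 10 partitions p0..p9 and three consumer threads c0, c1, c2 that all subscribe to it, range assignment sorts both lists and gives the first threads the extra partitions, so c0 gets p0-p3, c1 gets p4-p6 and c2 gets p7-p9. Round-robin assignment deals the partitions out one at a time: c0 gets p0, p3, p6, p9; c1 gets p1, p4, p7; c2 gets p2, p5, p8.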

More details about consumer configuration can be found in the scala class kafka.consumer.ConsumerConfig.
