Kafka Documentation (14) ---- 0.10.1 Document (6): Kafka Connect Configs


3.4 Kafka Connect Configs

Below is the configuration of the Kafka Connect framework.

Each entry below gives the configuration name, its description, and its type, default value, valid values (where restricted), and importance.

config.storage.topic
Kafka topic in which to store connector configurations.
Type: string; Importance: high

group.id
A unique string that identifies the Connect cluster group this worker belongs to.
Type: string; Importance: high

key.converter
Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
Type: class; Importance: high

offset.storage.topic
Kafka topic in which to store connector offsets.
Type: string; Importance: high

status.storage.topic
Kafka topic used to track connector and task status.
Type: string; Importance: high
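As a rough sketch of how the settings above fit together, a distributed worker's properties file pins down the group id and the three storage topics. The topic names and group id below are illustrative examples, not defaults:

    # Illustrative connect-distributed.properties excerpt (names are examples, not defaults)
    group.id=connect-cluster
    config.storage.topic=connect-configs
    offset.storage.topic=connect-offsets
    status.storage.topic=connect-status
    # bootstrap.servers (documented further below) points the worker at the Kafka cluster
    bootstrap.servers=broker1:9092,broker2:9092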

value.converter
Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro.
Type: class; Importance: high

internal.key.converter
Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the keys in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. This setting controls the format used for internal bookkeeping data used by the framework, such as configs and offsets, so users can typically use any functioning Converter implementation.
Type: class; Importance: low

internal.value.converter
Converter class used to convert between Kafka Connect format and the serialized form that is written to Kafka. This controls the format of the values in messages written to or read from Kafka, and since this is independent of connectors it allows any connector to work with any serialization format. Examples of common formats include JSON and Avro. This setting controls the format used for internal bookkeeping data used by the framework, such as configs and offsets, so users can typically use any functioning Converter implementation.
Type: class; Importance: low
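For reference, a common way to wire these converter settings, matching the example worker properties file shipped with Kafka, is to point all four converters at the bundled JSON converter. The schemas.enable flags shown here are options of that particular converter, passed through via the key/value prefix:

    key.converter=org.apache.kafka.connect.json.JsonConverter
    value.converter=org.apache.kafka.connect.json.JsonConverter
    key.converter.schemas.enable=true
    value.converter.schemas.enable=true
    # Internal bookkeeping data (configs, offsets, status) is usually stored as schemaless JSON
    internal.key.converter=org.apache.kafka.connect.json.JsonConverter
    internal.value.converter=org.apache.kafka.connect.json.JsonConverter
    internal.key.converter.schemas.enable=false
    internal.value.converter.schemas.enable=false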

bootstrap.servers
A list of host/port pairs to use for establishing the initial connection to the Kafka cluster. The client will make use of all servers irrespective of which servers are specified here for bootstrapping; this list only impacts the initial hosts used to discover the full set of servers. This list should be in the form host1:port1,host2:port2,.... Since these servers are just used for the initial connection to discover the full cluster membership (which may change dynamically), this list need not contain the full set of servers (you may want more than one, though, in case a server is down).
Type: list; Default: [localhost:9092]; Importance: high

heartbeat.interval.ms
The expected time between heartbeats to the group coordinator when using Kafka's group management facilities. Heartbeats are used to ensure that the worker's session stays active and to facilitate rebalancing when new members join or leave the group. The value must be set lower than session.timeout.ms, but typically should be set no higher than 1/3 of that value. It can be adjusted even lower to control the expected time for normal rebalances.
Type: int; Default: 3000; Importance: high

rebalance.timeout.ms
The maximum allowed time for each worker to join the group once a rebalance has begun. This is basically a limit on the amount of time needed for all tasks to flush any pending data and commit offsets. If the timeout is exceeded, then the worker will be removed from the group, which will cause offset commit failures.
Type: int; Default: 60000; Importance: high

session.timeout.ms
The timeout used to detect worker failures. The worker sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, then the broker will remove the worker from the group and initiate a rebalance. Note that the value must be in the allowable range as configured in the broker configuration by group.min.session.timeout.ms and group.max.session.timeout.ms.
Type: int; Default: 10000; Importance: high
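Putting the three group-coordination timeouts together, a hedged example that keeps heartbeat.interval.ms well under one third of session.timeout.ms, and assumes the broker's group.min.session.timeout.ms / group.max.session.timeout.ms range allows the chosen session timeout:

    # Illustrative values; defaults are 3000 / 10000 / 60000 ms respectively
    heartbeat.interval.ms=9000
    session.timeout.ms=30000
    # Upper bound for workers to flush pending data and commit offsets during a rebalance
    rebalance.timeout.ms=60000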

ssl.key.password
The password of the private key in the key store file. This is optional for the client.
Type: password; Default: null; Importance: high

ssl.keystore.location
The location of the key store file. This is optional for the client and can be used for two-way authentication for the client.
Type: string; Default: null; Importance: high

ssl.keystore.password
The store password for the key store file. This is optional for the client and only needed if ssl.keystore.location is configured.
Type: password; Default: null; Importance: high

ssl.truststore.location
The location of the trust store file.
Type: string; Default: null; Importance: high

ssl.truststore.password
The password for the trust store file.
Type: password; Default: null; Importance: high
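A sketch of how the store-related SSL settings are usually combined, assuming JKS files on the worker host; the paths and passwords below are placeholders:

    # Placeholder paths and passwords for an SSL-enabled worker
    ssl.keystore.location=/var/private/ssl/connect.worker.keystore.jks
    ssl.keystore.password=keystore-secret
    ssl.key.password=key-secret
    ssl.truststore.location=/var/private/ssl/connect.worker.truststore.jks
    ssl.truststore.password=truststore-secret
    # security.protocol (described below) must also be set to an SSL variant for these to take effect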

connections.max.idle.ms
Close idle connections after the number of milliseconds specified by this config.
Type: long; Default: 540000; Importance: medium

receive.buffer.bytes
The size of the TCP receive buffer (SO_RCVBUF) to use when reading data. If the value is -1, the OS default will be used.
Type: int; Default: 32768; Valid Values: [0,...]; Importance: medium

request.timeout.ms
The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses, the client will resend the request if necessary or fail the request if retries are exhausted.
Type: int; Default: 40000; Valid Values: [0,...]; Importance: medium

sasl.kerberos.service.name
The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config.
Type: string; Default: null; Importance: medium

sasl.mechanism
SASL mechanism used for client connections. This may be any mechanism for which a security provider is available. GSSAPI is the default mechanism.
Type: string; Default: GSSAPI; Importance: medium

security.protocol
Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
Type: string; Default: PLAINTEXT; Importance: medium
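As an example of the SASL-related settings working together, a worker authenticating to Kafka over SASL_SSL with Kerberos (GSSAPI) might use the sketch below; the service name must match the broker's Kerberos principal, and the JAAS login configuration is supplied separately (for example via the java.security.auth.login.config system property):

    # Illustrative SASL/Kerberos client settings
    security.protocol=SASL_SSL
    sasl.mechanism=GSSAPI
    sasl.kerberos.service.name=kafka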

send.buffer.bytes
The size of the TCP send buffer (SO_SNDBUF) to use when sending data. If the value is -1, the OS default will be used.
Type: int; Default: 131072; Valid Values: [0,...]; Importance: medium

ssl.enabled.protocols
The list of protocols enabled for SSL connections.
Type: list; Default: [TLSv1.2, TLSv1.1, TLSv1]; Importance: medium

ssl.keystore.type
The file format of the key store file. This is optional for the client.
Type: string; Default: JKS; Importance: medium

ssl.protocol
The SSL protocol used to generate the SSLContext. The default setting is TLS, which is fine for most cases. Allowed values in recent JVMs are TLS, TLSv1.1 and TLSv1.2. SSL, SSLv2 and SSLv3 may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.
Type: string; Default: TLS; Importance: medium

ssl.provider
The name of the security provider used for SSL connections. Default value is the default security provider of the JVM.
Type: string; Default: null; Importance: medium

ssl.truststore.type
The file format of the trust store file.
Type: string; Default: JKS; Importance: medium
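If older TLS versions need to be disabled, the protocol-related settings above can be narrowed; an illustrative restriction (not a recommendation made by this document):

    # Restrict SSL/TLS negotiation to TLSv1.2 only
    ssl.protocol=TLSv1.2
    ssl.enabled.protocols=TLSv1.2
    ssl.keystore.type=JKS
    ssl.truststore.type=JKS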

worker.sync.timeout.ms
When the worker is out of sync with other workers and needs to resynchronize configurations, wait up to this amount of time before giving up, leaving the group, and waiting a backoff period before rejoining.
Type: int; Default: 3000; Importance: medium

worker.unsync.backoff.ms
When the worker is out of sync with other workers and fails to catch up within worker.sync.timeout.ms, leave the Connect cluster for this long before rejoining.
Type: int; Default: 300000; Importance: medium
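These two settings interact: a worker that cannot re-sync its configuration within worker.sync.timeout.ms leaves the group and waits worker.unsync.backoff.ms before rejoining. A small sketch with the default values spelled out:

    # Wait up to 3 s to re-sync configs before giving up...
    worker.sync.timeout.ms=3000
    # ...then stay out of the Connect cluster for 5 minutes before rejoining
    worker.unsync.backoff.ms=300000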

access.control.allow.methods
Sets the methods supported for cross origin requests by setting the Access-Control-Allow-Methods header. The default value of the Access-Control-Allow-Methods header allows cross origin requests for GET, POST and HEAD.
Type: string; Default: ""; Importance: low

access.control.allow.origin
Value to set the Access-Control-Allow-Origin header to for REST API requests. To enable cross origin access, set this to the domain of the application that should be permitted to access the API, or '*' to allow access from any domain. The default value only allows access from the domain of the REST API.
Type: string; Default: ""; Importance: low

client.id
An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.
Type: string; Default: ""; Importance: low

metadata.max.age.ms
The period of time in milliseconds after which we force a refresh of metadata, even if we haven't seen any partition leadership changes, to proactively discover any new brokers or partitions.
Type: long; Default: 300000; Valid Values: [0,...]; Importance: low

metric.reporters
A list of classes to use as metrics reporters. Implementing the MetricReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics.
Type: list; Default: []; Importance: low

metrics.num.samples
The number of samples maintained to compute metrics.
Type: int; Default: 2; Valid Values: [1,...]; Importance: low

metrics.sample.window.ms
The window of time a metrics sample is computed over.
Type: long; Default: 30000; Valid Values: [0,...]; Importance: low
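A hedged sketch of the metrics settings; the reporter class below is hypothetical and stands for any implementation of Kafka's metrics reporter interface available on the worker's classpath:

    # Hypothetical custom reporter, registered in addition to the always-present JmxReporter
    metric.reporters=com.example.MyMetricsReporter
    # Compute metrics over 2 samples of 30 s each (the defaults)
    metrics.num.samples=2
    metrics.sample.window.ms=30000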

offset.flush.interval.ms
Interval at which to try committing offsets for tasks.
Type: long; Default: 60000; Importance: low

offset.flush.timeout.ms
Maximum number of milliseconds to wait for records to flush and partition offset data to be committed to offset storage before cancelling the process and restoring the offset data to be committed in a future attempt.
Type: long; Default: 5000; Importance: low
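These two flush settings are typically tuned together, for instance to commit offsets more frequently while keeping the per-attempt bound (values below are illustrative, not recommendations):

    # Try to commit task offsets every 10 s instead of the default 60 s
    offset.flush.interval.ms=10000
    # Cancel and retry later if a flush/commit attempt takes longer than 5 s
    offset.flush.timeout.ms=5000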

reconnect.backoff.ms
The amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all requests sent by the consumer to the broker.
Type: long; Default: 50; Valid Values: [0,...]; Importance: low

rest.advertised.host.name
If this is set, this is the hostname that will be given out to other workers to connect to.
Type: string; Default: null; Importance: low

rest.advertised.port
If this is set, this is the port that will be given out to other workers to connect to.
Type: int; Default: null; Importance: low

rest.host.name
Hostname for the REST API. If this is set, it will only bind to this interface.
Type: string; Default: null; Importance: low

rest.port
Port for the REST API to listen on.
Type: int; Default: 8083; Importance: low
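A sketch of the REST endpoint settings for a worker behind a proxy or NAT, where the address other workers should use differs from the local bind address; host names and ports below are placeholders:

    # Bind the REST API locally
    rest.host.name=0.0.0.0
    rest.port=8083
    # Address and port that other workers should use to reach this worker
    rest.advertised.host.name=worker1.internal.example.com
    rest.advertised.port=8083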

retry.backoff.ms
The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios.
Type: long; Default: 100; Valid Values: [0,...]; Importance: low

sasl.kerberos.kinit.cmd
Kerberos kinit command path.
Type: string; Default: /usr/bin/kinit; Importance: low

sasl.kerberos.min.time.before.relogin
Login thread sleep time between refresh attempts.
Type: long; Default: 60000; Importance: low

sasl.kerberos.ticket.renew.jitter
Percentage of random jitter added to the renewal time.
Type: double; Default: 0.05; Importance: low

sasl.kerberos.ticket.renew.window.factor
Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket.
Type: double; Default: 0.8; Importance: low
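Taken together, the Kerberos ticket settings control how the login thread refreshes tickets. The values below simply restate the defaults with comments, and would only be adjusted if the KDC's ticket policy requires it:

    sasl.kerberos.kinit.cmd=/usr/bin/kinit
    # Sleep at least 60 s between refresh attempts
    sasl.kerberos.min.time.before.relogin=60000
    # Attempt renewal once 80% of the ticket lifetime has elapsed, +/- 5% random jitter
    sasl.kerberos.ticket.renew.window.factor=0.8
    sasl.kerberos.ticket.renew.jitter=0.05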

ssl.cipher.suites
A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported.
Type: list; Default: null; Importance: low

ssl.endpoint.identification.algorithm
The endpoint identification algorithm to validate server hostname using server certificate.
Type: string; Default: null; Importance: low

ssl.keymanager.algorithm
The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine.
Type: string; Default: SunX509; Importance: low

ssl.secure.random.implementation
The SecureRandom PRNG implementation to use for SSL cryptography operations.
Type: string; Default: null; Importance: low

ssl.trustmanager.algorithm
The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine.
Type: string; Default: PKIX; Importance: low
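These lower-level SSL knobs rarely need changing; one that is commonly set explicitly is hostname verification, sketched below (HTTPS is the usual endpoint identification algorithm, and the remaining lines restate defaults):

    # Verify that the broker certificate matches the broker hostname
    ssl.endpoint.identification.algorithm=HTTPS
    ssl.keymanager.algorithm=SunX509
    ssl.trustmanager.algorithm=PKIX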

task.shutdown.graceful.timeout.ms
Amount of time to wait for tasks to shut down gracefully. This is the total amount of time, not per task. All tasks have shutdown triggered, then they are waited on sequentially.
Type: long; Default: 5000; Importance: low