Delete topics with caution in a Confluent environment

Source: Internet · Editor: 程序博客网 · Time: 2024/06/07 10:26
  1. Take a look at this code
    kafka-connect-hdfs-2.0.0\src\main\java\io\confluent\connect\hdfs\TopicPartitionWriter.java
  private void writeRecord(SinkRecord record) throws IOException {
    long expectedOffset = offset + recordCounter;
    if (offset == -1) {
      offset = record.kafkaOffset();
    } else if (record.kafkaOffset() != expectedOffset) {
      // Currently it's possible to see stale data with the wrong offset after a rebalance when you
      // rewind, which we do since we manage our own offsets. See KAFKA-2894.
      if (!sawInvalidOffset) {
        log.info(
            "Ignoring stale out-of-order record in {}-{}. Has offset {} instead of expected offset {}",
            record.topic(), record.kafkaPartition(), record.kafkaOffset(), expectedOffset);
      }
      sawInvalidOffset = true;
      return;
    }
  2. Take a look at this log
[2016-07-01 18:19:50,199] INFO Ignoring stale out-of-order record in beaver_http_response-1. Has offset 122980245 instead of expected offset 96789608 (io.confluent.connect.hdfs.TopicPartitionWriter:470)
[2016-07-01 18:19:50,200] INFO Starting commit and rotation for topic partition beaver_http_response-1 with start offsets {} and end offsets {} (io.confluent.connect.hdfs.TopicPartitionWriter:267)

This understated log line is the key clue to why data stubbornly refuses to reach HDFS. Once the record's offset no longer matches the connector's expected offset, no data gets written at all.
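The failure mode can be seen in a minimal sketch of the offset gate from the snippet above. This is not the Confluent source, just an illustrative model with assumed class and field names: the connector manages its own committed offsets, so after a topic is deleted and recreated, the broker hands back records whose offsets no longer line up with `offset + recordCounter`, and every such record is silently skipped.

```java
// Illustrative sketch (hypothetical class, not Confluent's TopicPartitionWriter)
// of how the offset check drops records after a topic is deleted and recreated.
public class OffsetGateDemo {
    static long offset = -1;           // last known base offset, -1 = nothing seen yet
    static long recordCounter = 0;     // records buffered since that base offset
    static boolean sawInvalidOffset = false;

    // Returns true if the record would be written, false if it is silently skipped.
    static boolean accept(long kafkaOffset) {
        long expectedOffset = offset + recordCounter;
        if (offset == -1) {
            offset = kafkaOffset;      // first record: adopt its offset as the base
        } else if (kafkaOffset != expectedOffset) {
            sawInvalidOffset = true;   // mismatch: record is dropped, never reaches HDFS
            return false;
        }
        recordCounter++;
        return true;
    }

    public static void main(String[] args) {
        // The connector resumes from its own committed position, expecting 96789608 next...
        offset = 96789607L;
        recordCounter = 1;
        // ...but the recreated topic delivers a record at 122980245 (as in the log above),
        // so the gate rejects it and nothing is written.
        boolean written = accept(122980245L);
        System.out.println("written=" + written + " sawInvalidOffset=" + sawInvalidOffset);
    }
}
```

The numbers above are taken from the log excerpt; as long as the mismatch persists, every incoming record hits the same branch, which is why the partition appears completely stuck rather than merely lossy.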
