Kafka Monitoring: Chaperone Client
ChaperoneClient

ChaperoneClient is an intrusive, in-process component; it does not run as a standalone service. Its core entry point is the track() method of MessageTracker, shown below:
```java
public void track(double timestamp, int msgCount) {
    long currentTimeMillis = 0;
    timeBucketsRWLock.readLock().lock();
    try {
        TimeBucketMetadata timeBucket = getTimeBucket(timestamp);
        // Number of messages in this time bucket
        timeBucket.msgCount.addAndGet(msgCount);
        currentTimeMillis = System.currentTimeMillis();
        timeBucket.lastMessageTimestampSeenInSec = timestamp;
        double latency = currentTimeMillis - (timestamp * 1000);
        for (int i = 0; i < msgCount; i++) {
            timeBucket.latencyStats.addValue(latency);
        }
    } finally {
        timeBucketsRWLock.readLock().unlock();
    }
    // Report once the bucket count reaches the threshold,
    // or the current time passes the next scheduled report time
    if (timeBucketCount.get() >= reportFreqBucketCount
            || currentTimeMillis > nextReportTime.get()) {
        report();
    }
}
```
The caller passes a timestamp into track(). Inside, getTimeBucket() resolves the time bucket that the timestamp falls into; buckets are kept in timeBucketsMap. When the number of buckets exceeds reportFreqBucketCount, or the current time passes the next scheduled report time, report() is invoked.
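The logic above hinges on getTimeBucket() assigning each timestamp to a fixed bucket. A minimal sketch of such bucketing, assuming a hypothetical fixed bucket width (the class name, bucketKey, and the 600-second width are illustrative, not Chaperone's actual values):

```java
// Sketch: map a message timestamp (in seconds) onto a time-bucket key.
// Bucket width is an assumption for illustration.
public class BucketSketch {
    static final long BUCKET_WIDTH_SEC = 600; // assumed: 10-minute buckets

    // Round the timestamp down to the start of its bucket; every timestamp in
    // [start, start + width) maps to the same key.
    static long bucketKey(double timestampSec) {
        return (long) (timestampSec / BUCKET_WIDTH_SEC) * BUCKET_WIDTH_SEC;
    }

    public static void main(String[] args) {
        System.out.println(bucketKey(1717400000.5)); // prints 1717399800
        // Two timestamps 100 s apart land in the same 600 s bucket:
        System.out.println(bucketKey(1717400000.5) == bucketKey(1717400100.0)); // prints true
    }
}
```

With keys like this, the per-bucket state (message count, latency stats) can live in a ConcurrentHashMap keyed by bucket start time, which matches how timeBucketsMap is used above.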
```java
public boolean report() {
    ...
    // Snapshot the bucket map and reset timeBucketCount
    Map<Double, TimeBucketMetadata> tempTimeCountsMap;
    timeBucketsRWLock.writeLock().lock();
    try {
        tempTimeCountsMap = timeBucketsMap;
        timeBucketCount.set(0);
        timeBucketsMap = new ConcurrentHashMap<>();
    } finally {
        timeBucketsRWLock.writeLock().unlock();
    }
    try {
        // Build an AuditMessage per bucket and report it
        for (TimeBucketMetadata bucket : tempTimeCountsMap.values()) {
            AuditMessage m = auditReporter.buildAuditMessage(topicName, bucket);
            auditReporter.reportAuditMessage(m);
        }
    } catch (IOException ioe) {
        logger.error("IOException when trying to send audit message: {}", ioe.toString());
    }
    nextReportTime.set(setNextReportingTime());
    reportingInProgress.set(false);
    return true;
}
```
report() first takes a snapshot of all buckets in timeBucketsMap, then replaces the map with a fresh empty one. For each snapshotted bucket it calls KafkaAuditReporter's buildAuditMessage to construct a message:
```java
public AuditMessage buildAuditMessage(String topicName, TimeBucketMetadata timeBucket) {
    return new AuditMessage(topicName, timeBucket, hostMetadata);
}
```
Once the message is built, reportAuditMessage() is called to report the record; it publishes the message to the audit topic:
```java
public void reportAuditMessage(final AuditMessage message) throws IOException {
    // Create a heatpipe-encoded message.
    final JSONObject auditMsg = new JSONObject();
    ......
    auditMsg.put(AuditMsgField.UUID.getName(), UUID.randomUUID().toString());
    final byte[] outputBytes = auditMsg.toJSONString().getBytes();

    // Send message via Kafka producer
    final ProducerRecord<String, byte[]> data =
            new ProducerRecord<>(topicForAuditMsg, null, outputBytes);
    producer.send(data, new Callback() {
        @Override
        public void onCompletion(RecordMetadata recordMetadata, Exception e) {
            if (e != null) {
                logger.warn("Could not send auditMsg over Kafka for topic {}",
                        message.topicName, e);
            } else {
                messageReportRate.mark(message.timeBucketMetadata.msgCount.get());
                bucketReportRate.mark();
            }
        }
    });
}
```
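To make the payload construction concrete, here is a self-contained sketch of serializing a flat audit record to bytes. The field names (topic, msg_count, uuid) and the hand-rolled JSON are assumptions for illustration; the real code uses a JSON library and the full AuditMsgField set.

```java
import java.util.UUID;

public class AuditPayloadSketch {
    // Hand-rolled JSON for illustration only; Chaperone builds a JSONObject
    // with many more AuditMsgField entries.
    static String buildPayload(String topic, long msgCount, String uuid) {
        return String.format("{\"topic\":\"%s\",\"msg_count\":%d,\"uuid\":\"%s\"}",
                topic, msgCount, uuid);
    }

    public static void main(String[] args) {
        String payload = buildPayload("orders", 42, UUID.randomUUID().toString());
        // These are the bytes a producer would send to the audit topic.
        byte[] outputBytes = payload.getBytes();
        System.out.println(payload);
    }
}
```

Note that the UUID gives every audit record a unique identity, so duplicates can be detected downstream when the producer retries a send.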
After all messages have been reported, the next report time is set. That is the gist of ChaperoneClient.

The AuditMessage class that the client sends carries a time bucket, including the bucket's start and end times, the number of messages in the bucket, and so on.
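To make the bucket contents concrete, here is a stripped-down sketch in the spirit of TimeBucketMetadata. The field set is inferred from what track() touches above; the real class also holds latency statistics, and the names here are assumptions.

```java
import java.util.concurrent.atomic.AtomicLong;

public class TimeBucketSketch {
    final double startSec;  // inclusive start of the bucket (seconds)
    final double endSec;    // exclusive end of the bucket (seconds)
    final AtomicLong msgCount = new AtomicLong();   // messages seen in this bucket
    volatile double lastMessageTimestampSeenInSec;  // most recent event timestamp

    TimeBucketSketch(double startSec, double widthSec) {
        this.startSec = startSec;
        this.endSec = startSec + widthSec;
    }

    // Record `count` messages whose event timestamp falls in this bucket.
    void track(double timestampSec, int count) {
        msgCount.addAndGet(count);
        lastMessageTimestampSeenInSec = timestampSec;
    }

    public static void main(String[] args) {
        TimeBucketSketch b = new TimeBucketSketch(1717399800, 600);
        b.track(1717400000.5, 3);
        b.track(1717400100.0, 2);
        System.out.println(b.msgCount.get()); // prints 5
    }
}
```

The AtomicLong counter mirrors the msgCount.addAndGet(msgCount) call seen in track(), which lets multiple producer threads update the same bucket under the shared read lock.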