Kafka-[3]-KafkaStream


Step 8: Use Kafka Streams to process data

Kafka Streams is a client library for real-time stream processing and analysis of data stored in Kafka brokers.

Producer API

The Producer API allows applications to send streams of data to topics in the Kafka cluster.

Examples showing how to use the producer are given in the javadocs. 

To use the producer, you can use the following maven dependency:

<dependency><groupId>org.apache.kafka</groupId><artifactId>kafka-clients</artifactId><version>0.10.2.0</version></dependency>
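As a rough illustration (not taken from the javadocs verbatim), a minimal producer might look like the sketch below; the broker address localhost:9092, the topic name "my-topic", and the record contents are assumptions made for the example:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed local broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);
        // send a few string records to a hypothetical topic "my-topic"
        for (int i = 0; i < 10; i++) {
            producer.send(new ProducerRecord<>("my-topic", Integer.toString(i), "message-" + i));
        }
        producer.close();
    }
}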

Consumer API

The Consumer API allows applications to read streams of data from topics in the Kafka cluster.

Examples showing how to use the consumer are given in the javadocs.

To use the consumer, you can use the following maven dependency:

<dependency><groupId>org.apache.kafka</groupId><artifactId>kafka-clients</artifactId><version>0.10.2.0</version></dependency> 
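A minimal consumer sketch along the same lines, again assuming a local broker, a hypothetical topic "my-topic", and a hypothetical group id "test-group" (the poll(long) call shown matches the 0.10.x client API):

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed local broker
        props.put("group.id", "test-group");                 // hypothetical consumer group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Arrays.asList("my-topic"));        // hypothetical topic
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }
}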

Streams API

The Streams API allows transforming streams of data from input topics to output topics.

Examples showing how to use this library are given in the javadocs.

Additional documentation on using the Streams API is available here.

To use Kafka Streams, you can use the following maven dependency:

<dependency><groupId>org.apache.kafka</groupId><artifactId>kafka-streams</artifactId><version>0.10.2.0</version></dependency>
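A complete WordCount example follows later in this article; as a smaller sketch of the idea, the hypothetical topology below simply upper-cases the values flowing from one topic to another (the application id and the topic names "demo-input" and "demo-output" are assumptions):

import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KStreamBuilder;

public class UppercaseDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-demo");     // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

        KStreamBuilder builder = new KStreamBuilder();
        // read from a hypothetical input topic, transform each value, write to an output topic
        KStream<String, String> source = builder.stream("demo-input");
        source.mapValues(value -> value.toUpperCase()).to("demo-output");

        KafkaStreams streams = new KafkaStreams(builder, props);
        streams.start();
    }
}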

Connect API

The Connect API allows implementing connectors that continually pull from some source data system into Kafka or push from Kafka into some sink data system.

Many users of Connect won't need to use this API directly, though; they can use pre-built connectors without needing to write any code. Additional information on using Connect is available here.

Those who want to implement custom connectors can see the javadoc.
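For orientation only, here is a bare-bones sketch of what a custom source connector can look like (the class names and version string are hypothetical, and a real connector would return meaningful task configurations, a real ConfigDef, and SourceRecords carrying data and offsets). The Connect classes used here come from the org.apache.kafka:connect-api artifact:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.connect.connector.Task;
import org.apache.kafka.connect.source.SourceConnector;
import org.apache.kafka.connect.source.SourceRecord;
import org.apache.kafka.connect.source.SourceTask;

// Hypothetical skeleton of a custom source connector.
public class MySourceConnector extends SourceConnector {
    private Map<String, String> config;

    @Override public String version() { return "0.1"; }
    @Override public void start(Map<String, String> props) { this.config = props; }
    @Override public Class<? extends Task> taskClass() { return MySourceTask.class; }
    @Override public List<Map<String, String>> taskConfigs(int maxTasks) {
        // a real connector would split the work across up to maxTasks task configs
        return Collections.singletonList(config);
    }
    @Override public void stop() { }
    @Override public ConfigDef config() { return new ConfigDef(); }

    public static class MySourceTask extends SourceTask {
        @Override public String version() { return "0.1"; }
        @Override public void start(Map<String, String> props) { }
        @Override public List<SourceRecord> poll() throws InterruptedException {
            // pull data from the external system here and wrap it in SourceRecords
            return new ArrayList<>();
        }
        @Override public void stop() { }
    }
}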


/**
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
package streams.examples.wordcount;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStreamBuilder;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.KeyValueMapper;
import org.apache.kafka.streams.kstream.ValueMapper;

import java.util.Arrays;
import java.util.Locale;
import java.util.Properties;

/**
 * Demonstrates, using the high-level KStream DSL, how to implement the WordCount program
 * that computes a simple word occurrence histogram from an input text.
 *
 * In this example, the input stream reads from a topic named "streams-file-input", where the values of messages
 * represent lines of text; and the histogram output is written to topic "streams-wordcount-output" where each record
 * is an updated count of a single word.
 *
 * Before running this example you must create the input topic and the output topic (e.g. via
 * bin/kafka-topics.sh --create ...), and write some data to the input topic (e.g. via
 * bin/kafka-console-producer.sh). Otherwise you won't see any data arriving in the output topic.
 */
public class WordCountDemo {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-wordcount");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

        // setting offset reset to earliest so that we can re-run the demo code with the same pre-loaded data
        // Note: To re-run the demo, you need to use the offset reset tool:
        // https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams+Application+Reset+Tool
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        KStreamBuilder builder = new KStreamBuilder();

        KStream<String, String> source = builder.stream("streams-file-input");

        KTable<String, Long> counts = source
                .flatMapValues(new ValueMapper<String, Iterable<String>>() {
                    @Override
                    public Iterable<String> apply(String value) {
                        return Arrays.asList(value.toLowerCase(Locale.getDefault()).split(" "));
                    }
                })
                .map(new KeyValueMapper<String, String, KeyValue<String, String>>() {
                    @Override
                    public KeyValue<String, String> apply(String key, String value) {
                        return new KeyValue<>(value, value);
                    }
                })
                .groupByKey()
                .count("Counts");

        // need to override value serde to Long type
        counts.to(Serdes.String(), Serdes.Long(), "streams-wordcount-output");

        KafkaStreams streams = new KafkaStreams(builder, props);
        streams.start();

        // usually the stream application would be running forever,
        // in this example we just let it run for some time and stop since the input data is finite.
        Thread.sleep(5000L);

        streams.close();
    }
}

The demo application implements the WordCount algorithm, which computes a word occurrence histogram from the input text. However, unlike other WordCount examples you might have seen before that operate on bounded data, the WordCount demo behaves slightly differently because it is designed to operate on an infinite, unbounded stream of data. Similar to the bounded variant, it is a stateful algorithm that tracks and updates the counts of words. However, since it must assume potentially unbounded input data, it will periodically output its current state and results while continuing to process more data, because it cannot know when it has processed "all" the input data.

As the first step, we will prepare input data to a Kafka topic, which will subsequently be processed by a Kafka Streams application.

> echo -e "all streams lead to kafka\nhello kafka streams\njoin kafka summit" > file-input.txt

Next, we send this input data to the input topic named streams-file-input using the console producer, which reads the data from STDIN line by line and publishes each line as a separate Kafka message with a null key and the value encoded as a string (in practice, stream data will likely be flowing continuously into Kafka while the application is up and running):

> bin/kafka-topics.sh --create \
            --zookeeper localhost:2181 \
            --replication-factor 1 \
            --partitions 1 \
            --topic streams-file-input
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic streams-file-input < file-input.txt

We can now run the WordCount demo application to process the input data:

> bin/kafka-run-class.sh org.apache.kafka.streams.examples.wordcount.WordCountDemo

The demo application will read from the input topic streams-file-input, perform the computations of the WordCount algorithm on each of the read messages, and continuously write its current results to the output topic streams-wordcount-output. Hence there won't be any STDOUT output except log entries, as the results are written back into Kafka. The demo will run for a few seconds and then, unlike typical stream processing applications, terminate automatically.

We can now inspect the output of the WordCount demo application by reading from its output topic:

> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
            --topic streams-wordcount-output \
            --from-beginning \
            --formatter kafka.tools.DefaultMessageFormatter \
            --property print.key=true \
            --property print.value=true \
            --property key.deserializer=org.apache.kafka.common.serialization.StringDeserializer \
            --property value.deserializer=org.apache.kafka.common.serialization.LongDeserializer

with the following output data being printed to the console:

all     1
lead    1
to      1
hello   1
streams 2
join    1
kafka   3
summit  1

Here, the first column is the Kafka message key in java.lang.String format, and the second column is the message value in java.lang.Long format. Note that the output is actually a continuous stream of updates, where each data record (i.e. each line in the output above) is an updated count of a single word, that is, of a record key such as "kafka". For multiple records with the same key, each later record is an update of the previous one.
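The same output can also be read programmatically instead of with the console consumer. The sketch below is illustrative only (the group id and offset-reset setting are assumptions); the important part is that it configures a String deserializer for the keys and a Long deserializer for the values, matching the formats described above:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class WordCountOutputReader {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "wordcount-output-reader");   // hypothetical group id
        props.put("auto.offset.reset", "earliest");          // start from the beginning of the topic
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.LongDeserializer");

        KafkaConsumer<String, Long> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("streams-wordcount-output"));
        while (true) {
            ConsumerRecords<String, Long> records = consumer.poll(100);
            for (ConsumerRecord<String, Long> record : records) {
                System.out.println(record.key() + "\t" + record.value());   // word and its current count
            }
        }
    }
}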

The two diagrams below illustrate what is essentially happening behind the scenes. The first column shows the evolution of the current state of the KTable<String, Long> that is counting word occurrences. The second column shows the change records that result from state updates to the KTable and that are being sent to the output Kafka topic streams-wordcount-output.



First the text line “all streams lead to kafka” is being processed. The KTable is being built up as each new word results in a new table entry (highlighted with a green background), and a corresponding change record is sent to the downstream KStream.

When the second text line “hello kafka streams” is processed, we observe, for the first time, that existing entries in the KTable are being updated (here: for the words “kafka” and “streams”). And again, change records are sent to the output topic.

And so on (we skip the illustration of how the third line is processed). This is why the output topic has the contents shown above: it contains the full record of changes.

Looking beyond the scope of this concrete example, what Kafka Streams is doing here is leveraging the duality between a table and a changelog stream (here: table = the KTable, changelog stream = the downstream KStream): you can publish every change of the table to a stream, and if you consume the entire changelog stream from beginning to end, you can reconstruct the contents of the table.
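To make this duality concrete, the small sketch below (plain Java, not Kafka code) replays the change records produced while the first two text lines were processed and rebuilds the table in an ordinary HashMap; because a later record for a key overwrites the earlier one, the map ends up with the same contents as the KTable:

import java.util.HashMap;
import java.util.Map;

public class ChangelogReplay {
    public static void main(String[] args) {
        // change records emitted while "all streams lead to kafka" and "hello kafka streams" were processed
        String[][] changelog = {
            {"all", "1"}, {"streams", "1"}, {"lead", "1"}, {"to", "1"}, {"kafka", "1"},
            {"hello", "1"}, {"kafka", "2"}, {"streams", "2"}
        };

        Map<String, Long> table = new HashMap<>();
        for (String[] record : changelog) {
            table.put(record[0], Long.valueOf(record[1]));   // the latest update per key wins
        }
        // prints the reconstructed word counts after the first two lines
        System.out.println(table);
    }
}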

Now you can write more input messages to the streams-file-input topic and observe additional messages added to the streams-wordcount-output topic, reflecting updated word counts (e.g., using the console producer and the console consumer, as described above).

You can stop the console consumer via Ctrl-C.

