
I am trying to create a Kafka Streams (KStream) application in Eclipse using Java. Right now I am referring to the word count program available on the internet for KStreams and modifying it.

I want the data that I read from the input topic to be written to a file instead of to another output topic.

But when I try to print the KStream/KTable to a local file, I get only the following entry in the output file:

org.apache.kafka.streams.kstream.internals.KStreamImpl@4c203ea1

How do I redirect the output of the KStream to a file?

Below is the code:

package KStreamDemo.kafkatest;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.KeyValueMapper;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.ValueMapper;

import java.util.Arrays;
import java.util.Locale;
import java.util.Properties;
import java.util.concurrent.CountDownLatch;

public class TemperatureDemo {
public static void main(String[] args) throws Exception {
    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streams-wordcount");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "34.73.184.104:9092");
    props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);
    props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
    props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
    System.out.println("#1###################################################################################################################################################################################");
    // setting offset reset to earliest so that we can re-run the demo code with the same pre-loaded data
    // Note: To re-run the demo, you need to use the offset reset tool:
    // https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams+Application+Reset+Tool
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

    StreamsBuilder builder = new StreamsBuilder();
    System.out.println("#2###################################################################################################################################################################################");
    KStream<String, String> source = builder.stream("iot-temperature");
    System.out.println("#5###################################################################################################################################################################################");
    KTable<String, Long> counts = source
        .flatMapValues(new ValueMapper<String, Iterable<String>>() {
            @Override
            public Iterable<String> apply(String value) {
                return Arrays.asList(value.toLowerCase(Locale.getDefault()).split(" "));
            }
        })
        .groupBy(new KeyValueMapper<String, String, String>() {
            @Override
            public String apply(String key, String value) {
                return value;
            }
        })
        .count();
    System.out.println("#3###################################################################################################################################################################################");
    System.out.println("OUTPUT:"+ counts);
    System.out.println("#4###################################################################################################################################################################################");
    // need to override value serde to Long type
    counts.toStream().to("iot-temperature-max", Produced.with(Serdes.String(), Serdes.Long()));

    final KafkaStreams streams = new KafkaStreams(builder.build(), props);
    final CountDownLatch latch = new CountDownLatch(1);

    // attach shutdown handler to catch control-c
    Runtime.getRuntime().addShutdownHook(new Thread("streams-wordcount-shutdown-hook") {
        @Override
        public void run() {
            streams.close();
            latch.countDown();
        }
    });

    try {
        streams.start();
        latch.await();
    } catch (Throwable e) {
        System.exit(1);
    }
    System.exit(0);
}

}

Why do you want to write it to a file? Usually applications would consume from the topic directly. A file introduces all sorts of problems that you don't want, and Kafka topics can be consumed in a variety of ways, including native APIs, a REST API, and so on. – Robin Moffatt Mar 08 '19 at 16:20

1 Answer


This is not correct:

System.out.println("OUTPUT:"+ counts);

You would need to call counts.toStream().foreach(...) (a KTable has no foreach of its own) and write each record out to a file yourself, as sketched below.

See Print Kafka Stream Input out to console? (just update it to write to a file instead).
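
A minimal sketch of that approach, reusing the counts table from the question (the /tmp/wordcounts.txt path and the "word,count" line format are placeholder assumptions of mine, not part of the original answer):

import java.io.FileWriter;
import java.io.IOException;

    // ... inside main(), in place of the println:
    counts.toStream().foreach((word, count) -> {
        // append one "word,count" line per record; opening a FileWriter per record
        // keeps the sketch short, but a shared BufferedWriter would be more efficient
        try (FileWriter writer = new FileWriter("/tmp/wordcounts.txt", true)) {
            writer.write(word + "," + count + System.lineSeparator());
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    });

Note that foreach is a terminal operation: records consumed this way are not forwarded anywhere else in the topology.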


However, it is probably better to write the stream out to a topic and then use Kafka Connect to write that topic to a file. This is the more industry-standard pattern: Kafka Streams is encouraged to only move data between topics within Kafka, not to integrate with external systems (or filesystems).

Edit connect-file-sink.properties with the topic information you want, then run:

bin/connect-standalone.sh config/connect-standalone.properties config/connect-file-sink.properties
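
For reference, the stock config/connect-file-sink.properties that ships with Kafka looks like the following (the file and topics values are the quickstart defaults; point topics at your own topic):

name=local-file-sink
connector.class=FileStreamSink
tasks.max=1
file=test.sink.txt
topics=connect-test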
Thanks. However, my current use case is to split the input JSON data by keys and write it to different files based on the keys. I would prefer not to write a custom Connect sink, but would rather implement it via the KStreams code. – dijeah Mar 25 '19 at 08:13