
I have the following snippet:

groupedStream.windowedBy(SessionWindows.with(Duration.ofSeconds(config.joinWindowSeconds)).grace(Duration.ZERO));

KTable<byte[], byte[]> mergedTable =
        groupedStream
            .reduce((aggregateValue, newValue) -> {
              try {
                Map<String, String> recentMap = MAPPER.readValue(new String(newValue), HashMap.class);
                Map<String, String> aggregateMap = MAPPER.readValue(new String(aggregateValue), HashMap.class);
                aggregateMap.forEach(recentMap::putIfAbsent);
                newValue = MAPPER.writeValueAsString(recentMap).getBytes();
              } catch (Exception e) {
                LOG.warn("Couldn't aggregate key grouped stream\n", e);
              }
              return newValue;
            }, Materialized.with(Serdes.ByteArray(), Serdes.ByteArray()))
            .suppress(Suppressed.untilWindowCloses(unbounded()));

I am getting the following compilation exception:

Error:(164, 63) java: incompatible types: org.apache.kafka.streams.kstream.Suppressed<org.apache.kafka.streams.kstream.Windowed> cannot be converted to org.apache.kafka.streams.kstream.Suppressed<? super byte[]>

I know that if I inline the windowedBy like so:

        KTable<Windowed<byte[]>, byte[]> mergedTable =
                groupedStream
                        .windowedBy(SessionWindows.with(Duration.ofSeconds(config.joinWindowSeconds)).grace(Duration.ZERO))
                        .reduce((aggregateValue, newValue) -> {
                            try {
                                Map<String, String> recentMap = MAPPER.readValue(new String(newValue), HashMap.class);
                                Map<String, String> aggregateMap = MAPPER.readValue(new String(aggregateValue), HashMap.class);
                                aggregateMap.forEach(recentMap::putIfAbsent);
                                newValue = MAPPER.writeValueAsString(recentMap).getBytes();
                            } catch (Exception e) {
                                LOG.warn("Couldn't aggregate key grouped stream\n", e);
                            }
                            return newValue;
                        }, Materialized.with(Serdes.ByteArray(), Serdes.ByteArray()))
                        .suppress(Suppressed.untilWindowCloses(unbounded()));

It works, but I am not sure how to separate the windowedBy call from the rest of the chain.

QuirkyBit

1 Answer


There are two issues here.

The first issue is that KGroupedStream.windowedBy(SessionWindows) returns an instance of SessionWindowedKStream<K, V>, and in your first example

groupedStream.windowedBy(SessionWindows.with(Duration.ofSeconds(config.joinWindowSeconds)).grace(Duration.ZERO));

you are not capturing the returned SessionWindowedKStream in a variable. windowedBy does not modify groupedStream in place, so the subsequent reduce still runs on the un-windowed KGroupedStream.
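This is a general Java point rather than a Kafka one: fluent-style methods like windowedBy return a new object, so discarding the return value is a no-op as far as later code is concerned. A Kafka-free stand-in (the Wrapper class below is purely hypothetical, just to illustrate the pattern) shows why the variable matters:

```java
public class CaptureSketch {
    // Hypothetical stand-in for KGroupedStream: its method returns a new
    // object instead of mutating the receiver, just like windowedBy does.
    static final class Wrapper {
        final String label;
        Wrapper(String label) { this.label = label; }
        Wrapper windowed() { return new Wrapper(label + "+windowed"); }
    }

    public static void main(String[] args) {
        Wrapper stream = new Wrapper("grouped");
        stream.windowed();                    // result discarded: stream is unchanged
        Wrapper windowed = stream.windowed(); // result captured: this is the windowed object
        System.out.println(stream.label);     // grouped
        System.out.println(windowed.label);   // grouped+windowed
    }
}
```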

The second issue is that in your first code example you have

KTable<byte[], byte[]> mergedTable

when it should be

KTable<Windowed<byte[]>, byte[]> mergedTable

as it is in your second example. Suppressed.untilWindowCloses produces a Suppressed<Windowed>, so it can only be applied to a table whose keys are windowed, which is exactly what the compiler error is telling you.

If you change the code to

SessionWindowedKStream<byte[], byte[]> sessionWindowedKStream = groupedStream.windowedBy(SessionWindows.with(Duration.ofSeconds(config.joinWindowSeconds)).grace(Duration.ZERO));

KTable<Windowed<byte[]>, byte[]> mergedTable = 
      sessionWindowedKStream
                .reduce((aggregateValue, newValue) -> {...

Then it should compile fine.
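As a side note, the merge inside your reducer (aggregateMap.forEach(recentMap::putIfAbsent)) can be sanity-checked in isolation. The sketch below is only an illustration of that merge semantics, with Kafka and Jackson left out: entries from the newer map win on conflict, and entries that exist only in the aggregate are carried over.

```java
import java.util.HashMap;
import java.util.Map;

public class MergeSketch {
    // Same semantics as the reducer body: start from the recent map, then
    // putIfAbsent only fills in keys the recent map doesn't already have.
    static Map<String, String> merge(Map<String, String> aggregateMap,
                                     Map<String, String> recentMap) {
        Map<String, String> result = new HashMap<>(recentMap);
        aggregateMap.forEach(result::putIfAbsent);
        return result;
    }

    public static void main(String[] args) {
        Map<String, String> aggregate = new HashMap<>();
        aggregate.put("a", "old");
        aggregate.put("b", "kept");

        Map<String, String> recent = new HashMap<>();
        recent.put("a", "new");

        Map<String, String> merged = merge(aggregate, recent);
        System.out.println(merged.get("a")); // new  -> recent value wins
        System.out.println(merged.get("b")); // kept -> carried over from aggregate
    }
}
```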

HTH Bill

bbejeck
  • The solution works. I wasn't sure if maybe the serdes were wrong, because when I tried it previously, before trying the inlined solution, I was getting the following problem for some reason: https://stackoverflow.com/questions/61883482/kafka-streams-api-session-window-exception – QuirkyBit May 20 '20 at 14:28
  • I read your comment, I'm not sure, but I'll take a look into it. – bbejeck May 21 '20 at 19:07
  • I was also speculating that Kafka Streams normally stores its state in "a file somewhere", and that once the right kind of WindowedStore (or whichever SerDe was misconfigured) is set up and persisted, it no longer throws the error. – QuirkyBit May 21 '20 at 22:44