
Can I upsert Avro-serialized data in Kafka?

I want to pick records from a topic and then filter the flights. For example, if two records have the same flight number, we need to keep only the latest one, based on the timestamp field defined in the Avro schema.

How can I do this? I want to remove duplicates that share the same flight number.

{ "FlightNumber" : 1, "OriginAirport" : "BOM", "DestinationAirport" : "DEL", "OriginDate" : "2020-07-26", "OriginTime" : "11:00", "DestinationDate" : "2020-07-26", "DestinationTime" :  "11:00:00", "FlightStatus" : "Scheduled", "GateIn" : "IN", "GateOut" : "Out", "RecordDateTime" : "latest one" }
{ "FlightNumber" : 2, "OriginAirport" : "BOM", "DestinationAirport" : "DEL", "OriginDate" : "2020-07-26", "OriginTime" : "11:00", "DestinationDate" : "2020-07-26", "DestinationTime" :  "11:00:00", "FlightStatus" : "Delayed", "GateIn" : "IN", "GateOut" : "Out", "RecordDateTime" : "qwer" }
{ "FlightNumber" : 3, "OriginAirport" : "BOM", "DestinationAirport" : "DEL", "OriginDate" : "2020-07-26", "OriginTime" : "11:00", "DestinationDate" : "2020-07-26", "DestinationTime" :  "11:00:00", "FlightStatus" : "Scheduled", "GateIn" : "IN", "GateOut" : "Out", "RecordDateTime" : "qwer" }
{ "FlightNumber" : 4, "OriginAirport" : "BOM", "DestinationAirport" : "DEL", "OriginDate" : "2020-07-26", "OriginTime" : "11:00", "DestinationDate" : "2020-07-26", "DestinationTime" :  "11:00:00", "FlightStatus" : "Scheduled", "GateIn" : "IN", "GateOut" : "Out", "RecordDateTime" : "qwer" }
{ "FlightNumber" : 5, "OriginAirport" : "BOM", "DestinationAirport" : "DEL", "OriginDate" : "2020-07-26", "OriginTime" : "11:00", "DestinationDate" : "2020-07-26", "DestinationTime" :  "11:00:00", "FlightStatus" : "Ontime", "GateIn" : "IN", "GateOut" : "Out", "RecordDateTime" : "qwer" }
{ "FlightNumber" : 1, "OriginAirport" : "BOM", "DestinationAirport" : "DEL", "OriginDate" : "2020-07-26", "OriginTime" : "11:00", "DestinationDate" : "2020-07-26", "DestinationTime" :  "11:00:00", "FlightStatus" : "Scheduled", "GateIn" : "IN", "GateOut" : "Out", "RecordDateTime" : "oldsomething random" }

The output stream should look like this:

{ "FlightNumber" : 1, "OriginAirport" : "BOM", "DestinationAirport" : "DEL", "OriginDate" : "2020-07-26", "OriginTime" : "11:00", "DestinationDate" : "2020-07-26", "DestinationTime" :  "11:00:00", "FlightStatus" : "Delayed", "GateIn" : "IN", "GateOut" : "Out", "RecordDateTime" : "latest one" }
{ "FlightNumber" : 2, "OriginAirport" : "BOM", "DestinationAirport" : "DEL", "OriginDate" : "2020-07-26", "OriginTime" : "11:00", "DestinationDate" : "2020-07-26", "DestinationTime" :  "11:00:00", "FlightStatus" : "Delayed", "GateIn" : "IN", "GateOut" : "Out", "RecordDateTime" : "qwer" }
{ "FlightNumber" : 3, "OriginAirport" : "BOM", "DestinationAirport" : "DEL", "OriginDate" : "2020-07-26", "OriginTime" : "11:00", "DestinationDate" : "2020-07-26", "DestinationTime" :  "11:00:00", "FlightStatus" : "Scheduled", "GateIn" : "IN", "GateOut" : "Out", "RecordDateTime" : "qwer" }
{ "FlightNumber" : 4, "OriginAirport" : "BOM", "DestinationAirport" : "DEL", "OriginDate" : "2020-07-26", "OriginTime" : "11:00", "DestinationDate" : "2020-07-26", "DestinationTime" :  "11:00:00", "FlightStatus" : "Scheduled", "GateIn" : "IN", "GateOut" : "Out", "RecordDateTime" : "qwer" }
{ "FlightNumber" : 5, "OriginAirport" : "BOM", "DestinationAirport" : "DEL", "OriginDate" : "2020-07-26", "OriginTime" : "11:00", "DestinationDate" : "2020-07-26", "DestinationTime" :  "11:00:00", "FlightStatus" : "Ontime", "GateIn" : "IN", "GateOut" : "Out", "RecordDateTime" : "qwer" }
Here is my attempt so far (it only counts records per status; I don't know how to keep the latest record instead):

builder.stream(inputTopic, Consumed.with(Serdes.String(), flightDataSerde))
        // Re-key by flight status
        .map((k, v) -> new KeyValue<>((String) v.getFlightStatus(), (Integer) v.getFlightNumber()))
        .groupByKey(Grouped.with(Serdes.String(), Serdes.Integer()))
        // Count records per status
        .count()
        // Write the counts to the output topic
        .toStream()
        .to(outputTopic, Produced.with(Serdes.String(), Serdes.Long()));

Avro schema:

  "namespace": "io.confluent.developer.avro",
  "type": "record",
  "name": "FlightData",
  "fields": [
    {"name": "FlightNumber", "type": "int"},
    {"name": "OriginAirport", "type": "string"},
    {"name": "DestinationAirport", "type": "string"},
        {"name": "OriginDate", "type": "string"},
        {"name": "OriginTime", "type": "string"},
        {"name": "DestinationDate", "type": "string"},
        {"name": "DestinationTime", "type": "string"},
        {"name": "FlightStatus", "type": "string"},

        {"name": "GateOut", "type": "string"},
        {"name": "GateIn", "type": "string"},
        {"name": "RecordDateTime", "type": "string"}
  ]
}
  • Your code is just counting keys and not using Avro as output... Perhaps show your attempts at what you're actually looking to do? – OneCricketeer Jul 27 '20 at 13:57
  • I have added the input and expected output to the question – subrahmanyam b Jul 28 '20 at 06:48
  • Have you tried windowing? https://kafka.apache.org/20/documentation/streams/developer-guide/dsl-api.html#windowing A stream cannot remove duplicates, but a table will only show the latest event, by key – OneCricketeer Jul 28 '20 at 14:42

2 Answers


The main problem you need to address is how long you want to wait before you emit a result record. When you get the first record, you don't know whether you can emit it right away or whether there might be a duplicate later (with a larger or smaller timestamp).

Thus, you need to define some window and use an aggregation that keeps only one record per key and per window. In this aggregation, you can compare the timestamps and only keep the desired record.

After the aggregation, you can use suppress() to only emit a single final result record when the window closes.
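
For example, here is a minimal sketch of that pattern, assuming the FlightData class and flightDataSerde from the question, that RecordDateTime is a timestamp string that sorts lexicographically (e.g. ISO-8601; the placeholder strings in the sample data would not), and a 10-minute window plus 1-minute grace period you would tune for your data:

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.Suppressed;
import org.apache.kafka.streams.kstream.TimeWindows;

builder.stream(inputTopic, Consumed.with(Serdes.String(), flightDataSerde))
        // Re-key by flight number so duplicates land in the same group
        .map((k, v) -> new KeyValue<>(String.valueOf(v.getFlightNumber()), v))
        .groupByKey(Grouped.with(Serdes.String(), flightDataSerde))
        // Collect duplicates that arrive within the same window
        .windowedBy(TimeWindows.of(Duration.ofMinutes(10)).grace(Duration.ofMinutes(1)))
        // Keep only the record with the newest RecordDateTime per key and window
        .reduce((current, next) ->
                next.getRecordDateTime().toString()
                        .compareTo(current.getRecordDateTime().toString()) > 0 ? next : current)
        // Emit a single final result per key once the window closes
        .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded()))
        .toStream()
        // Unwrap the windowed key back to the plain flight number
        .map((windowedKey, v) -> new KeyValue<>(windowedKey.key(), v))
        .to(outputTopic, Produced.with(Serdes.String(), flightDataSerde));

The grace period bounds how long out-of-order duplicates are still accepted; suppress() then holds results back until the window closes, so each flight number is emitted exactly once per window.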

Matthias J. Sax

by considering time stamp as mentioned in Avro schema

This is what the TimestampExtractor interface is for. Alternatively, you could adjust your upstream producer to make that field the actual record timestamp.
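
A hedged sketch of such an extractor, assuming RecordDateTime holds a parseable ISO-8601 instant (the placeholder strings in the sample data would not parse; use whatever format you actually produce). The class name is made up for illustration:

import java.time.Instant;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.streams.processor.TimestampExtractor;

public class FlightTimestampExtractor implements TimestampExtractor {
    @Override
    public long extract(ConsumerRecord<Object, Object> record, long partitionTime) {
        FlightData flight = (FlightData) record.value();
        try {
            // Use the business timestamp from the Avro payload
            return Instant.parse(flight.getRecordDateTime().toString()).toEpochMilli();
        } catch (Exception e) {
            // Fall back to the record's own timestamp on unparsable data
            return record.timestamp();
        }
    }
}

Register it via StreamsConfig.DEFAULT_TIMESTAMP_EXTRACTOR_CLASS_CONFIG so all source topics use it.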

two records have same flight number. We need to pick only latest one

This is the default behavior for ordered records of the same key arriving at the source topic: a table keeps only the most recent value per key. You'll want logic to handle late-arriving data, though, and skip any record whose timestamp is older than the one already stored. That is easier to do with the Processor API than with the Streams DSL, and you need it anyway to check against the table contents.
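
A sketch of that idea with a state store, assuming the topic is keyed by flight number and that RecordDateTime sorts lexicographically as a timestamp; the store name "latest-flights" is made up for illustration:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.ValueTransformerWithKey;
import org.apache.kafka.streams.processor.ProcessorContext;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

builder.addStateStore(Stores.keyValueStoreBuilder(
        Stores.persistentKeyValueStore("latest-flights"),
        Serdes.String(), flightDataSerde));

builder.stream(inputTopic, Consumed.with(Serdes.String(), flightDataSerde))
        .transformValues(() -> new ValueTransformerWithKey<String, FlightData, FlightData>() {
            private KeyValueStore<String, FlightData> store;

            @Override
            @SuppressWarnings("unchecked")
            public void init(ProcessorContext context) {
                store = (KeyValueStore<String, FlightData>) context.getStateStore("latest-flights");
            }

            @Override
            public FlightData transform(String key, FlightData value) {
                FlightData previous = store.get(key);
                // Drop late records whose timestamp is not newer than the stored one
                if (previous != null && value.getRecordDateTime().toString()
                        .compareTo(previous.getRecordDateTime().toString()) <= 0) {
                    return null;
                }
                store.put(key, value);
                return value;
            }

            @Override
            public void close() {}
        }, "latest-flights")
        // Nulls mark dropped duplicates; remove them before producing
        .filter((k, v) -> v != null)
        .to(outputTopic, Produced.with(Serdes.String(), flightDataSerde));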

OneCricketeer