
I am using Scala and Flink. I receive CPU-utilization data from Kafka and can read it without problems. Since I don't want to persist every record in the database, I filter for the rows I need, e.g. keeping only records where cpu_utilization is greater than 10%. To do this I convert the DataStream to a Table, assigning it a schema in the same step with tableEnv.fromDataStream(datastream).as(columnNamesAsArray.mkString(Constants.COMMA)). I then apply the filter query and convert the result back to a DataStream. In this last step (converting back to a DataStream) the schema is lost: when I inspect each row in debug mode, its fieldByName attribute is null. How can I prevent that loss?
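Here is a minimal, self-contained sketch of the pipeline. The CpuReading case class, the column names, and the inline source are stand-ins for my actual job, which reads from Kafka; I am assuming the Flink 1.11+ Scala expression DSL ($"..." syntax) here:

```scala
import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api._
import org.apache.flink.table.api.bridge.scala.StreamTableEnvironment
import org.apache.flink.types.Row

object CpuFilterJob {

  // Placeholder for my actual Kafka payload type
  case class CpuReading(host: String, cpuUtilization: Double)

  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment
    val tableEnv = StreamTableEnvironment.create(env)

    // Stand-in source; the real stream comes from a Kafka consumer
    val readings: DataStream[CpuReading] = env.fromElements(
      CpuReading("host-a", 7.5),
      CpuReading("host-b", 42.0)
    )

    // Convert to a Table and name the columns (equivalent to my
    // .as(columnNamesAsArray.mkString(Constants.COMMA)) call)
    val table: Table = tableEnv
      .fromDataStream(readings)
      .as("host", "cpuUtilization")

    // Keep only rows above the 10% threshold
    val filtered: Table = table.filter($"cpuUtilization" > 10)

    // Convert back to a DataStream of Row; this is the step where the
    // individual Row objects no longer expose their field names
    val result: DataStream[Row] = tableEnv.toAppendStream[Row](filtered)
    result.print()

    env.execute("cpu-utilization-filter")
  }
}
```

Running this, the filtering itself works as expected; it is only the Row objects in the final DataStream that show fieldByName as null when inspected in the debugger.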

