I am writing a Spark Streaming job in Java which consumes input records from Kafka. The records are available in a JavaDStream as a custom Java object. A sample record is:
TimeSeriesData: {tenant_id='581dd636b5e2ca009328b42b', asset_id='5820870be4b082f136653884', bucket='2016', parameter_id='58218d81e4b082f13665388b', timestamp=Mon Aug 22 14:50:01 IST 2016, window=null, value='11.30168'}
Now I want to aggregate this data by minute, hour, day, and week of the "timestamp" field.
My question is: how do I aggregate JavaDStream records based on such a window? Sample code would be helpful.
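One common approach (for event-time bucketing on the record's own "timestamp" field, as opposed to Spark's processing-time `window()` operators) is to key each record by its timestamp floored to the bucket boundary and then reduce by key. Below is a minimal sketch of that bucketing logic; the `TimeBuckets` class, the getter names, and the use of `reduceByKey(Double::sum)` are assumptions for illustration, not from the original question:

```java
import java.util.Date;

public class TimeBuckets {
    // Bucket sizes in milliseconds; buckets are aligned to the Unix epoch (UTC),
    // so week buckets start on Thursdays and hour buckets on UTC hour boundaries.
    public static final long MINUTE = 60_000L;
    public static final long HOUR   = 60 * MINUTE;
    public static final long DAY    = 24 * HOUR;
    public static final long WEEK   = 7 * DAY;

    /** Floor the timestamp to the start of its bucket, in epoch millis. */
    public static long bucketStart(Date ts, long bucketMillis) {
        return (ts.getTime() / bucketMillis) * bucketMillis;
    }

    // In the streaming job, each record could be keyed by
    // (parameter_id, bucketStart(timestamp, HOUR)) via mapToPair,
    // then aggregated with reduceByKey, e.g. (getter names assumed):
    //
    //   JavaPairDStream<Tuple2<String, Long>, Double> hourly =
    //       stream.mapToPair(r -> new Tuple2<>(
    //               new Tuple2<>(r.getParameterId(),
    //                            bucketStart(r.getTimestamp(), HOUR)),
    //               r.getValue()))
    //             .reduceByKey(Double::sum);

    public static void main(String[] args) {
        // Mon Aug 22 14:50:01 IST 2016 == 1471857601000 epoch millis
        Date ts = new Date(1471857601000L);
        System.out.println(bucketStart(ts, HOUR)); // prints 1471856400000
    }
}
```

The same `bucketStart` call with `MINUTE`, `DAY`, or `WEEK` yields the other granularities; note that epoch-aligned buckets will not match IST calendar boundaries (IST is UTC+5:30), so a timezone-aware truncation via `java.time` would be needed for local-calendar days and weeks.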