I am a newbie to Spark Streaming. My job's input comes from Kafka; I process that input and, after processing, filter the final DStream to obtain a DStream that I call the "history" stream. I then cache this DStream and, in the next batch, union it with my input DStream and perform the same processing as before. However, I observed that at the time of the union the "history" DStream is empty. Could you suggest a way to resolve this problem?
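Below is a minimal sketch of the setup I am describing, so the flow is concrete. The function names `process` and `isHistory` are just stand-ins for my actual processing and filtering logic, and I use a socket stream here instead of my real Kafka direct stream to keep the example self-contained:

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.dstream.DStream

object HistoryUnionSketch {
  // Hypothetical stand-ins for the real processing and filtering logic.
  def process(record: String): String = record.trim
  def isHistory(record: String): Boolean = record.startsWith("H")

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("history-union-sketch").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10))

    // In the real job this input comes from Kafka (KafkaUtils.createDirectStream);
    // a socket stream stands in for it here.
    val input: DStream[String] = ssc.socketTextStream("localhost", 9999)

    // First pass: process the batch and keep the filtered "history" stream, cached.
    val processed: DStream[String] = input.map(process)
    val history: DStream[String] = processed.filter(isHistory).cache()

    // Next step: union the "history" stream with the input and process again.
    // This is where I observe that "history" contributes no data.
    val combined: DStream[String] = input.union(history).map(process)
    combined.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
```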
- Could you add the code to the question to make it clear? Also, next to "what you are doing", clarify "what you are trying to achieve". – maasg Jun 08 '17 at 07:39
- Hey, I have posted the same question with the whole code this time and given a possible explanation of what I want to achieve: https://stackoverflow.com/questions/44434675/not-able-to-persist-the-dstream-for-use-in-further-future-use – JSR29 Jun 08 '17 at 11:39
- Possible duplicate of [Not able to persist the DStream for use in next batch](https://stackoverflow.com/questions/44434675/not-able-to-persist-the-dstream-for-use-in-next-batch) – maasg Jun 09 '17 at 21:08