Every optical flow implementation I have seen in OpenCV treats the video as an array of frames and then runs optical flow on each image. That typically involves slicing the image into NxN blocks and searching for a velocity vector per block.
Although the motion vectors in a video codec can be misleading (they do not necessarily encode real motion), why don't we use them to check which blocks are likely to contain motion and then run optical flow only on those blocks? Shouldn't that speed up the process?
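To make the idea concrete, here is a minimal sketch of what I mean, in pure NumPy. The codec motion vectors are simulated here as a boolean "active blocks" mask (extracting real ones would need something like FFmpeg's `export_mvs` flag, which is outside this sketch); exhaustive block matching is then run only on the blocks that mask flags as moving, instead of on every block:

```python
import numpy as np

BLOCK = 16   # block size, matching a typical codec macroblock
SEARCH = 4   # search radius (pixels) for block matching

def block_matching_flow(prev, curr, active_blocks):
    """Exhaustive-search block matching, run only on flagged blocks.

    active_blocks: bool array of shape (H//BLOCK, W//BLOCK); in the
    proposed scheme it would be derived from codec motion vectors,
    here it is simply supplied as input.
    """
    h, w = prev.shape
    flow = np.zeros((h // BLOCK, w // BLOCK, 2), dtype=np.int32)
    for by in range(h // BLOCK):
        for bx in range(w // BLOCK):
            if not active_blocks[by, bx]:
                continue  # skip blocks the codec considers static
            y0, x0 = by * BLOCK, bx * BLOCK
            ref = prev[y0:y0 + BLOCK, x0:x0 + BLOCK].astype(np.int64)
            best_sad, best_v = None, (0, 0)
            for dy in range(-SEARCH, SEARCH + 1):
                for dx in range(-SEARCH, SEARCH + 1):
                    y, x = y0 + dy, x0 + dx
                    if y < 0 or x < 0 or y + BLOCK > h or x + BLOCK > w:
                        continue
                    cand = curr[y:y + BLOCK, x:x + BLOCK].astype(np.int64)
                    sad = np.abs(ref - cand).sum()  # sum of absolute differences
                    if best_sad is None or sad < best_sad:
                        best_sad, best_v = sad, (dy, dx)
            flow[by, bx] = best_v
    return flow

# Toy demo: a 16x16 bright square moves 2 px to the right between frames.
prev = np.zeros((64, 64), dtype=np.uint8)
curr = np.zeros((64, 64), dtype=np.uint8)
prev[16:32, 16:32] = 255
curr[16:32, 18:34] = 255

# Pretend the codec's motion vectors flagged only block (1, 1) as moving.
active = np.zeros((4, 4), dtype=bool)
active[1, 1] = True

flow = block_matching_flow(prev, curr, active)
print(flow[1, 1])  # (dy, dx) -> [0 2]
```

With the mask in place, the inner search loop runs on 1 block instead of 16, which is where I would expect the speedup to come from.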