I'm experimenting with processing live video feeds using TensorFlow.js.
I'm using something like the following, based on other examples I've seen:
```js
while (true) {
  const results = await model.classify(videoElem);
  console.log(results);
  await tf.nextFrame();
}
```
I'm trying to understand exactly what `tf.nextFrame()` does.
My understanding is that when I run `model.classify(videoElem)`, it takes a single frame from the video stream and processes it with the model.
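For reference, my setup looks something like this (MobileNet here is just a stand-in; the question applies to any model with a `classify`-style API):

```js
import * as mobilenet from '@tensorflow-models/mobilenet';

// Load the model once, then classify the current video frame.
const model = await mobilenet.load();
// results is an array like [{ className: '...', probability: 0.92 }, ...]
const results = await model.classify(videoElem);
```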
I imagine there are two main scenarios:
- The video produces frames faster than JavaScript can classify them.
- JavaScript classifies frames faster than the video produces them.
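To get a feel for which scenario I'm actually in, I've been timing the loop with something like this (the `performance.now()` timing is my own addition for illustration, not part of the examples I copied from):

```js
// Inside an async function, with model and videoElem set up as above.
while (true) {
  const t0 = performance.now();
  const results = await model.classify(videoElem);
  // Compare this duration with the video's frame interval (~33 ms at 30 fps).
  console.log(`classify took ${(performance.now() - t0).toFixed(1)} ms`, results);
  await tf.nextFrame();
}
```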
Is the `tf.nextFrame()` method something to handle scenario #2, so that a single frame is never processed twice?
The documentation describes it by saying:

> Returns a promise that resolve when a requestAnimationFrame has completed. This is simply a sugar method so that users can do the following: `await tf.nextFrame();`
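If I'm reading that correctly, `tf.nextFrame()` would be roughly sugar for wrapping `requestAnimationFrame` in a promise, something like this (my own sketch based on the docs, not the actual library source):

```js
// Hypothetical equivalent of tf.nextFrame(), per my reading of the docs:
function nextFrame() {
  return new Promise(resolve => requestAnimationFrame(() => resolve()));
}
```

If that's right, the loop would pause until the browser's next repaint rather than tracking video frames specifically, which is part of what confuses me.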
I'm having trouble connecting that to frame handling. Can anyone confirm whether what I described is what `tf.nextFrame()` does? If my interpretation is wrong, then what exactly does `tf.nextFrame()` do?