
I am trying to use BodyPix/TensorFlow to blur the background of my webcam feed. I'm following this guide: https://github.com/vinooniv/video-bg-blur . It obviously works for them, since they have a live example.

Here's my code:

  ngOnInit(): void {
    const videoElement = document.getElementById('face-blur-video') as HTMLVideoElement;
    const canvas = document.getElementById('blur-canvas') as HTMLCanvasElement;
    videoElement.onplaying = () => {
      canvas.height = videoElement.height;
      canvas.width = videoElement.width;
    };
    const context = canvas.getContext('2d');
    context.canvas.height = 320;
    context.canvas.height = 480;

    navigator.mediaDevices.getUserMedia({ video: true, audio: true })
      .then(stream => {
        videoElement.srcObject = stream;
        videoElement.play();
        const options: ModelConfig = {
          multiplier: .75,
          outputStride: 16,
          quantBytes: 4,
          architecture: 'MobileNetV1'
        };
        tfjs.getBackend();
        BodyPix.load(options).then(bp => this.perform(bp));
      });
  }

  async perform(net: BodyPix.BodyPix) {
    while (true) {
      const videoElement = document.getElementById('face-blur-video') as HTMLVideoElement;
      const canvas = document.getElementById('blur-canvas') as HTMLCanvasElement;
      const segmentation = await net.segmentPerson(videoElement);
      const backgroundBlurAmount = 6;
      const edgeBlurAmount = 2;
      const flipHorizontal = true;
      BodyPix.drawBokehEffect(canvas, videoElement, segmentation, backgroundBlurAmount, edgeBlurAmount, flipHorizontal);
    }
  }

However, when the application runs I get the following error:

core.js:6162 ERROR Error: Uncaught (in promise): InvalidStateError: Failed to execute 'drawImage' on 'CanvasRenderingContext2D': The image argument is a canvas element with a width or height of 0.
Error: Failed to execute 'drawImage' on 'CanvasRenderingContext2D': The image argument is a canvas element with a width or height of 0.
    at drawWithCompositing (body-pix.esm.js:17)
    at Module.drawBokehEffect (body-pix.esm.js:17)
    at FaceBlurComponent.<anonymous> (face-blur.component.ts:56)
    at Generator.next (<anonymous>)
    at fulfilled (tslib.es6.js:73)
    at ZoneDelegate.invoke (zone-evergreen.js:372)
    at Object.onInvoke (core.js:28510)
    at ZoneDelegate.invoke (zone-evergreen.js:371)
    at Zone.run (zone-evergreen.js:134)
    at zone-evergreen.js:1276
    at resolvePromise (zone-evergreen.js:1213)
    at zone-evergreen.js:1120
    at zone-evergreen.js:1136
    at ZoneDelegate.invoke (zone-evergreen.js:372)
    at Object.onInvoke (core.js:28510)
    at ZoneDelegate.invoke (zone-evergreen.js:371)
    at Zone.run (zone-evergreen.js:134)
    at zone-evergreen.js:1276
    at ZoneDelegate.invokeTask (zone-evergreen.js:406)
    at Object.onInvokeTask (core.js:28497)

I have set the width and height in every way possible. Why is TensorFlow detecting the canvas height/width as zero?

Michael S

3 Answers


So I ended up looking at the Google TensorFlow/BodyPix demo and made some changes. None of them should have made a difference individually, but whatever the fix was, everything now works. Here's my working code:

import { Component, OnInit } from '@angular/core';
import * as BodyPix from '@tensorflow-models/body-pix';
import * as tfjs from '@tensorflow/tfjs';
import { ModelConfig } from '@tensorflow-models/body-pix/dist/body_pix_model';

@Component({
  selector: 'opentok-face-blur',
  templateUrl: './face-blur.component.html',
  styleUrls: ['./face-blur.component.scss']
})
export class FaceBlurComponent implements OnInit {

  
  videoElement: HTMLVideoElement;
  canvas: HTMLCanvasElement;
  context: CanvasRenderingContext2D;
  constructor() { }

  ngOnInit(): void {

    this.videoElement = document.getElementById('face-blur-video') as HTMLVideoElement;
    this.canvas = document.getElementById('blur-canvas') as HTMLCanvasElement;
    this.context = this.canvas.getContext('2d');
    this.context.fillStyle = 'black';
    this.context.fillRect(0, 0, this.canvas.width, this.canvas.height);

    this.bindPage();

  }


  async setupMedia() {
    const stream = await navigator.mediaDevices.getUserMedia({video: true, audio: false});
    this.videoElement.srcObject = stream;

    this.videoElement.onloadedmetadata = () => {
      this.videoElement.width = this.videoElement.videoWidth;
      this.videoElement.height = this.videoElement.videoHeight;
    };
    await this.videoElement.play();
    return this.videoElement;
  }

  async bindPage() {
    tfjs.getBackend();
    const net = await BodyPix.load({
      multiplier: .75,
      outputStride: 16,
      quantBytes: 4,
      architecture: 'MobileNetV1'
    });
    await this.setupMedia();
    this.segmentBodyInRealTime(net);
  }

  async segmentBodyInRealTime(net: BodyPix.BodyPix) {
    const segmentation = await net.segmentPerson(this.videoElement);
    BodyPix.drawBokehEffect(this.canvas, this.videoElement, segmentation, 6, 2, true);
    this.segmentBodyInRealTime(net);
  }
}

I basically made a bunch of methods async and then called the draw effect recursively. I don't know why this worked, but it did.
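One plausible explanation (my guess, not confirmed by the answer above): the working version waits for loadedmetadata and copies videoWidth/videoHeight onto the element before segmentation starts, so BodyPix never sees a 0x0 source. A minimal sketch of that guard; the helper names are hypothetical, not part of the BodyPix API:

```typescript
// Pure guard (hypothetical name): is this a size BodyPix can safely draw from?
function hasRenderableSize(width: number, height: number): boolean {
  return width > 0 && height > 0;
}

// Hypothetical helper: resolve only once the <video> reports a real
// intrinsic size, either immediately or after 'loadedmetadata' fires.
function waitForVideoSize(video: HTMLVideoElement): Promise<void> {
  return new Promise<void>(resolve => {
    if (hasRenderableSize(video.videoWidth, video.videoHeight)) {
      resolve();
      return;
    }
    video.addEventListener('loadedmetadata', () => resolve(), { once: true });
  });
}
```

Awaiting waitForVideoSize(this.videoElement) before the first segmentPerson call would make the "width or height of 0" state unreachable, which is consistent with the error going away here.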

Michael S

In my case, the video I was passing in didn't have a width and height; setting them fixed it:

  <video
    id="video"
    controls
    loop
    autoplay
    muted
    width="250"
    height="200"
  ></video>
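Instead of hard-coding the attributes, you could also copy the stream's intrinsic size onto the element once metadata arrives. A sketch, using the 250x200 values above as a fallback; the helper name is hypothetical:

```typescript
// Hypothetical helper: prefer the stream's intrinsic size, falling back to
// the hard-coded 250x200 from the markup when it is not yet known.
function pickSize(
  videoWidth: number,
  videoHeight: number,
  fallbackWidth = 250,
  fallbackHeight = 200
): [number, number] {
  return videoWidth > 0 && videoHeight > 0
    ? [videoWidth, videoHeight]
    : [fallbackWidth, fallbackHeight];
}

// Usage once metadata is available:
// const video = document.getElementById('video') as HTMLVideoElement;
// video.onloadedmetadata = () => {
//   [video.width, video.height] = pickSize(video.videoWidth, video.videoHeight);
// };
```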
zayn
  • Please provide an explanation to your code - it is very hard to understand something when it isn't explained. – ethry Jul 02 '22 at 21:26