
I'm having trouble using the new JS SDK from Affectiva (http://developer.affectiva.com/v3_1/javascript/analyze-frames/), specifically in Frame Detector mode. I had no trouble getting the CameraFeed version up & running; they even have a nice example on JSFiddle (https://jsfiddle.net/affectiva/opyh5e8d/show/). But the Frame Detector mode just gives me hundreds of "runtime errors" from the Web Worker.

<body class='session'>
  <div class='col-md-8' id='affdex_elements' style='width:680px;height:480px;'>
    <video autoplay id='video'></video>
    <canvas id='canvas'></canvas>
  </div>
  <div id='results' style='word-wrap:break-word;'></div>
  <div id='logs'></div>

  <script src="https://download.affectiva.com/js/3.1/affdex.js"></script>
  <script>
    var width = 640;
    var height = 480;

    var faceMode = affdex.FaceDetectorMode.LARGE_FACES;
    var detector = new affdex.FrameDetector(faceMode);

  detector.addEventListener("onInitializeSuccess", function() {
    console.log('Detector reports initialized.');

    // Start with first capture...
    captureImage();
  });

  detector.addEventListener("onImageResultsSuccess", function (faces, image, timestamp) {
    console.log( faces );
    captureImage();
  });

  detector.addEventListener("onImageResultsFailure", function (image, timestamp, err_detail) {
    console.log( err_detail );
    captureImage();
  });

  detector.detectAllExpressions();
  detector.detectAllEmotions();
  detector.detectAllEmojis();
  detector.detectAllAppearance();

  detector.start();

  var v = document.getElementById('video');
  var c = document.getElementById('canvas');
  var t = c.getContext('2d');
  c.width = width;
  c.height = height;  

  navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia ||  
    navigator.mozGetUserMedia || navigator.msGetUserMedia || navigator.oGetUserMedia;

  if (navigator.getUserMedia) {       
      navigator.getUserMedia({video: true}, handleVideo, videoError);
  }
  function handleVideo(stream) { v.src = window.URL.createObjectURL(stream); }
  function videoError(e) { console.log( e ); }

  function captureImage() {
    console.log('Capturing...');
    t.clearRect( 0, 0, c.width, c.height );
    t.drawImage( v, 0, 0, width, height );
    var imgData = t.getImageData(0, 0, c.width, c.height);
    var currentTimeStamp = ( new Date() ).getTime() / 1000;
    detector.process( imgData, currentTimeStamp );
  }
  </script>
</body>

I've removed anything non-essential just to get to a trivial working example. Again, I have no problem running the CameraFeed version of this. It's just this one that's not working. Am I missing something silly? Documentation is a little light...

Nuby

1 Answer


Internally, the timestamp gets converted into an integer for storage, so I think you might be running into an integer overflow. Can you cache the initial timestamp and subtract it from the subsequent timestamps, so that the first timestamp passed into process() is 0 and the subsequent values increase from there?
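For example, something along these lines (a rough, untested sketch of what I mean, reusing the detector, canvas, and video variables from your snippet):

    var startTimestamp = null;

    function captureImage() {
      console.log('Capturing...');
      t.clearRect(0, 0, c.width, c.height);
      t.drawImage(v, 0, 0, width, height);
      var imgData = t.getImageData(0, 0, c.width, c.height);

      // Cache the first timestamp and pass relative times to process(),
      // so the first frame is at 0 and later frames grow from there.
      var now = (new Date()).getTime() / 1000;
      if (startTimestamp === null) {
        startTimestamp = now;
      }
      detector.process(imgData, now - startTimestamp);
    }

That keeps the values small enough to avoid the overflow while preserving the increasing order the detector expects.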

ahamino
  • Good thought. That's how they do it in their code; I thought it was purely for internal consistency rather than an overflow issue. I'll try it and report back. – Nuby Aug 15 '16 at 12:13
  • I can't believe that worked! You have no idea how long I've been trying to trace that issue... I would have never figured it out without you. Thanks! – Nuby Aug 15 '16 at 12:14