
I'm using TensorFlow.js in React Native and I'm getting correct predictions from my model, but it takes a long time to produce results. For example, with a custom model I created in Google's Teachable Machine, the .dataSync() call takes roughly a whole second to return results. This causes a visible lag in the camera feed, and I want to get results instantly. This is my code below:

<TensorCamera
  style={styles.camera}
  flashMode={Camera.Constants.FlashMode.off}
  type={Camera.Constants.Type.back}
  resizeWidth={224}
  resizeHeight={224}
  resizeDepth={3}
  onReady={handleCameraStream}
  autorender={true}
/>
//
const handleCameraStream = (imageAsTensors) => {
    // Loop infinitely, making a prediction every N frames.
    const loop = async () => {
      if (model !== null) {
        if (frameCount % makePredictionsEveryNFrames === 0) {
          const imageTensor = imageAsTensors.next().value;
          await getPrediction(imageTensor);
        }
      }

      frameCount = (frameCount + 1) % makePredictionsEveryNFrames;
      requestAnimationFrameId = requestAnimationFrame(loop);
    };
    loop();
  };
//
const getPrediction = async (tensor) => {
    if (!tensor) {
      console.log("Tensor not found!");
      return;
    }

    // Resize to the model's 224x224 input and normalize to [-1, 1].
    const imageData2 = tensor.resizeBilinear([224, 224]);
    const normalized = imageData2.cast("float32").div(127.5).sub(1);
    const final = tf.expandDims(normalized, 0);

    console.time("predict");
    const prediction = model.predict(final).dataSync();
    console.timeEnd("predict");
    console.log("Predictions:", prediction);
};

I've heard about using .data() instead of .dataSync(), but I don't know how to implement .data() in my current code. Please help.


1 Answer


predict is what takes time, and that is really up to your model. Maybe it can run faster on a different backend (no idea which backend you're using; the default for browsers would be webgl), but in reality it is what it is without rearchitecting the model itself.
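For reference, a minimal sketch of inspecting and switching the backend, assuming the @tensorflow/tfjs-react-native package (which registers the GPU-accelerated rn-webgl backend):

import * as tf from "@tensorflow/tfjs";
import "@tensorflow/tfjs-react-native";

const setupBackend = async () => {
  await tf.ready();
  console.log("Active backend:", tf.getBackend());
  // setBackend resolves to false if the requested backend can't initialize
  const ok = await tf.setBackend("rn-webgl");
  if (!ok) {
    await tf.setBackend("cpu"); // pure-JS fallback: always available, but slower
  }
};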

dataSync simply downloads the results from wherever the tensors live (e.g. GPU VRAM) into your variable in JS.

Yes, you could use data instead, which is an async call, but the difference is a couple of ms at best; it's not going to speed up model execution at all.
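For what it's worth, a minimal sketch of that swap in a getPrediction shaped like the one in the question; the only change is awaiting .data() instead of blocking on .dataSync():

const getPrediction = async (tensor) => {
  if (!tensor) return;
  const imageData2 = tensor.resizeBilinear([224, 224]);
  const normalized = imageData2.cast("float32").div(127.5).sub(1);
  const final = tf.expandDims(normalized, 0);
  const output = model.predict(final);
  const prediction = await output.data(); // non-blocking download, vs. dataSync()
  console.log("Predictions:", prediction);
};

This keeps the JS thread from stalling while the result values download, but the predict call itself still dominates the total time.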

Btw, you're not releasing tensors anywhere, so your application has some serious memory leaks.
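A minimal sketch of what that cleanup could look like in the question's getPrediction, using tf.tidy for the synchronous preprocessing (tidy can't wrap async code, so the camera frame and the model output are disposed explicitly):

const getPrediction = async (tensor) => {
  if (!tensor) return;
  // tidy disposes every intermediate tensor created inside the callback,
  // keeping only the returned one
  const output = tf.tidy(() => {
    const resized = tensor.resizeBilinear([224, 224]);
    const normalized = resized.cast("float32").div(127.5).sub(1);
    return model.predict(tf.expandDims(normalized, 0));
  });
  const prediction = await output.data();
  // the frame came from outside tidy and the output was returned from it,
  // so both still need explicit release
  tf.dispose([tensor, output]);
  console.log("Predictions:", prediction);
};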

– Vladimir Mandic
  • My model is not even big, though. I'm currently using webgl as the backend, as the others were slower in comparison. Can you tell me where to release tensors like you mentioned? Can you edit my code above and show me where? Also, I'm using this in my React Native app, not a browser. How do I rearchitect the model as you mentioned above? I made a model with 4 classes to detect using Google's Teachable Machine platform; I can share my model.json file with you if you want, and my model.weights.bin file is around 2.2 MB. Please help, it's urgent. – Giga Chad Dec 31 '22 at 16:05
  • Regarding tensor deallocation, I've provided examples in different posts; editing your code is not what I'd do here. Regarding model rearchitecture, that is a massive topic, and it makes no sense to discuss it when your model is a simple one you haven't even designed; you just created it with Teachable Machine. And no wonder it's slow, as Teachable Machine produces anything but efficient models. – Vladimir Mandic Dec 31 '22 at 18:26
  • Can you please tell me what else I can do to fix this issue? How do I build a model that is fast/efficient? – Giga Chad Jan 02 '23 at 10:39
  • How to create an efficient model is a science, not something that can be covered in a quick post or comment. Start with some easy examples. – Vladimir Mandic Jan 02 '23 at 11:49