I'm learning TensorFlow and have built a model that predicts handwritten digits using the MNIST dataset. It works great when I test the model in my Python environment.
After training the model, I wanted to build a web application around it, but for some reason, when I make predictions in the web application, my imported model keeps returning the same incorrect prediction.
This is the code I use for the drawing portion of my application:
export default function Draw() {
  const [guess, setGuess] = useState();
  const canvasRef = useRef(null);
  const contextRef = useRef(null);
  const [isDrawing, setIsDrawing] = useState(false);

  useEffect(() => {
    const canvas = canvasRef.current;
    canvas.width = window.innerWidth / 2;
    canvas.height = window.innerHeight;
    canvas.style.width = `${window.innerWidth / 4}px`;
    canvas.style.height = `${window.innerHeight / 2}px`;

    const context = canvas.getContext('2d');
    context.scale(2, 2);
    context.lineCap = 'round';
    context.strokeStyle = 'black';
    context.lineWidth = 20;
    contextRef.current = context;
  }, []);

  useEffect(() => {}, [guess]);

  const startDrawing = ({ nativeEvent }) => {
    const { offsetX, offsetY } = nativeEvent;
    contextRef.current.beginPath();
    contextRef.current.moveTo(offsetX, offsetY);
    setIsDrawing(true);
  };

  const finishDrawing = () => {
    contextRef.current.closePath();
    setIsDrawing(false);
  };

  const draw = ({ nativeEvent }) => {
    if (!isDrawing) {
      return;
    }
    const { offsetX, offsetY } = nativeEvent;
    contextRef.current.lineTo(offsetX, offsetY);
    contextRef.current.stroke();
  };

  const reset = () => {
    const canvas = canvasRef.current;
    const context = canvas.getContext('2d');
    context.clearRect(0, 0, canvas.width, canvas.height);
  };

  return (
    <div>
      <div>
        <canvas
          className={styles.draw}
          onMouseDown={startDrawing}
          onMouseUp={finishDrawing}
          onMouseMove={draw}
          ref={canvasRef}
        />
      </div>
      <h2 className={styles.description}>
        I believe the number you drew was: {guess}
      </h2>
      <button onClick={evaluate}>Evaluate</button>
      <button onClick={reset}>Clear</button>
    </div>
  );
}
And this is the handler function that passes the drawn image to my model:
const evaluate = async () => {
  // Load the trained model
  const model = await tf.loadLayersModel(
    'https://raw.githubusercontent.com/Bonzaii1/NumericPrediction/main/Model/model.json'
  );

  // Downscale the drawing canvas to the 28x28 size the model expects
  const canvas = canvasRef.current;
  const resizedCanvas = document.createElement('canvas');
  resizedCanvas.width = 28;
  resizedCanvas.height = 28;
  const resizedContext = resizedCanvas.getContext('2d');
  resizedContext.drawImage(canvas, 0, 0, 28, 28);

  // For debugging: open the resized image so I can inspect it
  const image = resizedCanvas.toDataURL('image/png');
  window.location.href = image;

  // Convert the image to a [1, 28, 28] float32 tensor normalized to 0..1
  let tfTensor = tf.browser.fromPixels(resizedCanvas, 1);
  tfTensor = tfTensor.squeeze();
  tfTensor = tfTensor.div(255.0);
  tfTensor = tfTensor.expandDims(0);
  tfTensor = tfTensor.cast('float32');
  console.log(tfTensor.shape);

  // Run the image through the model and take the most likely digit
  const result = model.predict(tfTensor);
  const index = result.as1D().argMax().dataSync()[0];
  result.print();
  setGuess(index);
};
I've tried resizing the image several different ways, but nothing helps: the model keeps returning the same incorrect prediction across many different drawings.
My model expects a (28, 28) input, so I've manipulated the image to match, and when I download it, it looks fine in both size and appearance. So why does my model keep making the same prediction?