I have a neural network (NN) in deeplearning4j, trained on MNIST to recognise digits in an image. Since the MNIST set contains 28x28 pixel images, I am able to predict the class of a 28x28 image with this NN.
Now I am trying to figure out how to apply this NN to a picture of a whole handwritten page, i.e. how to convert the text in that image to actual text (OCR). Specifically: what kind of preprocessing is needed, how do I find the parts of the image where the text is, and how do I derive smaller pieces of that image so the NN can be applied to each piece individually?
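My rough understanding of the segmentation step is: binarize the page, group the dark "ink" pixels into connected components, and rescale each component's bounding box to 28x28 so it matches the network's input. Here is a sketch of what I mean in plain Java (the array-based image representation, the fixed threshold, and the nearest-neighbour resize are just placeholder assumptions on my part, not deeplearning4j API):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

/** Naive segmentation sketch: binarize a grayscale page, find connected
 *  components of dark pixels, and resize each component's bounding box
 *  to 28x28 so it could be fed to an MNIST-style network. */
public class DigitSegmenter {

    /** 1 where the pixel is darker than the threshold (ink), else 0. */
    static int[][] binarize(int[][] gray, int threshold) {
        int h = gray.length, w = gray[0].length;
        int[][] bin = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                bin[y][x] = gray[y][x] < threshold ? 1 : 0;
        return bin;
    }

    /** Bounding box of one connected component. */
    static class Box {
        int minX, minY, maxX, maxY;
        Box(int x, int y) { minX = maxX = x; minY = maxY = y; }
        void grow(int x, int y) {
            minX = Math.min(minX, x); maxX = Math.max(maxX, x);
            minY = Math.min(minY, y); maxY = Math.max(maxY, y);
        }
    }

    /** 4-connected flood fill; returns bounding boxes of all ink blobs
     *  in row-major scan order (top-left blob first). */
    static List<Box> findComponents(int[][] bin) {
        int h = bin.length, w = bin[0].length;
        boolean[][] seen = new boolean[h][w];
        List<Box> boxes = new ArrayList<>();
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (bin[y][x] == 1 && !seen[y][x]) {
                    Box box = new Box(x, y);
                    Deque<int[]> stack = new ArrayDeque<>();
                    stack.push(new int[]{x, y});
                    seen[y][x] = true;
                    while (!stack.isEmpty()) {
                        int[] p = stack.pop();
                        box.grow(p[0], p[1]);
                        int[][] nbrs = {{p[0] + 1, p[1]}, {p[0] - 1, p[1]},
                                        {p[0], p[1] + 1}, {p[0], p[1] - 1}};
                        for (int[] n : nbrs) {
                            if (n[0] >= 0 && n[0] < w && n[1] >= 0 && n[1] < h
                                    && bin[n[1]][n[0]] == 1 && !seen[n[1]][n[0]]) {
                                seen[n[1]][n[0]] = true;
                                stack.push(n);
                            }
                        }
                    }
                    boxes.add(box);
                }
            }
        }
        return boxes;
    }

    /** Nearest-neighbour resize of one component's crop to 28x28. */
    static int[][] cropTo28x28(int[][] bin, Box b) {
        int srcW = b.maxX - b.minX + 1, srcH = b.maxY - b.minY + 1;
        int[][] out = new int[28][28];
        for (int y = 0; y < 28; y++)
            for (int x = 0; x < 28; x++)
                out[y][x] = bin[b.minY + y * srcH / 28][b.minX + x * srcW / 28];
        return out;
    }
}
```

Is this the right general idea, or is there standard preprocessing (deskewing, noise removal, line/word segmentation) that should happen before or instead of this naive connected-component approach?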