
I found examples/image_ocr.py, which seems to be for OCR. Hence it should be possible to give the model an image and receive text. However, I have no idea how to do so. How do I feed the model a new image? What kind of preprocessing is necessary?

What I did

Installing the dependencies:

  • Install cairocffi: sudo apt-get install python-cairocffi
  • Install editdistance: sudo -H pip install editdistance
  • Change train to return the model and save the trained model.
  • Run the script to train the model.

Now I have a model.h5. What's next?

See https://github.com/MartinThoma/algorithms/tree/master/ML/ocr/keras for my current code. I know how to load the model (see below), and this seems to work. The problem is that I don't know how to feed new scans of images containing text to the model.

Related side questions

  • What is CTC? Connectionist Temporal Classification?
  • Are there algorithms which reliably detect the rotation of a document?
  • Are there algorithms which reliably detect lines / text blocks / tables / images (hence make a reasonable segmentation)? I guess edge detection with smoothing and line-wise histograms already works reasonably well for that?

What I tried

#!/usr/bin/env python

from keras import backend as K
import keras
from keras.models import load_model
import os

from image_ocr import ctc_lambda_func, create_model, TextImageGenerator
from keras.layers import Lambda
from keras.utils.data_utils import get_file
import scipy.ndimage
import numpy

img_h = 64
img_w = 512
pool_size = 2
words_per_epoch = 16000
val_split = 0.2
val_words = int(words_per_epoch * (val_split))
if K.image_data_format() == 'channels_first':
    input_shape = (1, img_w, img_h)
else:
    input_shape = (img_w, img_h, 1)

fdir = os.path.dirname(get_file('wordlists.tgz',
                                origin='http://www.mythic-ai.com/datasets/wordlists.tgz', untar=True))

img_gen = TextImageGenerator(monogram_file=os.path.join(fdir, 'wordlist_mono_clean.txt'),
                             bigram_file=os.path.join(fdir, 'wordlist_bi_clean.txt'),
                             minibatch_size=32,
                             img_w=img_w,
                             img_h=img_h,
                             downsample_factor=(pool_size ** 2),
                             val_split=words_per_epoch - val_words
                             )
print("Input shape: {}".format(input_shape))
model, _, _ = create_model(input_shape, img_gen, pool_size, img_w, img_h)

model.load_weights("my_model.h5")

x = scipy.ndimage.imread('example.png', mode='L').transpose()
x = x.reshape(x.shape + (1,))

# Does not work
print(model.predict(x))

this gives

2017-07-05 22:07:58.695665: I tensorflow/core/common_runtime/gpu/gpu_device.cc:996] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX TITAN Black, pci bus id: 0000:01:00.0)
Traceback (most recent call last):
  File "eval_example.py", line 45, in <module>
    print(model.predict(x))
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 1567, in predict
    check_batch_axis=False)
  File "/usr/local/lib/python2.7/dist-packages/keras/engine/training.py", line 106, in _standardize_input_data
    'Found: array with shape ' + str(data.shape))
ValueError: The model expects 4 arrays, but only received one array. Found: array with shape (512, 64, 1)
Martin Thoma
  • How about using the function defined at line 478? You can use it as `prediction = test_func(input_data)`. Let me know if it helps; I can add a formal answer for this. You can also use `model.predict`, as it is used for this purpose only. – devil in the detail Jul 05 '17 at 09:04
  • I see you just edited the question. I will post the one I am currently writing and we can comment on it afterwards – DarkCygnus Jul 05 '17 at 20:13
  • @MartinThoma ok, I posted an answer explaining in detail what you can do. Also how to correctly obtain the classification of an input – DarkCygnus Jul 05 '17 at 20:51
  • @MartinThoma edited the question regarding the exception you get... I wonder why the downvote as it is a thorough answer – DarkCygnus Jul 05 '17 at 21:01
  • @devilinthedetail Nice! I think this might be the way to go. At least I get a matrix of shape `(1, 128, 28)` from that back. Open questions are still (1) which size could images have? (2) If I have a scanned document (e.g. 2000px x 1000px) how could this be applied? (3) What exactly does each of the dimensions which the model gives me stand for? How do I get the most likely hypothesis of the content of the image from that? – Martin Thoma Jul 05 '17 at 21:22
  • @MartinThoma These should be straightforward to see. My comments are: 1) I think you are asking about the input image, so it will be of size (1,img_w,img_h). 2) The model can't take this scanned image as input, as your network takes images of (img_w,img_h) as input. You have to resize the input image to make it work with this model, or train a model with a different image size. 3) As I can see at line 457, the output size is `img_gen.get_output_size()`. Hence the output is `28` in size. You are getting `(1,128,28)` for this reason only. – devil in the detail Jul 07 '17 at 08:36
  • @MartinThoma It is not clear where you got the model from, and your script doesn't look complete to me, hence it is difficult to comment on the output size. I feel that `128` is the sequence size and 1 is the batch size. You can train your own model, as the example provides all required functions for training. Let me know if you need more details; I will clarify further or post an answer. – devil in the detail Jul 07 '17 at 08:46

4 Answers


Well, I will try to answer everything you asked here:

As commented in the OCR code, Keras doesn't support losses with multiple parameters, so it calculates the NN loss in a lambda layer. What does this mean in this case?

The neural network may look confusing because it uses 4 inputs ([input_data, labels, input_length, label_length]) and loss_out as output. Besides input_data, everything else is information used only for calculating the loss; that means it is only used for training. What we want is something like line 468 of the original code:

Model(inputs=input_data, outputs=y_pred).summary()

which means "I have an image as input, please tell me what is written here". So how to achieve it?

1) Keep the original training code as it is, do the training normally;

2) After training, save this model Model(inputs=input_data, outputs=y_pred) in a .h5 file to be loaded wherever you want;
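
For step 2, a minimal sketch (assuming input_data and y_pred are the tensors defined in the training code of image_ocr.py, and a hypothetical file name) could look like this:

from keras.models import Model

# Wrap only the prediction sub-network (image in, softmax sequence out);
# the extra CTC-loss inputs are not needed for inference.
prediction_model = Model(inputs=input_data, outputs=y_pred)
prediction_model.save('prediction_model.h5')  # hypothetical file name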

3) Do the prediction: if you take a look at the code, the input image is inverted and translated, so you can use this code to make it easy:

from scipy.misc import imread, imresize
import numpy as np

# Use the width and height your network was trained with here.
height = 64
width = 512

def load_for_nn(img_file):
    # Load as grayscale, resize to the network's input size, and transpose
    # so the width axis comes first, as image_ocr.py does.
    image = imread(img_file, flatten=True)
    image = imresize(image, (height, width))
    image = image.T

    # Change 1 to however many images you want to predict at once.
    images = np.ones((1, width, height))
    images[0] = image
    images = images[:, :, :, np.newaxis]
    images /= 255

    return images

With the image loaded, let's do the prediction:

def predict_image(image_path): #insert the path of your image 
    image = load_for_nn(image_path) #load from the snippet code
    raw_word = model.predict(image) #do the prediction with the neural network
    final_word = decode_output(raw_word)[0] #the output of our neural network is only numbers. Use decode_output from image_ocr.py to get the desirable string.
    return final_word
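
Note that decode_output is not part of the stock image_ocr.py, so here is a minimal best-path sketch using Keras' built-in CTC decoding. The alphabet is an assumption (the example uses lowercase letters plus space, with the last class reserved as the CTC blank):

import numpy as np
from keras import backend as K

alphabet = 'abcdefghijklmnopqrstuvwxyz '  # assumed character set

def decode_output(raw_word):
    # raw_word has shape (batch, timesteps, num_classes)
    input_len = np.ones(raw_word.shape[0]) * raw_word.shape[1]
    decoded, _ = K.ctc_decode(raw_word, input_length=input_len, greedy=True)
    sequences = K.get_value(decoded[0])  # best path, padded with -1
    return [''.join(alphabet[c] for c in seq if c != -1) for seq in sequences]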

This should be enough. In my experience, the images used in training are not good enough to make good predictions; if necessary, I will later release code using other datasets that improved my results.

Answering the related questions:

  • What is CTC? Connectionist Temporal Classification?

It is a technique used to improve sequence classification. The original paper shows that it improves results on recognizing what is said in audio; in this case it is a sequence of characters. The explanation is a bit tricky, but you can find a good one here.
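
The core idea: the network emits one prediction per time step, including a special blank symbol, and CTC collapses that path by merging repeated characters and then dropping the blanks. A toy illustration:

def ctc_collapse(path, blank='-'):
    # Merge consecutive repeats, then drop the blank symbol.
    out, prev = [], None
    for ch in path:
        if ch != prev and ch != blank:
            out.append(ch)
        prev = ch
    return ''.join(out)

print(ctc_collapse('hh-e-ll-lo'))  # -> 'hello'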

  • Are there algorithms which reliably detect the rotation of a document?

I am not sure, but you could take a look at attention mechanisms in neural networks. I don't have a good link at hand right now, but I know they could apply here.

  • Are there algorithms which reliably detect lines / text blocks / tables / images (hence make a reasonable segmentation)? I guess edge detection with smoothing and line-wise histograms already works reasonably well for that?

OpenCV implements Maximally Stable Extremal Regions (known as MSER). I really like the results of this algorithm; it is fast and was good enough for me when I needed it.
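
A rough sketch of how that looks with OpenCV's Python bindings (the file name is a placeholder, and detectRegions returns regions plus bounding boxes in recent OpenCV versions):

import cv2

img = cv2.imread('scan.png', cv2.IMREAD_GRAYSCALE)  # placeholder path
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(img)  # candidate (text) regions
print('found {} candidate regions'.format(len(regions)))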

As I said before, I will release code soon. When I do, I will edit this answer with the repository, but I believe the information here is enough to get the example running.

Syed Saad
Claudio

Now I have a model.h5. What's next?

First, I should note that model.h5 contains only the weights of your network; if you wish to save the architecture of your network as well, you should save it as JSON, like in this example:

model_json = model.to_json()
with open("model_arch.json", "w") as json_file:
    json_file.write(model_json)

Now, once you have your model and its weights you can load them on demand by doing the following:

from keras.models import model_from_json
from keras.optimizers import SGD

json_file = open('model_arch.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
# if you already have a loaded model and don't need to save, start from here
loaded_model.load_weights("model.h5")
# compile loaded model with certain specifications
sgd = SGD(lr=0.01)
loaded_model.compile(loss="binary_crossentropy", optimizer=sgd, metrics=["accuracy"])

Then, with that loaded_model, you can proceed to predict the classification of a certain input like this:

prediction = loaded_model.predict(some_input, batch_size=20, verbose=0)

Which will return the classification of that input.

About the Side Questions:

  1. CTC seems to be a term they define in the paper you referred to; quoting from it:

In what follows, we refer to the task of labelling unsegmented data sequences as temporal classification (Kadous, 2002), and to our use of RNNs for this purpose as connectionist temporal classification (CTC).

  2. To compensate for the rotation of a document, images, or similar, you could either generate more data from your current set by applying such transformations (take a look at this blog post that explains a way to do that), or you could use a Convolutional Neural Network approach, which is actually what the Keras example you are using does, as we can see from that git:

This example uses a convolutional stack followed by a recurrent stack and a CTC logloss function to perform optical character recognition of generated text images.

You can check this tutorial that is related to what you are doing and where they also explain more about Convolutional Neural Networks.

  3. Well, this one is a broad question, but to detect lines you could use the Hough Line Transform; Canny Edge Detection could also be a good option.
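
For instance, a rough line-detection sketch with OpenCV (the file name and thresholds are placeholders to tune for your scans):

import cv2
import numpy as np

img = cv2.imread('scan.png', cv2.IMREAD_GRAYSCALE)  # placeholder path
edges = cv2.Canny(img, 50, 150)
# Probabilistic Hough transform: returns line segments as (x1, y1, x2, y2).
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=100,
                        minLineLength=100, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        print('segment ({}, {}) -> ({}, {})'.format(x1, y1, x2, y2))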

Edit: The error you are getting is because the model expects more input arrays than the single one you provided; from the Keras docs we can see:

predict(self, x, batch_size=32, verbose=0)

Raises ValueError: In case of mismatch between the provided input data and the model's expectations, or in case a stateful model receives a number of samples that is not a multiple of the batch size.

DarkCygnus
  • `AttributeError: 'Model' object has no attribute 'predict_classes'` – Martin Thoma Jul 05 '17 at 21:02
  • I see, it is because `predict_classes` is for [Sequential](https://keras.io/models/sequential/) models only, for non sequential you should use `predict()`... editing the question. – DarkCygnus Jul 05 '17 at 21:07

Here, you created a model that needs 4 inputs:

model = Model(inputs=[input_data, labels, input_length, label_length], outputs=loss_out)

Your predict attempt, on the other hand, is loading just an image.
Hence the message: The model expects 4 arrays, but only received one array

From your code, the necessary inputs are:

input_data = Input(name='the_input', shape=input_shape, dtype='float32')
labels = Input(name='the_labels', shape=[img_gen.absolute_max_string_len],dtype='float32')
input_length = Input(name='input_length', shape=[1], dtype='int64')
label_length = Input(name='label_length', shape=[1], dtype='int64')

The original code and your training work because they're using the TextImageGenerator. This generator takes care of giving you the four necessary inputs for the model.

So, what you have to do is to predict using the generator. As you have the fit_generator() method for training with the generator, you also have the predict_generator() method for predicting with the generator.


Now, for a complete answer and solution, I'd have to study your generator and see how it works (which would take me some time). But now that you know what has to be done, you can probably figure it out.

You can either use the generator as it is and predict a potentially huge amount of data, or you can try to replicate a generator that yields just one or a few images with the necessary labels, lengths and label lengths.

Or maybe, if possible, just create the 3 remaining arrays manually, making sure they have the same shapes (except for the first dimension, which is the batch size) as the generator outputs (see the sketch below).

The one thing you must ensure, though, is this: have 4 arrays with the same shapes as the generator outputs, except for the first dimension.
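
A rough sketch of the manual option, assuming the shapes used by the Keras example (absolute_max_string_len must match your TextImageGenerator's setting, 16 is just a guess, and the input_length formula mirrors the generator's img_w // downsample_factor - 2):

import numpy as np

batch_size = 1
absolute_max_string_len = 16  # assumed; match your generator

x = np.zeros((batch_size, img_w, img_h, 1))  # your preprocessed image(s)
labels = np.zeros((batch_size, absolute_max_string_len))
input_length = np.ones((batch_size, 1)) * (img_w // (pool_size ** 2) - 2)
label_length = np.zeros((batch_size, 1))

# The full model's output is the CTC loss, so this mainly checks shapes;
# for readable text, predict with Model(inputs=input_data, outputs=y_pred).
out = model.predict([x, labels, input_length, label_length])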

Daniel Möller
  • I think this is what is confusing me. Should an OCR model not be able to output variable length sequences? What is the use of it if I have to know the length of the sequence before? – Martin Thoma Jul 07 '17 at 23:14
  • Well, this model was built this way. Take a look at what the generator is outputting and check whether it really knows the length or just defines a maximum length. --- Unfortunately, Keras is very strict in terms of data size; you must have a fixed data size to work with. You should probably combine this with other types of algorithms for good usage, or create a model that is able to locate and count the letters. – Daniel Möller Jul 08 '17 at 01:48
  • What I imagine (I haven't studied your model) is that there should be a maximum length (just to define the size of the tensors in Keras), and among the possible characters there is probably a null character that will fill the blanks in case of a shorter sequence. (At least that sounds like a healthy model to me.) – Daniel Möller Jul 08 '17 at 01:56

Hi, you can look into my GitHub repo for the same. You need to train the model on the type of images you want to OCR.

# USE GOOGLE COLAB
import matplotlib.pyplot as plt
import keras_ocr

# Pipeline() downloads pretrained detector and recognizer weights on first use.
images = [keras_ocr.tools.read("/content/sample_data/IMG_20200224_113657.jpg")]  # image path
pipeline = keras_ocr.pipeline.Pipeline()
# recognize() returns, per image, a list of (word, box) predictions.
prediction = pipeline.recognize(images)

x_max = 0
temp_str = ""
myfile = open("/content/sample_data/my_file.txt", "a+")  # text file path to save text

# Crude line grouping: while the right edge of each word's box keeps
# increasing, append words to the current line; once it decreases,
# flush the line to the file and start a new one.
for i in prediction[0]:
    x_max_local = i[1][:, 0].max()  # right-most x coordinate of the word's box
    if x_max_local > x_max:
        x_max = x_max_local
        temp_str = temp_str + " " + i[0].ljust(15)
    else:
        x_max = 0
        temp_str = temp_str + "\n"
        myfile.write(temp_str)
        print(temp_str)
        temp_str = ""
myfile.close()
Amir Imani