
I am trying to read a semiconductor wafer ID using Tesseract OCR in Python, but it is not very successful. Also, the -c tessedit_char_whitelist=0123456789XL config doesn't seem to work. The chip ID is read out as: po4>1.

My original image (before any processing) is attached.

Part of my code is below:

# run OCR on the chip ID image
optCode = pytesseract.image_to_string("c:/opencv/ID_fine_out22.jpg", lang="eng",
                                      config=' --psm 6 -c tessedit_char_whitelist=0123456789XL')
# print chip ID
print("ChipID:", optCode)

Any ideas to improve the OCR? I also want to read the digits only.
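
For the digits-only requirement, one fallback is to filter the OCR result in Python after the call; this is only a minimal sketch, assuming the optCode string returned by the code above:

import re

# keep only characters from the intended whitelist (digits plus X and L)
chip_id = re.sub(r"[^0-9XL]", "", optCode)
print("ChipID (filtered):", chip_id)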

I am also thinking about ML as one approach, since I have a large number of sample images.

Mercury Platinum
Ablet
  • Please take your time to [format your post](https://stackoverflow.com/help/formatting) in a more readable manner. – Felix Mar 12 '19 at 15:45
  • Hi Albert, I'm working on the same problem; Tesseract cannot recognize the SEMI OCR font. How did you solve it? – DL-Newbie Apr 20 '23 at 20:42

1 Answer


For myself, I wrote a quick-and-dirty script using pytesseract and a few techniques from the OpenCV library. You can choose different params and view the results. For example, I have an image named softserve.png:

[image: softserve.png]

Suppose you have ocr.py with the following code:

# import the necessary packages
import argparse
import cv2
import numpy as np
import os
from PIL import Image
import pytesseract


# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
                help="path to input image to be OCR'd")
ap.add_argument("-b", "--blur", type=str, default=None,
                help="type of preprocessing to be done")
ap.add_argument("-t", "--thresh", type=str, default=None,
                help="type of preprocessing to be done")
ap.add_argument("-r", "--resize", type=float, default=1.0,
                help="type of preprocessing to be done")
ap.add_argument("-m", "--morph", type=str, default=None,
                help="type of preprocessing to be done")
args = vars(ap.parse_args())
# load the example image and convert it to grayscale
image = cv2.imread(args["image"])
# optionally resize the image by the requested factor (default 1.0 = no resize)
if args["resize"] != 1:
    image = cv2.resize(image, None,
                       fx=args["resize"], fy=args["resize"],
                       interpolation=cv2.INTER_CUBIC)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
kernel = np.ones((5, 5), np.uint8)
# check to see if blurring should be done to remove noise
if args["blur"] == "medianblur":
    gray = cv2.medianBlur(gray, 3)
if args["blur"] == "avgblur":
    gray = cv2.blur(gray, (5, 5))
if args["blur"] == "gaussblur":
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
if args["blur"] == "filter":
    gray = cv2.bilateralFilter(gray, 9, 75, 75)
if args["blur"] == "filter2d":
    # averaging filter via a normalized 5x5 kernel
    gray = cv2.filter2D(gray, -1, np.ones((5, 5), np.float32) / 25)

# check to see if we should apply thresholding to preprocess the
# image
if args["thresh"] == "thresh":
    gray = cv2.threshold(gray, 0, 255,
                         cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]
if args["thresh"] == "thresh1":
    gray = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY)[1]
if args["thresh"] == "thresh2":
    gray = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV)[1]
if args["thresh"] == "thresh3":
    gray = cv2.threshold(gray, 0, 255, cv2.THRESH_TRUNC)[1]
if args["thresh"] == "thresh4":
    gray = cv2.threshold(gray, 0, 255, cv2.THRESH_TOZERO)[1]
if args["thresh"] == "thresh5":
    gray = cv2.threshold(gray, 0, 255, cv2.THRESH_TOZERO_INV)[1]
if args["thresh"] == "thresh6":
    gray = cv2.adaptiveThreshold(gray, 255,
                                 cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 115, 1)
if args["thresh"] == "thresh7":
    gray = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 115, 1)
if args["thresh"] == "thresh8":
    gray = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 11, 2)
if args["thresh"] == "thresh9":
    gray = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]
if args["thresh"] == "thresh10":
    # gray = cv2.GaussianBlur(gray, (5, 5), 0)
    gray = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

if args["morph"] == "erosion":
    gray = cv2.erode(gray, kernel, iterations=1)
if args["morph"] == "dilation":
    gray = cv2.dilate(gray, kernel, iterations=1)
if args["morph"] == "opening":
    gray = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
if args["morph"] == "closing":
    gray = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)

# write the grayscale image to disk as a temporary file so we can
# apply OCR to it
filename = "{}.png".format(os.getpid())
cv2.imwrite(filename, gray)
# load the image as a PIL/Pillow image, apply OCR, and then delete
# the temporary file
text = pytesseract.image_to_string(Image.open(filename))
os.remove(filename)
print(text)
with open("output.py", "w") as text_file:
    text_file.write(text)

# show the output images
cv2.imshow("Image", image)
cv2.imshow("Output", gray)
cv2.waitKey(0)

If I simply run the usual OCR without any preprocessing (i.e., plain pytesseract.image_to_string()):

python3 ocr.py --image softserve.png

I get this text:

uray ['Amir', 'Barry', 'Chales', ‘Dao']

‘amir’ rss
tee)

print(2)

It's a very bad result, isn't it?

But after playing with resize and thresh, you can get much nicer output:

python3 ocr.py --image softserve.png --thresh thresh6 --resize 2.675

And you can see, in the two opened windows, how the image looks before OCR:

[image: the two output windows]

Output:

names1 = ['Amir', ‘Barry’, ‘Chales', ‘Dao']

if ‘amir' in names1:

@ print(1)
else: «=
@ print(2)

You can also apply morph and blur. You can read more about blurring, thresholding and morphological transformations in the OpenCV docs. I hope you will find this information useful in your work.
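
For example, here is a minimal sketch (the parameter values are only assumptions, not tuned for any particular image) that chains a resize, a Gaussian blur, an adaptive threshold and a morphological closing before OCR, roughly what the script does with --resize 2 --blur gaussblur --thresh thresh6 --morph closing:

import cv2
import numpy as np
import pytesseract
from PIL import Image

# load, upscale and convert to grayscale
image = cv2.imread("softserve.png")
image = cv2.resize(image, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_CUBIC)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Gaussian blur to suppress noise, then adaptive thresholding (like "thresh6")
gray = cv2.GaussianBlur(gray, (5, 5), 0)
gray = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv2.THRESH_BINARY, 115, 1)

# morphological closing to fill small gaps inside the characters
kernel = np.ones((3, 3), np.uint8)
gray = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)

# run OCR on the preprocessed image without writing a temporary file
print(pytesseract.image_to_string(Image.fromarray(gray)))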

Dmitriy Kisil