
I am trying to replicate Machine Learning Inference using AWS Lambda and Amazon EFS. I was able to deploy the project; however, it was not possible to run inference with the machine learning model because the model file was not found. I checked CloudWatch and got the following output:

[ERROR] FileNotFoundError: Missing /mnt/ml/models/craft_mlt_25k.pth and downloads disabled
Traceback (most recent call last):
  File "/var/task/app.py", line 23, in lambda_handler
    model_cache[languages_key] = easyocr.Reader(language_list, model_storage_directory=model_dir, user_network_directory=network_dir, gpu=False, download_enabled=False)
  File "/var/lang/lib/python3.8/site-packages/easyocr/easyocr.py", line 88, in __init__
    detector_path = self.getDetectorPath(detect_network)
  File "/var/lang/lib/python3.8/site-packages/easyocr/easyocr.py", line 246, in getDetectorPath
    raise FileNotFoundError("Missing %s and downloads disabled" % detector_path)
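To see what is actually available inside the function's mount path, I added a temporary debug helper like the sketch below. The /mnt/ml path is the one used by the project; the listing logic itself is only my own debugging addition, not part of the original code:

import os

def list_mount(path="/mnt/ml"):
    # Debug helper: print every file found under the EFS mount point
    if not os.path.exists(path):
        print(path + " does not exist inside the Lambda container")
        return
    for root, dirs, files in os.walk(path):
        for name in files:
            print(os.path.join(root, name))

# called at the top of lambda_handler while debugging:
# list_mount()

In my case it printed nothing, which matches the FileNotFoundError above.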

Then I noticed that the directory that was supposed to store the models was never created in the S3 bucket.

The Dockerfile has the following command: RUN mkdir -p /mnt/ml, but this directory does not exist in my S3 bucket.

(screenshot of the S3 bucket contents)

Is it possible to create the directories and upload the EasyOCR models manually? If I do, will I have to modify the original code?
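If manual upload is the way to go, the sketch below is roughly what I had in mind: put the .pth files in S3 myself and have the function copy them onto the EFS mount before easyocr.Reader is created, so download_enabled=False can stay as it is. The bucket name, key layout, and the recognition-model file name are only placeholders; only craft_mlt_25k.pth comes from the error above.

import os
import boto3

# Placeholder names: bucket and key layout are assumptions for illustration only
MODEL_BUCKET = "my-easyocr-models"
MODEL_KEYS = ["models/craft_mlt_25k.pth", "models/latin_g2.pth"]
MODEL_DIR = "/mnt/ml/models"

s3 = boto3.client("s3")

def ensure_models():
    # Copy the EasyOCR .pth files from S3 onto the EFS mount if they are missing
    os.makedirs(MODEL_DIR, exist_ok=True)
    for key in MODEL_KEYS:
        target = os.path.join(MODEL_DIR, os.path.basename(key))
        if not os.path.exists(target):
            s3.download_file(MODEL_BUCKET, key, target)

Would something like this work, or does the project expect the models to land on EFS another way?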

