{"errorMessage": "/var/task/nvidia/cufft/lib/libcufft.so.10: failed to map segment 
from shared object", "errorType": "OSError", "requestId": 

I am not sure how to fix this. I am trying to import detoxify and deploy it via a container image on AWS Lambda. As far as I can tell, the library pulls in GPU-bound dependencies, which causes this error. How do I work around this on Lambda?

I have tried changing models, but the imports stay the same, so the issue persists. I was considering either using SageMaker or trying to work around it with Anaconda by installing NVIDIA-based packages.

1 Answer


You are getting this error because libcufft.so.10 is a GPU-bound library (NVIDIA's cuFFT, part of the CUDA toolkit), and Lambda functions do not have access to GPUs.

To work around this, you can try one of the following options:

  • Use a different library that does not require a GPU. Many natural language processing libraries can run this kind of classification on CPU only, so look for one that fits within Lambda's constraints (see the sketch after this list).
  • Use SageMaker. SageMaker is a managed AWS service that provides access to GPU instances. You can use it to deploy your detoxify model to a GPU-backed endpoint.
  • Use a different deployment target. You can also deploy your detoxify model to EC2 or ECS. Both can run on GPU instance types, so libcufft.so.10 will be able to load there.
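
For the first option, here is a minimal sketch of CPU-only toxicity scoring with the Hugging Face transformers pipeline rather than the detoxify package. It assumes the unitary/toxic-bert checkpoint (the model detoxify wraps) and a CPU-only build of torch are installed in the container image; the handler and event shape are illustrative, not from the question.

```python
from transformers import pipeline

# device=-1 forces CPU, so no CUDA shared objects (like libcufft) are loaded.
toxicity = pipeline(
    "text-classification",
    model="unitary/toxic-bert",
    device=-1,
)

def handler(event, context):
    # Hypothetical Lambda handler: expects {"text": "..."} in the event payload.
    result = toxicity(event["text"])
    return {"label": result[0]["label"], "score": float(result[0]["score"])}
```

Because nothing here imports CUDA libraries, the container image can stay within Lambda's constraints as long as a CPU-only torch wheel is installed.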

If you choose to use a different library, make sure it is compatible with your detoxify model; the library's documentation should tell you whether it can load the same checkpoints.

If you choose to use SageMaker, you will need to create a SageMaker endpoint and deploy your detoxify model to it. The SageMaker documentation (https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works.html) walks through creating an instance and deploying a model.
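
As a rough illustration of the SageMaker route, here is a sketch using the SageMaker Python SDK. It assumes the model has already been packaged as model.tar.gz and uploaded to S3, and that an inference.py entry point exists; the bucket name, script name, and instance type are placeholders, not values from the question.

```python
import sagemaker
from sagemaker.pytorch import PyTorchModel

# Works inside SageMaker notebooks/Studio; elsewhere, pass an IAM role ARN instead.
role = sagemaker.get_execution_role()

model = PyTorchModel(
    model_data="s3://your-bucket/detoxify/model.tar.gz",  # hypothetical S3 path
    role=role,
    entry_point="inference.py",      # hypothetical inference script
    framework_version="1.13",
    py_version="py39",
)

# Deploy to a real-time endpoint; a GPU instance type is shown, but CPU types also work.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",
)

print(predictor.predict({"text": "example input"}))
```

The endpoint then handles inference on managed hardware, and your Lambda function (if you still need one) can simply call the endpoint instead of loading the model itself.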

If you choose to deploy your model to a different platform, you will need to follow the instructions for that platform to deploy your model.
