I am working on an ML use case in which I have to build a chat toxicity classifier. I have already trained the text classification model; now I am trying to build and run it with Docker. The app uses Flask and a TensorFlow model.
Dockerfile
FROM python:3.8
# copy the dependencies file to the working directory
COPY . /app
WORKDIR /app
RUN pip3 install --upgrade pip
# install dependencies
RUN pip install -r requirements.txt
# Expose the port so the app inside the container can be reached
EXPOSE 5000
# Start the app with the Flask development server (a gunicorn
# alternative is sketched just below)
CMD ["python", "app.py"]
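For reference, the --bind and --workers options mentioned in the comment belong to gunicorn, not to python app.py. If the intent is to serve through gunicorn, a possible CMD, assuming gunicorn is added to requirements.txt and the Flask object in app.py is named app, would be:

# Hypothetical alternative: --bind attaches the server to a host:port,
# --workers sets the number of worker processes handling requests
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "2", "app:app"]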
app.py
import pandas as pd
import numpy as np
from flask import Flask, jsonify, request, render_template
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import load_model
from tensorflow.keras.layers import TextVectorization
import json
import nltk
from Support import TextPreprocessing
nltk.download('stopwords')
nltk.download('punkt')
nltk.download('wordnet')
class_labels = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
# load model
TextClassifier = load_model('trial1.h5')
# Load the vectorizer configuration
with open('vectorizer_config.json', 'r') as f:
    vectorizer_config = json.load(f)
# Create a new vectorizer using the loaded configuration
vectorizer = TextVectorization.from_config(vectorizer_config)
# Load the vocabulary
vocabulary = []
with open('vocabulary.txt', 'r', encoding='utf8') as f:
    for line in f:
        word = line.strip()
        vocabulary.append(word)
# Set the loaded vocabulary on the vectorizer
vectorizer.set_vocabulary(vocabulary)
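# For context, the two files loaded above would have been produced at
# training time by something like the following sketch (commented out
# here because 'trained_vectorizer' is a hypothetical placeholder for
# the adapted TextVectorization layer, and get_config() is only
# JSON-serializable with the layer's default string arguments):
# with open('vectorizer_config.json', 'w') as f:
#     json.dump(trained_vectorizer.get_config(), f)
# with open('vocabulary.txt', 'w', encoding='utf8') as f:
#     for word in trained_vectorizer.get_vocabulary():
#         f.write(word + '\n')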
# Create a preprocessor object
preprocessor = TextPreprocessing()
def Get_prediction(text):
    user_input = preprocessor.preprocess_text(text)
    user_input = ' '.join(user_input)
    vectorized_text = vectorizer(user_input)
    prediction = TextClassifier.predict(np.expand_dims(vectorized_text, 0))
    # Convert the prediction probabilities to binary form
    binary_predictions = np.where(prediction > 0.5, 1, 0)
    predicted_classes = [class_labels[i] for i, pred in enumerate(binary_predictions[0]) if pred == 1]
    return predicted_classes
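# As a concrete illustration of the thresholding step with made-up
# probabilities (class_labels order as defined above):
#     probs = np.array([[0.9, 0.1, 0.7, 0.2, 0.6, 0.1]])
#     np.where(probs > 0.5, 1, 0)  ->  [[1, 0, 1, 0, 1, 0]]
#     which maps to ['toxic', 'obscene', 'insult']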
# create flask app
app = Flask(__name__)
@app.route('/')
def home():
    return render_template('index.html')
@app.route('/predict_api', methods=['POST'])
def predict_api():
    """
    Endpoint for rendering results in JSON format
    """
    text = request.form['text']  # Get the 'text' field from the form data
    # Perform prediction on the text
    prediction = Get_prediction(text)
    if len(prediction) == 0:
        prediction = 'Not Toxic'
    else:
        prediction = ', '.join(prediction)
    response = {'prediction': prediction}  # Create a response dictionary
    return jsonify(response)  # Return the response as JSON
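# For reference, this endpoint can be exercised with form data from a
# Python client; the host and port here are assumptions based on
# EXPOSE 5000 and depend on the actual docker run port mapping:
#     import requests
#     resp = requests.post('http://localhost:5000/predict_api',
#                          data={'text': 'your comment here'})
#     print(resp.json())  # e.g. {'prediction': 'Not Toxic'}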
@app.route('/predict', methods=['POST'])
def predict():
    """
    For rendering results on the HTML GUI.
    request.form is a dictionary-like object with key 'text'
    and value the text entered by the user in the text box.
    """
    text = request.form['text']
    prediction = Get_prediction(text)
    if len(prediction) == 0:
        prediction = 'Not Toxic'
    else:
        prediction = ', '.join(prediction)
    return render_template('index.html', prediction_text='The comment is {}.'.format(prediction))
if __name__ == '__main__':
    app.run(debug=True)
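One note on reachability, separate from the crash: app.run(debug=True) binds Flask's development server to 127.0.0.1 by default, so even when the container starts cleanly the app would not be reachable through a published port. A commonly used variant inside containers, matching the EXPOSE 5000 line, is sketched here:

# Bind to all interfaces so the published container port reaches Flask;
# port 5000 matches the EXPOSE instruction in the Dockerfile
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

Note also that the docker run command in the error description below publishes port 8080 (-p 8080:8080), while the app, per EXPOSE and Flask's default, listens on 5000.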
Error Description:
PS F:\IMP_DOCUMENT\Techdome\Chat-Toxicity-Analyser> docker run -p 8080:8080 chat_toxicity-api
2023-06-20 19:02:17.221538: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-06-20 19:02:17.338429: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-06-20 19:02:17.964623: I tensorflow/tsl/cuda/cudart_stub.cc:28] Could not find cuda drivers on your machine, GPU will not be used.
2023-06-20 19:02:17.969857: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-06-20 19:02:21.215821: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
2023-06-20 19:02:40.484471: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'gradients/split_2_grad/concat/split_2/split_dim' with dtype int32
[[{{node gradients/split_2_grad/concat/split_2/split_dim}}]]
2023-06-20 19:02:40.486301: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'gradients/split_grad/concat/split/split_dim' with dtype int32
[[{{node gradients/split_grad/concat/split/split_dim}}]]
2023-06-20 19:02:40.488377: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'gradients/split_1_grad/concat/split_1/split_dim' with dtype int32
[[{{node gradients/split_1_grad/concat/split_1/split_dim}}]]
2023-06-20 19:02:40.653072: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'gradients/ReverseV2_grad/ReverseV2/ReverseV2/axis' with dtype int32 and shape [1]
[[{{node gradients/ReverseV2_grad/ReverseV2/ReverseV2/axis}}]]
2023-06-20 19:02:40.700996: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'gradients/split_2_grad/concat/split_2/split_dim' with dtype int32
[[{{node gradients/split_2_grad/concat/split_2/split_dim}}]]
2023-06-20 19:02:40.704714: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'gradients/split_grad/concat/split/split_dim' with dtype int32
[[{{node gradients/split_grad/concat/split/split_dim}}]]
2023-06-20 19:02:40.707670: I tensorflow/core/common_runtime/executor.cc:1197] [/device:CPU:0] (DEBUG INFO) Executor start aborting (this does not indicate an error and you can ignore this message): INVALID_ARGUMENT: You must feed a value for placeholder tensor 'gradients/split_1_grad/concat/split_1/split_dim' with dtype int32
[[{{node gradients/split_1_grad/concat/split_1/split_dim}}]]
Killed
Problem Description:
I have written the Dockerfile shown above. The docker build command completed without error, but when I use the docker run command to start the ML Flask app on my local machine, it prints the warnings shown above and then the terminal suddenly shows a Killed message, as in the error description. If anyone knows how to fix this, please help.
I was expecting a successful start-up and a port on which my Flask app would run.