
I have a ScaNN object; I can load data, build the index, save the model, and load it back into my Python environment. However, calling the model requires a tf.Tensor as input. If I want to call it through a REST API, I would have to send either the sentence itself or the embedded version of the input text, but the model only accepts a tf.Tensor. Is there a way to handle this inside the model, or by adding preprocessing layers, so that I can simply send raw input text through the REST API and have the model handle the rest?
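For the "handle it inside the model" route, what I have in mind is something like a `tf.Module` whose serving signature accepts raw strings. This is only a hypothetical sketch: it assumes the encoder is itself a TensorFlow callable (e.g. a TF Hub sentence encoder), since a PyTorch SentenceTransformer cannot be embedded in a TF graph, and `encoder`/`searcher` are placeholder names, not a real API.

```python
import tensorflow as tf

class TextSearchModule(tf.Module):
    """Hypothetical wrapper: raw text in, search results out.

    `encoder` must be a TensorFlow callable mapping a batch of strings to
    float embeddings (e.g. a TF Hub sentence encoder); `searcher` maps
    embeddings to results. A PyTorch SentenceTransformer cannot go here.
    """

    def __init__(self, encoder, searcher):
        self.encoder = encoder      # strings -> float embeddings
        self.searcher = searcher    # embeddings -> results

    # The string input_signature is what lets the exported model accept
    # plain text instead of a precomputed embedding tensor.
    @tf.function(input_signature=[tf.TensorSpec([None], tf.string)])
    def serve(self, texts):
        embeddings = self.encoder(texts)
        return self.searcher(embeddings)

# The module could then be exported with the string-input signature, e.g.:
# tf.saved_model.save(module, "export/",
#                     signatures={"serving_default": module.serve})
```

With a signature like this, a TF Serving REST call could post raw strings directly.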

import numpy as np
import tensorflow as tf
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("msmarco-distilbert-base-v4")

txt = "I am a python developer and a Machine learning engineer"

# Encode the sentence, then convert the NumPy embedding to a tf.Tensor
txt_embedding = np.array(model.encode(txt))
txt_embeddings_tf = tf.convert_to_tensor(txt_embedding)
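The alternative I am considering is to keep the encoder outside the model and wrap both behind the REST handler, so the API accepts raw text and the handler does the encoding and tensor conversion before calling the model. A minimal sketch of that idea, where `encoder` and `searcher` are placeholder names for the SentenceTransformer and the ScaNN model respectively:

```python
import numpy as np

class TextSearchService:
    """Sketch of a REST-handler helper: raw text in, results out.

    `encoder` is anything with an .encode(str) -> vector method (e.g. a
    SentenceTransformer); `searcher` is a callable taking a batched
    embedding array. Both are placeholders, not a real API.
    """

    def __init__(self, encoder, searcher):
        self.encoder = encoder
        self.searcher = searcher

    def query(self, text):
        # Encode the sentence and add a batch dimension
        embedding = np.asarray(self.encoder.encode(text), dtype=np.float32)
        batch = embedding[np.newaxis, :]
        # A real TF/ScaNN model would receive tf.convert_to_tensor(batch)
        return self.searcher(batch)
```

The REST endpoint would then just parse the text out of the request body and call `service.query(text)`, so no tensors ever cross the API boundary.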
