
I am struggling with TensorFlow's session.run() together with hug in Python. If I run session.run() without hug for prediction, it works fine. But if I run it under hug, it produces no result (and no error either).

Has anyone come across such a scenario? Please help me.

My environments:

  • TensorFlow 1.2.1
  • hug 2.3.0
  • Python 3.5.2

1 Answer


I don't know if this is the cause of your problem, but it might be related to the fact that hug is a server and therefore has some asynchronous code somewhere. Maybe what is happening is that, because hug is trying to handle the request, it starts the session but doesn't wait for it to run.

Again, I don't know if this is the root cause, or even if this scenario makes sense.

What I can suggest, though, based on the little experience I have had with TensorFlow, is to set up a different architecture.

If I understand correctly, what you are trying to do now is: you send a request to the hug API server, pick up the data from that request, and feed it to a TensorFlow session. You then wait for TensorFlow to predict something and return that prediction to the user that made the request.
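If that is the case, I imagine it looks roughly like this minimal sketch: a graph and session built once at import time, and a hug endpoint that calls session.run() inside the request handler. The graph, endpoint path, and input handling here are illustrative placeholders, not your actual code.

```python
import hug
import tensorflow as tf

# Build a trivial graph up front so every request reuses the same session.
x = tf.placeholder(tf.float32, shape=[None, 2], name="x")
w = tf.Variable(tf.ones([2, 1]), name="w")
y = tf.matmul(x, w, name="y")

sess = tf.Session()
sess.run(tf.global_variables_initializer())


@hug.post('/predict')
def predict(features: hug.types.multiple):
    """Run the prediction synchronously inside the request handler."""
    batch = [[float(v) for v in features]]
    result = sess.run(y, feed_dict={x: batch})
    return {"prediction": result.tolist()}
```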

I would approach this problem in a slightly different way. I would have the client establish a websocket connection and use it to send data to the server. Upon receiving this data, the server would place it on a message queue. You can use a real message queue like RabbitMQ or Kafka, but to be honest, if you don't have a production-quality app, you might as well use Redis or even MongoDB as a message queue. The messages on this queue would be picked up by a worker process running TensorFlow, which would use the data in the messages to perform a prediction. Once the prediction is done, the result would be placed on a different queue and picked up by the server. The server would then return the prediction to the client through the websocket.
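Here is a rough sketch of what the worker side could look like, assuming Redis lists are used as the two queues; the queue names and message format are made up for illustration.

```python
import json

import redis
import tensorflow as tf

r = redis.StrictRedis(host="localhost", port=6379)

# Build (or restore) the model once, outside the request path.
x = tf.placeholder(tf.float32, shape=[None, 2], name="x")
w = tf.Variable(tf.ones([2, 1]), name="w")
y = tf.matmul(x, w, name="y")

sess = tf.Session()
sess.run(tf.global_variables_initializer())

while True:
    # Block until the API server pushes a job onto the input queue.
    _, raw = r.blpop("predict_jobs")
    job = json.loads(raw)

    prediction = sess.run(y, feed_dict={x: [job["features"]]})

    # Publish the result so the server can relay it over the websocket.
    r.rpush("predict_results", json.dumps({
        "job_id": job["job_id"],
        "prediction": prediction.tolist(),
    }))
```

The server side would do the mirror image: push a job onto "predict_jobs" when a websocket message arrives, and relay anything that shows up on "predict_results" back to the right client.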

The major thing you gain with this approach is separating the API server from the TensorFlow worker. This will let you debug and test each part independently. Moreover, you will not clog your server waiting for TensorFlow: the server essentially acts as a scheduler, and the worker does the work. When it finishes, it just returns the result.

I hope this helps, and sorry again if my suggestion about the possible root cause is wrong.

mayk93