
I have successfully implemented PredictionIO engine templates. I can deploy an engine with

$pio deploy -- --driver-memory xG

But how can I run the recommendation (or any other) engine as a service? I want all entries logged to a specified file for reference in case issues occur.

Also, it's mentioned that for small deployments it is better not to use a distributed setup. I have a JSON-formatted dataset for the text classification template, about 2 MB in size, and it requires about 8 GB of memory for the engine to train and deploy. Does this fit into the small-deployment category?

cutteeth

1 Answer


pio deploy will run the engine as a service: it starts a long-running HTTP server that answers queries. Logging goes to pio.log, and you can add your own custom logging on top of that. As long as you have more than 8 GB of RAM to spare, stick to a single machine, i.e. the "small deployment".
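To keep the deployed engine running after you log out and capture its output in a file of your choosing, you can wrap the same command with nohup and redirect stdout/stderr. This is just a sketch; the log path, PID file, and 8G memory value are placeholders you'd adapt to your setup:

```shell
# Run the engine server in the background, surviving logout (nohup),
# and redirect both stdout and stderr to a log file of your choice.
# Paths and the memory size below are examples, not fixed values.
nohup pio deploy -- --driver-memory 8G \
  > /var/log/pio/engine.log 2>&1 &

# Save the background process ID so you can stop the service later
# with: kill $(cat /var/run/pio-engine.pid)
echo $! > /var/run/pio-engine.pid
```

For anything more durable you'd typically put the same command behind your init system (e.g. a systemd unit or supervisor entry) so it restarts on failure, but the nohup form above is enough for a single-machine "small deployment".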