I have successfully implemented PredictionIO engine templates. I can deploy an engine with

$ pio deploy -- --driver-memory xG

But how can I run the recommendation (or any other) engine as a service? I also want to log all entries to a specified file for reference in case any issues occur.
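What I have in mind so far (just a sketch, not a proper service manager) is backgrounding the deploy command and redirecting its output to a log file; the file name `engine.log` is my own placeholder:

```shell
# Run the deployed engine in the background, detached from the terminal,
# and capture both stdout and stderr in engine.log for later inspection.
nohup pio deploy -- --driver-memory xG > engine.log 2>&1 &
```

But I am not sure whether this is the recommended way to run an engine as a service, or whether PredictionIO provides something better for this.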
Also, it is mentioned that for small deployments it is better not to use a distributed setup. I have a JSON-formatted dataset for the text classification template, about 2 MB in size, and it requires about 8 GB of memory for the engine to train and deploy. Does this fit into the small-deployment category?