I've been using IBM CPLEX Optimization Studio to run Decision Optimization (DOcplex) models locally with the Python API. I have a large volume of temporal data (hourly time steps over 20 years), and I define and solve an optimization problem for each 24-hour window. I can track progress by incrementing a counter (e.g. with `tqdm`) every time a new model is created.
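For context, the local workflow looks roughly like this (a minimal sketch; `split_into_windows`, `hourly_rows`, and the commented-out solve call are illustrative names, not my actual code):

```python
# Sketch of the local loop: split 20 years of hourly data into
# 24-hour windows and count each model as it is created. In the real
# script the counter is a tqdm progress bar wrapping this loop.

def split_into_windows(hourly_rows, window_size=24):
    """Yield consecutive 24-hour slices of the hourly data."""
    for start in range(0, len(hourly_rows), window_size):
        yield hourly_rows[start:start + window_size]

hourly_rows = list(range(20 * 365 * 24))  # placeholder for the real data

models_built = 0
for window in split_into_windows(hourly_rows):
    # build_and_solve_model(window)  # one DOcplex model per window
    models_built += 1

print(models_built)  # one model per day over 20 years -> 7300
```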
Now I'm using IBM Watson Machine Learning with its Python API to deploy these models. I store both the DOcplex model and the window-splitting logic on IBM Cloud. I'd like to keep it this way because I'd rather send 20×365×24 rows of data as a single job than make that many API calls (each model solves in far less time than it takes to post a job and retrieve the results). I'd still like to track progress, but the only useful information in `job_details` is the job status (e.g. queued/completed/failed).
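To illustrate what I mean, this is roughly the shape of the response I poll for a Decision Optimization job (the dict below is a stub standing in for what `client.deployments.get_job_details(job_uid)` returns; the nesting reflects what I see for my jobs, but I'm only showing the part I use):

```python
# Stubbed job details for a Decision Optimization job, showing the
# only progress signal I can find: the overall job state.

job_details = {  # stand-in for client.deployments.get_job_details(job_uid)
    "entity": {
        "decision_optimization": {
            "status": {"state": "queued"}  # queued / running / completed / failed
        }
    }
}

state = job_details["entity"]["decision_optimization"]["status"]["state"]
print(state)  # no per-window counter, just the coarse job state
```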
Is there a way to pass additional information, such as the current model number, to `job_details` from within the model? Or to retrieve the logs while the job is running?
I've read the API documentation and gone through the notebook examples provided by IBM, but couldn't find such a use case.