
I'm developing AI models in Python and onboarding them to Acumos. I train my models locally with a training and testing dataset before onboarding the trained model to the Acumos Marketplace.

Considering the documentation here about datasources in ML Workbench, and about ML Workbench in general, it is possible to associate a model with a dataset and train that model in a pipeline directly on the platform.

But according to this tutorial, the dataset is local only. I didn't find a tutorial on how to build a model for the ML Workbench pipeline.

My question is: how should or could I develop my model so that it fits into the ML Workbench pipeline and can be trained on the platform with a datasource rather than a local dataset? Do you have any tutorials or examples?

Update

For now I have an open_data function that opens a CSV file from my machine when I train and export the model.

import pandas as pd

my_path = "/home/ninjadev/model-v1/source/data/"

def open_data(filename):
    # Read a semicolon-separated CSV file from the local data directory
    df = pd.read_csv(my_path + filename, sep=';')
    return df
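
The training script then uses this helper roughly as follows; the filename and the target column name are just placeholders, not values from my actual project:

from sklearn.model_selection import train_test_split

df = open_data("dataset.csv")            # example filename
X = df.drop(columns=["label"])           # "label" is a placeholder for the target column
y = df["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)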

Then I train my classifier:

from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
# Accuracy is measured on the held-out test set
y_pred = clf.predict(X_test)
accuracy = str(accuracy_score(y_test, y_pred))
print(clf.classes_)
print("Training accuracy: " + accuracy)

Then, with an AcumosSession, I export the model locally and upload it to my Acumos platform.
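
For reference, that export step looks roughly like this. It is a minimal sketch following the acumos Python client tutorial; the wrapper type, function name, push URL, and output directory are illustrative assumptions, not my exact code:

import numpy as np
from acumos.modeling import Model, List, create_dataframe
from acumos.session import AcumosSession

# Infer a typed DataFrame definition from the training features
# (assumes X_train is a pandas DataFrame; column names are examples)
MyDataFrame = create_dataframe('MyDataFrame', X_train)

def classify(df: MyDataFrame) -> List[int]:
    # Rebuild the feature matrix from the typed DataFrame and predict
    X = np.column_stack(df)
    return clf.predict(X)

model = Model(classify=classify)

# Push URL is an example; authentication details depend on the platform
session = AcumosSession(push_api="https://my-acumos/onboarding-app/v2/models")
session.dump(model, 'my-model', '/home/ninjadev/model-v1/export')  # local export
session.push(model, 'my-model')                                    # upload to the platform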

So my question is not about a specific line of code but more general: how can I remove this open_data function so that the model works with a datasource from the Acumos platform?

Thanks for your help,

Benjamin B
