I have a machine learning model in production whose predictions are used to build an Azure ML File Dataset. The Dataset is composed of 94 files and totals 8,618 MiB. I'm using a compute instance of type `STANDARD_E4S_V3` and trying to load the Dataset with the following Python code:
```python
from azureml.core import Workspace, Dataset

ws = Workspace.from_config()
dataset = Dataset.get_by_name(ws, name='features_for_predictions_modelo_ativacao')
df = dataset.to_pandas_dataframe()
```
Almost 10 minutes have already passed and the dataset still has not been loaded into a Python variable. Is this happening because my DataFrame is too large, or because my compute instance is not powerful enough?
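For reference, one rough, Azure-independent sanity check I considered is that a pandas DataFrame in memory is often several times larger than the same data on disk, so 8,618 MiB of files may expand well beyond what fits comfortably in RAM. The snippet below is just a sketch of that measurement on hypothetical data, not my production dataset:

```python
import numpy as np
import pandas as pd

# Hypothetical sample frame with mixed types, only to illustrate
# how on-disk size and in-memory size can differ.
df = pd.DataFrame({
    "id": np.arange(100_000),
    "value": np.random.rand(100_000),
    "label": np.random.choice(["a", "b", "c"], size=100_000),
})

# Size of the same data serialized as CSV (a proxy for on-disk size).
csv_bytes = len(df.to_csv(index=False).encode("utf-8"))

# In-memory size, with deep=True so object (string) columns are
# counted at their full Python-object size.
mem_bytes = int(df.memory_usage(deep=True).sum())

print(f"CSV size:       {csv_bytes / 1e6:.1f} MB")
print(f"In-memory size: {mem_bytes / 1e6:.1f} MB")
```

On frames with string columns the in-memory figure is typically the larger of the two, which is why a multi-GiB dataset can stall or exhaust memory when materialized with `to_pandas_dataframe()`.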