Let's say I have a network model built in TensorFlow, Keras, Caffe, etc. I can use the Core ML converters API to get a Core ML model file (`.mlmodel`) from it.
Now that I have a `.mlmodel` file and know its input and output shapes, how can I estimate the maximum RAM footprint?
I know that a model can have a lot of layers, and their combined size can be much bigger than the input/output shapes.
So the questions are:
- Can the maximum memory footprint of an `.mlmodel` be determined with some formula or API, without compiling and running an app?
- Is the maximum footprint closer to the memory size of the biggest intermediate layer, or closer to the sum of all layers' sizes?
Any advice is appreciated. As I am new to Core ML, feel free to give any feedback and I'll try to improve the question if needed.