
I have a trained model with Keras and TensorFlow backend (Keras 2.2.4, TensorFlow 1.13.1) and I want to use that model in Visual Studio with ML.NET.

Therefore I converted my model to ONNX with winmltools.convert_keras (I tried it with a TensorFlow 2.0 model but got the "No module named 'tensorflow.tools.graph_transforms'" error). Now I finally managed to load the model with:

string outName = "dense_6";
string inName = "conv2d_9_input";
string imgFolder = @"path\to\Testimage";
string pathToMLFile = @"path\to\.onnx";

var pipeline = mlContext.Transforms.LoadImages(outputColumnName: outName, imageFolder: imgFolder, inputColumnName: inName)
    .Append(mlContext.Transforms.ResizeImages(outName, 299, 299, inName))
    .Append(mlContext.Transforms.ExtractPixels(outputColumnName: outName, interleavePixelColors: true, offsetImage: 117))
    .Append(mlContext.Transforms.ApplyOnnxModel(outputColumnName: outName, inputColumnName: inName, modelFile: pathToMLFile, fallbackToCpu: true));

But now I need an IDataView to Fit the model (from my understanding this is needed to initialize it?). Therefore I load an empty IDataView with:

var data = mlContext.Data.LoadFromEnumerable(new List<ImageNetData>());

with ImageNetData being

    public class ImageNetData
    {
        [ColumnName("conv2d_9_input")]
        [ImageType(299, 299)]
        public Bitmap Image { get; set; }

        [ColumnName("dense_6")]
        public string Label;
    }

Now I get the error:

Schema mismatch for input column 'conv2d_9_input': expected String, got Image<299, 299>
Parametername: inputSchema

My model viewed in Netron:

[Netron screenshot of the model graph]

Why does it want a String? And if I remove the [ImageType(299,299)] attribute and change the Bitmap to string, I get:

Schema mismatch for input column 'conv2d_9_input': expected Image, got String

I hope my problem is understandable.

Update: After the hint from Gopal Vashishtha, I changed the input/output names:

var pipeline = mlContext.Transforms.LoadImages(outputColumnName: "image_object", imageFolder: imgFolder, inputColumnName: inName)
    .Append(mlContext.Transforms.ResizeImages(outputColumnName: "image_object_resized", imageWidth: 299, imageHeight: 299, inputColumnName: "image_object"))
    .Append(mlContext.Transforms.ExtractPixels(outputColumnName: "input", inputColumnName: "image_object_resized", interleavePixelColors: true, offsetImage: 117, scaleImage: 1 / 255f))
    .Append(mlContext.Transforms.ApplyOnnxModel(outputColumnName: outName, inputColumnName: "input", modelFile: pathToMLFile + "\\Converted.onnx", fallbackToCpu: true));

But I still get the same error. Might there be a problem with the way I trained the model? I used a numpy array as the image and 0/1/2 for the corresponding classes.

Flo

1 Answer

If you look at the samples, you can see that in a chain of transforms, the output column of one estimator needs to match the input column of the next. I notice that in your example you use the same input and output column names throughout.
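For illustration, a minimal sketch of such a chain (the intermediate column names like "ImagePath", "image_object", and "image_resized" are my own placeholders, not from your post; only "conv2d_9_input" and "dense_6" come from your Netron screenshot). Note also that LoadImages expects a string column containing the image file name, which is consistent with the "expected String, got Image" error you are seeing:

```csharp
// Sketch only: assumes the ONNX model's input tensor is "conv2d_9_input"
// and its output "dense_6", as shown in the question's Netron screenshot.
public class ImageInput
{
    // LoadImages reads a string column with the file name, relative to imageFolder.
    public string ImagePath { get; set; }
}

var data = mlContext.Data.LoadFromEnumerable(new List<ImageInput>());

var pipeline = mlContext.Transforms.LoadImages(
        outputColumnName: "image_object",                 // new column holding the loaded image
        imageFolder: imgFolder,
        inputColumnName: nameof(ImageInput.ImagePath))    // string column with the file name
    .Append(mlContext.Transforms.ResizeImages(
        outputColumnName: "image_resized",
        imageWidth: 299, imageHeight: 299,
        inputColumnName: "image_object"))                 // consumes the previous output
    .Append(mlContext.Transforms.ExtractPixels(
        outputColumnName: "conv2d_9_input",               // must match the ONNX input name
        inputColumnName: "image_resized",
        interleavePixelColors: true,
        offsetImage: 117))
    .Append(mlContext.Transforms.ApplyOnnxModel(
        outputColumnName: "dense_6",                      // ONNX output name
        inputColumnName: "conv2d_9_input",
        modelFile: pathToMLFile,
        fallbackToCpu: true));

var model = pipeline.Fit(data); // an empty IDataView is enough to initialize the schema
```

Each estimator's outputColumnName feeds the inputColumnName of the next one, and only the first and last names need to line up with your input class and the ONNX tensor names.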

gkv