
I have been trying to build a CNN model using dl4j, but it is giving me an error. The code:

RecordReader rr = new CSVRecordReader();
rr.initialize(new FileSplit(new File(dataLocalPath, nameTrain)));
DataSetIterator trainIter = new RecordReaderDataSetIterator(rr, batchSize, 0, 2);

MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .seed(123)
        .updater(new Adam(0.1))
        .list()
        .layer(0, new Convolution1DLayer.Builder().kernelSize(3).padding(1).nIn(371).nOut(64).build())
        .layer(1, new Subsampling1DLayer.Builder().kernelSize(3).padding(1).build())
        .layer(2, new Convolution1DLayer.Builder().kernelSize(3).activation(Activation.RELU).padding(1).nIn(64).nOut(32).build())
        .layer(3, new Subsampling1DLayer.Builder().kernelSize(3).padding(1).build())
        .layer(4, new DenseLayer.Builder().activation(Activation.RELU).nIn(32).nOut(16).build())
        .layer(5, new OutputLayer.Builder(LossFunction.RECONSTRUCTION_CROSSENTROPY).activation(Activation.SIGMOID).nIn(16).nOut(2).build())
        .build();

MultiLayerNetwork model = new MultiLayerNetwork(conf);
model.init();
model.setListeners(new ScoreIterationListener(10));

final DataSet trainData = trainIter.next();
INDArray a = trainData.getFeatures();
final INDArray b = trainData.getLabels();
a = a.reshape(new int[] { (int) a.size(0), (int) a.size(1), 1 });
model.fit(a, b);

Here is the error:

Exception in thread "main" org.deeplearning4j.exception.DL4JInvalidInputException: Input that is not a matrix; expected matrix (rank 2), got rank 3 array with shape [128, 32, 1]. Missing preprocessor or wrong input type? (layer name: layer4, layer index: 4, layer type: DenseLayer)
    at org.deeplearning4j.nn.layers.BaseLayer.preOutputWithPreNorm(BaseLayer.java:306)
    at org.deeplearning4j.nn.layers.BaseLayer.preOutput(BaseLayer.java:289)
    at org.deeplearning4j.nn.layers.BaseLayer.activate(BaseLayer.java:337)
    at org.deeplearning4j.nn.layers.AbstractLayer.activate(AbstractLayer.java:257)
    at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.ffToLayerActivationsInWs(MultiLayerNetwork.java:1129)
    at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.computeGradientAndScore(MultiLayerNetwork.java:2741)
    at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.computeGradientAndScore(MultiLayerNetwork.java:2699)
    at org.deeplearning4j.optimize.solvers.BaseOptimizer.gradientAndScore(BaseOptimizer.java:170)
    at org.deeplearning4j.optimize.solvers.StochasticGradientDescent.optimize(StochasticGradientDescent.java:63)
    at org.deeplearning4j.optimize.Solver.optimize(Solver.java:52)
    at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.fitHelper(MultiLayerNetwork.java:2303)
    at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.fit(MultiLayerNetwork.java:2261)
    at org.deeplearning4j.nn.multilayer.MultiLayerNetwork.fit(MultiLayerNetwork.java:2248)
    at com.rssoftware.efrm.AnnModelFromKeras.trainModel(AnnModelFromKeras.java:73)
    at com.rssoftware.efrm.AnnModelFromKeras.main(AnnModelFromKeras.java:89)

I have tried using an input preprocessor (a CNN-to-feed-forward preprocessor), but it is not working.
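Concretely, that attempt was roughly along these lines (a sketch, not the exact code): the preprocessor is registered on the same .list() builder, with layer index 4 being the DenseLayer.

// Sketch only: same configuration as above, plus an explicit preprocessor for the
// input of layer 4 (the DenseLayer).
// import org.deeplearning4j.nn.conf.preprocessor.CnnToFeedForwardPreProcessor;
MultiLayerConfiguration confWithPreProc = new NeuralNetConfiguration.Builder()
        .seed(123)
        .updater(new Adam(0.1))
        .list()
        .layer(0, new Convolution1DLayer.Builder().kernelSize(3).padding(1).nIn(371).nOut(64).build())
        .layer(1, new Subsampling1DLayer.Builder().kernelSize(3).padding(1).build())
        .layer(2, new Convolution1DLayer.Builder().kernelSize(3).activation(Activation.RELU).padding(1).nIn(64).nOut(32).build())
        .layer(3, new Subsampling1DLayer.Builder().kernelSize(3).padding(1).build())
        .layer(4, new DenseLayer.Builder().activation(Activation.RELU).nIn(32).nOut(16).build())
        .layer(5, new OutputLayer.Builder(LossFunction.RECONSTRUCTION_CROSSENTROPY).activation(Activation.SIGMOID).nIn(16).nOut(2).build())
        // explicit preprocessor for the input of layer 4
        .inputPreProcessor(4, new CnnToFeedForwardPreProcessor())
        .build();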

– Ritam

1 Answer


The expected input shape for a Conv1D layer is [minibatchSize, convNIn, length], or [minibatchSize, featureSize, sequenceLength] in time-series terms. The reshape in your code sets the length to 1. Did you perhaps intend to set featureSize/convNIn to 1 instead?

– Susan Eraly
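A minimal sketch of that alternative reshape, assuming the 371 CSV feature columns are meant to be the sequence (so convNIn becomes 1 and length becomes 371); the first convolution layer's nIn would then also need to change from 371 to 1:

INDArray features = trainData.getFeatures();               // rank 2: [minibatchSize, 371]
// reshape to [minibatchSize, convNIn = 1, length = 371] instead of [minibatchSize, 371, 1]
features = features.reshape(features.size(0), 1, features.size(1));
// ...and declare the first layer accordingly:
// .layer(0, new Convolution1DLayer.Builder().kernelSize(3).padding(1).nIn(1).nOut(64).build())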
  • The problem occurs at subsampling layer output to dense layer input. `Input that is not a matrix; expected matrix (rank 2), got rank 3 array with shape [128, 32, 1]. Missing preprocessor or wrong input type? (layer name: layer4, layer index: 4, layer type: DenseLayer)`. Am I missing something? – Ritam Dec 24 '19 at 05:29