
I'm trying to pass multiple inputs to a ComputationGraph, but keep getting the error "cannot do forward pass: inputs not set". What is my mistake here?

        ArrayList<String> inAndOutNames = new ArrayList<>();
        String[] inputNames = new String[inputAmount];
        InputType[] inputTypes = new InputType[inputAmount + 1];
        for(int i = 0; i < inputAmount; i++)
        {
            inAndOutNames.add("bit" + i);
            inputNames[i] = "bit" + i;
            inputTypes[i] = InputType.recurrent(1);
        }
        inAndOutNames.add("p");
        inputTypes[inputAmount] = InputType.recurrent(1);

        ComputationGraphConfiguration configuration = new NeuralNetConfiguration.Builder()
                .weightInit(WeightInit.XAVIER)
                .updater(new Adam(0.001))
                .seed(seed)
                .graphBuilder()
                .addInputs(inAndOutNames)
                .setInputTypes(inputTypes)
                .addLayer("l1", new LSTM.Builder().nIn(inputAmount).nOut(128).activation(Activation.TANH).build(), inputNames)
                .addLayer("l2", new LSTM.Builder().nIn(128).nOut(256).build(), "l1", "p")
                .addLayer("lOut", new DenseLayer.Builder().nIn(256).nOut(10).build(), "l2")
                .setOutputs("lOut")
                .build();

        model = new ComputationGraph(configuration);

        model.init();
        model.setListeners(new ScoreIterationListener(iterationsBetweenScores));

I already tried several different combinations of layers and vertices, but have not yet found anything that works.

Fi0x
  • Could you clarify what you're trying to do? LSTMs only have 1 input. It looks like you're trying to pass in more than 1 for each layer for some reason. – Adam Gibson Mar 23 '23 at 21:12
  • I need to reconstruct a neural network from this paper: https://journals.sdu.edu.kz/index.php/ais/article/view/410 I have 32-bit numbers with single bits as individual inputs for the network. The output of the network should be the smallest prime factor of this number. Which layers could I use instead to accept multiple inputs? – Fi0x Mar 24 '23 at 11:56
  • This might be a better discussion over on our forums: https://community.konduit.ai/. We'll need to dive in to the paper and look at the architecture. Stackoverflow is for more single answer type solutions. Usually though you have to concat or merge your layers at some point in order to build a valid network anyways. Please ask over on our forums. I think I technically solved your issue with the invalid layers. – Adam Gibson Mar 24 '23 at 13:23

1 Answer


You're passing more than one input to an LSTM layer. An LSTM (like most layers) accepts exactly one input; no framework lets you attach an arbitrary number of inputs to an arbitrary layer. To use multiple inputs in a neural network you have to assemble the graph so that the inputs are combined explicitly.

The network as a whole can accept any number of inputs, but at some point they need to converge into a single path for most things to work. In DL4J you do this with a MergeVertex, which concatenates the activations of its inputs.

Here's an example using our MergeVertex:

ComputationGraphConfiguration conf = new NeuralNetConfiguration.Builder()
        .updater(new Sgd(0.01))
        .graphBuilder()
        .addInputs("input1", "input2")
        // each DenseLayer takes exactly one input
        .addLayer("L1", new DenseLayer.Builder().nIn(3).nOut(4).build(), "input1")
        .addLayer("L2", new DenseLayer.Builder().nIn(3).nOut(4).build(), "input2")
        // MergeVertex concatenates the activations of L1 and L2 (4 + 4 = 8)
        .addVertex("merge", new MergeVertex(), "L1", "L2")
        .addLayer("out", new OutputLayer.Builder().nIn(4 + 4).nOut(3).build(), "merge")
        .setOutputs("out")
        .build();
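
Applied to your setup, a sketch might look like the following (assuming `inputAmount` and `seed` are defined as in your code, with single-channel recurrent inputs named `bit0`, `bit1`, … plus `p`): merge all the inputs first, then feed the concatenated result to the LSTM. Note the output layer here uses `RnnOutputLayer` rather than a plain `DenseLayer`, since an output layer with a loss function is needed for training.

```java
// Sketch only: merge all bit inputs plus "p" into one path before the LSTM.
String[] allInputs = new String[inputAmount + 1];
InputType[] inputTypes = new InputType[inputAmount + 1];
for (int i = 0; i < inputAmount; i++) {
    allInputs[i] = "bit" + i;
    inputTypes[i] = InputType.recurrent(1);
}
allInputs[inputAmount] = "p";
inputTypes[inputAmount] = InputType.recurrent(1);

ComputationGraphConfiguration configuration = new NeuralNetConfiguration.Builder()
        .weightInit(WeightInit.XAVIER)
        .updater(new Adam(0.001))
        .seed(seed)
        .graphBuilder()
        .addInputs(allInputs)
        .setInputTypes(inputTypes)
        // concatenate all inputs along the feature dimension first...
        .addVertex("merged", new MergeVertex(), allInputs)
        // ...so the LSTM sees a single input of width inputAmount + 1
        .addLayer("l1", new LSTM.Builder().nIn(inputAmount + 1).nOut(128)
                .activation(Activation.TANH).build(), "merged")
        .addLayer("l2", new LSTM.Builder().nIn(128).nOut(256).build(), "l1")
        .addLayer("lOut", new RnnOutputLayer.Builder().nIn(256).nOut(10).build(), "l2")
        .setOutputs("lOut")
        .build();
```

At training/inference time you then have to supply one INDArray per declared input, e.g. via a MultiDataSet for `fit()` or by passing an array of INDArrays to `output()`; the "inputs not set" error also appears when the number of arrays passed doesn't match the number of declared inputs.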
Adam Gibson