In many (or most) of the Deeplearning4j examples I have seen, configurations are built by chaining method calls onto method calls . . .
For example:
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
.seed(rngSeed) //include a random seed for reproducibility
    // SGD with Nesterov momentum as the updater
    .updater(new Nesterovs(0.006, 0.9))
.l2(1e-4)
.list()
    .layer(0, new DenseLayer.Builder() //create the first hidden layer, with Xavier initialization
.nIn(numRows * numColumns)
.nOut(1000)
.activation(Activation.RELU)
.weightInit(WeightInit.XAVIER)
.build())
    .layer(1, new OutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD) //create the output layer
.nIn(1000)
.nOut(outputNum)
.activation(Activation.SOFTMAX)
.weightInit(WeightInit.XAVIER)
.build())
.pretrain(false).backprop(true) //use backpropagation to adjust weights
.build();
Occasionally I see examples where the conf variable is created in one statement, and then each of the other operations is performed in a separate statement. Is there any benefit to doing it the first way? It does tend to obscure the points where a particular method call returns a different type of object. Also, it would seem that the second approach would be more amenable to working in JShell.
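To make the comparison concrete, here is roughly what I mean by the second approach (a sketch only; the intermediate variable names builder, listBuilder, hiddenLayer, and outputLayer are my own, and I am assuming the builder setters mutate their builder and return this, as DL4J's appear to):

NeuralNetConfiguration.Builder builder = new NeuralNetConfiguration.Builder();
builder.seed(rngSeed);
builder.updater(new Nesterovs(0.006, 0.9));
builder.l2(1e-4);

// list() is one of the type-changing calls: it returns a
// NeuralNetConfiguration.ListBuilder, not the original Builder
NeuralNetConfiguration.ListBuilder listBuilder = builder.list();

DenseLayer hiddenLayer = new DenseLayer.Builder()
        .nIn(numRows * numColumns)
        .nOut(1000)
        .activation(Activation.RELU)
        .weightInit(WeightInit.XAVIER)
        .build();
listBuilder.layer(0, hiddenLayer);

OutputLayer outputLayer = new OutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD)
        .nIn(1000)
        .nOut(outputNum)
        .activation(Activation.SOFTMAX)
        .weightInit(WeightInit.XAVIER)
        .build();
listBuilder.layer(1, outputLayer);

listBuilder.pretrain(false).backprop(true);
MultiLayerConfiguration conf = listBuilder.build();

Written this way, each statement can be entered and its result inspected one at a time, which is why it seems better suited to JShell, and the type changes (Builder to ListBuilder, layer builders to Layer objects) are explicit in the variable declarations.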