I would like to design a neural network for a multi-task deep learning problem. Within the Keras API we can use either the Sequential or the Functional approach to build such a network. Below is the code I used with both approaches to build a network with two outputs:

Sequential

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

seq_model = Sequential()
seq_model.add(LSTM(32, input_shape=(10,2)))
seq_model.add(Dense(8))
seq_model.add(Dense(2))
seq_model.summary()

Functional

from tensorflow.keras.models import Model
from tensorflow.keras.layers import LSTM, Dense, Input

input1 = Input(shape=(10,2))
lay1 = LSTM(32)(input1)
lay2 = Dense(8)(lay1)
out1 = Dense(1)(lay2)
out2 = Dense(1)(lay2)
func_model = Model(inputs=input1, outputs=[out1, out2])
func_model.summary()

When I look at the summary outputs for the two models, both contain an identical number of trainable params:

Sequential and Functional .summary()

Up until now, this looks fine - however I start doubting myself when I plot both models (using keras.utils.plot_model), which results in the following graphs: Sequential and Functional plot_model()

Personally I do not know how to interpret these. When using a multi-task learning approach, I want all neurons (in my case 8) of the layer before the output layer to connect to both output neurons. To me this clearly shows in the Functional API (where I have two Dense(1) instances), but this is not very clear from the Sequential API. Nevertheless, the number of trainable params is identical, suggesting that in the Sequential API, too, the last hidden layer is fully connected to both neurons in the Dense output layer.

Could anybody explain the differences between these two examples to me, or are they fully identical, resulting in the same neural network architecture? Also, which one would be preferred in this case?

Thank you a lot in advance.

wptmdoorn

3 Answers

The difference between the Sequential and Functional Keras APIs:

The sequential API allows you to create models layer-by-layer for most problems. It is limited in that it does not allow you to create models that share layers or have multiple inputs or outputs.

The functional API allows you to create models with a lot more flexibility, as you can easily define models where layers connect to more than just the previous and next layers. In fact, you can connect a layer to (literally) any other layer. As a result, creating complex networks such as siamese networks and residual networks becomes possible.
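As a minimal sketch of something only the Functional API can express, here is one Dense layer shared between two inputs (the layer sizes here are arbitrary, chosen only for illustration):

from tensorflow.keras.layers import Input, Dense, concatenate
from tensorflow.keras.models import Model

input_a = Input(shape=(16,))
input_b = Input(shape=(16,))

# One layer instance, reused for both inputs: its weights are shared
shared = Dense(8, activation="relu")
feat_a = shared(input_a)
feat_b = shared(input_b)

# Merge the two branches and map to a single output
merged = concatenate([feat_a, feat_b])
output = Dense(1)(merged)

model = Model(inputs=[input_a, input_b], outputs=output)
model.summary()

Note that the shared layer's parameters (16*8 + 8 = 136) are counted only once, even though the layer appears twice in the graph. The Sequential API has no way to express this topology.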

To answer your question:

No, these APIs are not the same, but it is normal that the number of trainable parameters is the same. Which one to use? It depends on what you want from this network: what are you training it for, and what do you want the output to be?

I recommend this link to get the most out of the concept.

Sequential Models & Functional Models

I hope I helped you understand better.

Zrufy
  • Thank you for the help. But could you explain what is the difference between the hidden-output layer connection in both examples I mentioned? They share numerical amount of trainable params, suggesting that the weights/biases/connections are similar for both examples. – wptmdoorn Sep 25 '19 at 08:03
  • in the first example you have only one output Dense layer. In the functional one you have two output Dense layers that you can concatenate however you want. – Zrufy Sep 25 '19 at 08:17
  • I understand, but in the first example I have a Dense output with 2 neurons, where in the second I have two Dense layers with one neuron - what is the theoretical difference between both examples in terms of neural network architecture? – wptmdoorn Sep 25 '19 at 08:23
  • the first takes all the features from the LSTM, turns them into a vector and returns the output, in this case binary. The second takes all the features from the LSTM, turns them into a vector and passes them to two Dense layers with one output each. The problem is I don't know what output you want from this network. But, for example, you can concatenate one output with another layer or another network, or concatenate two inputs for two different networks. – Zrufy Sep 25 '19 at 09:24
  • I want two outputs for the neural network, and currently both variants provide me with two outputs. If I use back-propagation, will both networks be back-propagated with the two output values from the output layers? Both outputs are continuous numbers. – wptmdoorn Sep 25 '19 at 09:48

Both models are (in theory) equivalent, as the two output nodes do not have any interaction between them.

It is just that the required outputs have a different shape

[(batch_size,2)]

vs

[(batch_size,),(batch_size,)]

and thus, the loss will be different.

The total loss is averaged for the sequential model in this example, whereas it is summed up for the functional model with two outputs (at least with a default loss such as MSE).
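As a sketch of that difference (assuming the default MSE reduction; the target and prediction values are made up for illustration):

import numpy as np

y_true = np.array([[1.0, 2.0]])
y_pred = np.array([[0.0, 0.0]])

# Sequential-style single Dense(2) head: MSE averages over both units
mse_single = np.mean((y_true - y_pred) ** 2)            # (1 + 4) / 2 = 2.5

# Functional-style two Dense(1) heads: per-output losses are summed
mse_out1 = np.mean((y_true[:, 0] - y_pred[:, 0]) ** 2)  # 1.0
mse_out2 = np.mean((y_true[:, 1] - y_pred[:, 1]) ** 2)  # 4.0
total = mse_out1 + mse_out2                             # 5.0

With a constant loss weight this only rescales the gradients, but with per-output loss_weights (or different losses per output) the two designs diverge for real.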

Of course, you can also adapt the functional model to be exactly equivalent to the sequential model:

out1 = Dense(2)(lay2)
#out2 = Dense(1)(lay2)
func_model = Model(inputs=input1, outputs=out1)

Maybe you will also need some activations after the Dense layers.

Max

Both networks are functionally equivalent. Dense layers are fully connected by definition; full connectivity is the default assumption for "normal" neural networks unless otherwise specified. The exact learned parameters and behavior may vary slightly between implementations. The plotted graph looks ambiguous only because it does not draw the individual neuron-to-neuron connections (which may number in the millions); instead it gives a symbolic representation of the connectivity via the layer name (Dense), in this case indicating a fully connected layer.
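A quick sanity check of that equivalence (a sketch assuming tensorflow.keras; layer sizes are taken from the question):

from tensorflow.keras.layers import LSTM, Dense, Input
from tensorflow.keras.models import Model, Sequential

# Sequential: one Dense(2) output head
seq_model = Sequential([LSTM(32, input_shape=(10, 2)), Dense(8), Dense(2)])

# Functional: two Dense(1) output heads branching from the same Dense(8)
inp = Input(shape=(10, 2))
h = Dense(8)(LSTM(32)(inp))
func_model = Model(inputs=inp, outputs=[Dense(1)(h), Dense(1)(h)])

# LSTM: 4*((2+32)*32 + 32) = 4480; Dense(8): 32*8+8 = 264;
# output head(s): 8*2+2 = 18 either way -> 4762 in total
assert seq_model.count_params() == func_model.count_params() == 4762

The output parameters are identical either way: one Dense(2) holds an 8x2 weight matrix plus 2 biases, while two Dense(1) layers each hold an 8x1 matrix plus 1 bias.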

I expect that the sequential model (or the equivalent functional model using one Dense layer with two neurons as the output) would be faster, because it can use a simplified optimization path, but I have not tested this and I have no knowledge of the compile-time optimizations performed by TensorFlow.

11_22_33