
I just have a couple questions regarding neural networks and skip connections:

  1. When people say 3-layer NN, that means there is an input layer, a hidden layer, and an output layer right?

  2. I can't seem to find many resources/information on skip connections/layers online. What type of NN are they commonly used for? MLP? CNN? RNN?

  3. Also, is it possible to implement a skip connection with tensorflow and TF-slim? I checked the TF-slim library but it doesn't seem to contain skip layers as one of its included layers.

Thanks so much in advance!

dooder

1 Answer

  1. Yes, a 3-layer NN is a network with one hidden layer (input layer, hidden layer, output layer).
  2. They aren't tied to a specific type of neural network. You can implement them very easily in MLPs, and the term 'skip layer' is really only applied to fully connected, layered networks. More abstract architectures like the LSTM (an RNN) have neuron groups that feed forward only to other specific neuron groups, but it's hard to call those 'skip layer' synapses, as that is just the architecture. If the definition of 'skip layer' is that a neuron group feeds forward to another neuron group that is not next in the activation line, then most architectures have 'skip layer' synapses (GRU, Hopfield, NARX).
  3. I'm not familiar with TensorFlow, so I can't answer that :)
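To make point 2 concrete: a skip connection is nothing more than routing a layer's input around one or more layers and adding (or concatenating) it back in, so no dedicated "skip layer" type is needed. Here is a minimal NumPy sketch of a 2-hidden-layer MLP forward pass with and without a skip connection; all weight shapes and names here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical weights for a tiny MLP: 4 -> 4 -> 4 -> 1
d = 4
W1 = rng.standard_normal((d, d)) * 0.1
W2 = rng.standard_normal((d, d)) * 0.1
W3 = rng.standard_normal((1, d)) * 0.1

def forward_plain(x):
    # Ordinary feed-forward: each layer only sees the previous one
    h1 = relu(W1 @ x)
    h2 = relu(W2 @ h1)
    return W3 @ h2

def forward_skip(x):
    h1 = relu(W1 @ x)
    # Skip connection: the input bypasses the first hidden layer
    # and is added to the second layer's pre-activation
    h2 = relu(W2 @ h1 + x)
    return W3 @ h2

x = rng.standard_normal(d)
print(forward_plain(x), forward_skip(x))
```

In a framework like TensorFlow the same idea is just tensor addition (something like `net = layer(net) + shortcut`), which is presumably why TF-slim ships no separate skip-layer op.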
Thomas Wagenaar
  • Thank you! So, skip connections should theoretically improve the performance of MLPs with multiple layers, right? Also, I've heard of residual connections/layers – are they the same thing as skip connections? – dooder May 25 '17 at 00:04
  • @dooder it depends on what you define as 'performance'. Any function can be mapped with just a single layer. Adding feedforward skip connections adds more connections in total, requiring more computation (thus diminishing performance). However, as there are more connections, the network can learn patterns between different levels of abstraction, possibly finding useful ones. I looked up residual connections, and from what I saw they are basically skip connections. – Thomas Wagenaar May 25 '17 at 16:40
  • Thank you! I don't think I understand what you mean by "any function can be mapped with just a single layer". Does that mean a single layer is enough to completely describe any function? – dooder May 25 '17 at 20:57
  • @dooder indeed, a single layer is enough to completely describe any given function http://neuralnetworksanddeeplearning.com/chap4.html – Thomas Wagenaar May 26 '17 at 14:21