
I'm building a feed forward neural network, and I'm trying to decide how to implement the bias. I'm not sure about two things:

1) Is there any downside to implementing the bias as a trait of the node as opposed to a dummy input+weight?

2) If I implement it as a dummy input, would it be input just in the first layer (from the input to the hidden layer), or would I need a dummy input in every layer?

Thanks!

P.S. I'm currently using 2d arrays to represent weights between layers. Any ideas for other implementation structures? This isn't my main question, just looking for food for thought.

Nathan
  • The bias is required at least in the output layer, because that is where classification or regression actually happens. All the other layers only serve to generate good features. – alfa May 18 '13 at 18:28
  • Put it everywhere. If you have X = 0 and hidden_layer = 0, you will get an error of 0, and the weights for that neuron will stay 0 until the end. – Makaroniiii Aug 31 '17 at 18:59

1 Answer

  1. The implementation doesn't matter as long as the behaviour is right.

  2. Yes, it is needed in every layer.

  3. A 2D array is a fine way to go.

I'd suggest including the bias as another neuron with a constant input of 1. This makes it easier to implement: you don't need a special variable for it or anything like that.
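A minimal sketch of that idea (function names and the network sizes here are illustrative, not from the answer): append a constant 1 to each layer's activations, and give every weight matrix one extra column that plays the role of the bias vector. The bias is then learned like any other weight, in every layer.

```python
import numpy as np

def add_bias_unit(a):
    """Append a constant 1 to the activation vector (the dummy bias input)."""
    return np.append(a, 1.0)

def forward(x, weight_matrices):
    """Forward pass; weight_matrices[i] has shape (n_out, n_in + 1),
    where the extra column holds the bias weights for that layer."""
    a = x
    for W in weight_matrices:
        # The bias is handled as an ordinary weight on the constant-1 input.
        a = np.tanh(W @ add_bias_unit(a))
    return a

# Usage: a 2-3-1 network; each weight matrix gets one extra column for the bias.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 2 + 1))
W2 = rng.standard_normal((1, 3 + 1))
out = forward(np.array([0.5, -0.2]), [W1, W2])
```

Note that the constant 1 is appended at every layer, not just at the input, which is why the bias is present everywhere.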

sashkello
  • Is it really just an extra 1 in the array? `x = np.array([[0,1,2,3,4,5]])` `x_with_bias = np.array([[0,1,2,3,4,5,1]])` ? – alec_djinn Dec 12 '18 at 12:57