This is my neural network, and here is the code:
import numpy as np

def relu(x):
    # leaky ReLU: identity for x >= 0, slope 0.1 otherwise
    return np.where(x >= 0, x, x * 0.1)

def deriv_relu(x):
    # derivative of the leaky ReLU: 1 for x >= 0, 0.1 otherwise
    return np.where(x >= 0, 1.0, 0.1)

def bias(x):
    # prepend a column of ones as the bias unit
    e = np.ones((x.shape[0], 1))
    return np.hstack((e, x))
X = np.array([[1, 1], [1, 0], [0, 0]])  # shape (3, 2)
y = np.array([[1, 1, 0]]).T             # shape (3, 1)
w0 = np.random.random((2, 5))           # shape (2, 5)
w1 = np.random.random((6, 1))           # shape (6, 1)
for i in range(4):
    l0 = X                                # (3, 2)
    h1 = relu(l0.dot(w0))                 # (3, 5)
    h1_bias = bias(h1)                    # (3, 6)
    l1 = relu(h1_bias.dot(w1))            # (3, 1)
    error = y - l1                        # (3, 1)
    delta2 = error * deriv_relu(l1)       # (3, 1)
    error2 = delta2.dot(w1.T)             # (3, 6)
    delta = error2 * deriv_relu(h1_bias)  # (3, 6)
    w1 += h1_bias.T.dot(delta2)           # (6, 1)
    w0 += l0.T.dot(delta)                 # fails: (2, 3).dot((3, 6)) is (2, 6), but w0 is (2, 5)

for a, b in zip(l1, y):
    print(a, b)
The problem I am facing is that I added a bias neuron to hidden layer 1. When I get to the backpropagation matrix multiplications, the dimensions no longer align, which is of course due to the added bias: delta carries an extra column for the bias unit, so l0.T.dot(delta) does not match the shape of w0. How can I overcome this issue within my code?
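One idea I had (I am not sure it is the standard approach): the bias unit has no incoming weights, so its error term should not flow back into w0 at all. If that is right, the fix is just to drop the bias column from delta before the w0 update. A minimal sketch, assuming the bias is the first column, as produced by my bias() helper:

delta = error2 * deriv_relu(h1_bias)  # (3, 6), still includes the bias column
delta = delta[:, 1:]                  # drop the bias column -> (3, 5)
w0 += l0.T.dot(delta)                 # (2, 3).dot((3, 5)) -> (2, 5), matches w0

Equivalently, I could slice error2 first and differentiate only the real hidden activations, delta = error2[:, 1:] * deriv_relu(h1), so the derivative is never evaluated on the constant ones column. Would that be correct?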