
I am trying to implement a single neuron using the delta learning rule with a logistic activation function. My code is below.

import numpy as np
X = np.matrix('2; 4; 6; 8; 10; 15; 20; 25; 30; 40; 50; 60')
g = np.matrix('4.32323; 4.96276; 5.45565; 6.27151; 6.8552; 8.64987; 10.32581; 12.21393; 14.45659; 15.87602; 15.82488; 16.19419') 
norm_fac=16.19419
y =  [x / norm_fac for x in g]

class SingleNeuron (object):

    def __init__(self, eta=0.01, n_iter=10):
        self.eta=eta
        self.n_iter=n_iter

    def fit (self, X, y):
        self.w_ = np.zeros (X.shape[1]+1)
        self.cost_ = []

        for i in range (self.n_iter):
            output = self.net_input(X)
            errors = (y - output)
            self.w_[1:] += self.eta * X[0:].T.dot(errors)
            self.w_[0] += self.eta * errors.sum ()
            cost = (errors**2).sum() / 2.0
            self.cost_.append(cost)
        return self

    def net_input(self, X):
        return 1/(1+ np.exp (-(np.dot(X, self.w_[1]) + self.w_[0])))

    def predict(self, X):
        return self.net_input(X)

SN = SingleNeuron (eta = 0.1, n_iter = 10)
SN.fit (X, y)

However, when I run the code, I get the error: `__array_prepare__ must return an ndarray or subclass thereof which is otherwise identical to its input`.

I am aware there is a similar question answered before (Numpy `__array_prepare__` error), but it did not help me much. I appreciate any help greatly. Thank you.

Helin
  • Try to print `y` after the 4th line. Is this what you expected? – roadrunner66 Feb 26 '16 at 01:10
  • Thank you for your comment. What I am expecting is to use the y values to calculate the error and then modify the weights until the error between y and output is minimized. – Helin Feb 26 '16 at 21:28
  • I meant the format of `y`. It doesn't return a list, but a list of matrices like: `[matrix([[ 0.26696179]]), matrix([[ 0.30645312]]), ...`. Is that what you wanted it to do? – roadrunner66 Feb 27 '16 at 06:46
  • Oh, sorry for the misunderstanding. No, that was not what I wanted, and I corrected that part by changing the 4th line to `y = g / norm_fac`. However, now I am getting a new error on line 29 (ValueError: non-broadcastable output operand with shape (1,) doesn't match the broadcast shape (1,1)). I am not sure if I need to post this as another question. Thank you for your help once more. – Helin Feb 28 '16 at 02:45
  • I'd try to troubleshoot it myself for now by printing out the intermediate output, including looking at the types (`print y, type(y)` etc.). Obviously the types on the left and right of `=` must be compatible. You can edit your question; the comments in this section are meant to refine a question until it is very well defined and the question & answer become useful for others. – roadrunner66 Feb 28 '16 at 02:53

1 Answer


I have debugged your code and there are several errors:

1) Instead of using:

y =  [x / norm_fac for x in g]  

You can calculate y directly (I renamed the variables to g_in and y_in in the full listing at the end):

y_in = g_in / norm_fac    

The list comprehension iterates over the rows of the matrix g, so it builds a list of 1×1 matrices instead of a single (12, 1) matrix. Dividing the matrix directly solves the error raised when you calculate `y - output`.
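
You can see the difference in a quick check (a minimal sketch using a few of the values above):

import numpy as np

g = np.matrix('4.32323; 4.96276; 5.45565')
norm_fac = 16.19419

y_list = [x / norm_fac for x in g]  # list of 1x1 matrices
y_mat = g / norm_fac                # one (3, 1) matrix

print(type(y_list), y_list[0].shape)  # <class 'list'> (1, 1)
print(type(y_mat), y_mat.shape)       # <class 'numpy.matrix'> (3, 1)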

2) Now, this line causes a problem:

self.w_[1:] += self.eta * X[0:].T.dot(errors)   

Since w_ has only two elements here (the bias and a single weight), you want the scalar element w_[1]. What you have used, w_[1:], is a slice that returns an array of all elements from index 1 onward, not a scalar.

Similarly, X[0:] is unnecessary, since slicing from index 0 just returns all of X. Use X directly:

self.w_[1] += self.eta * X.T.dot(errors)
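
This slice is also the source of the `non-broadcastable output operand` error mentioned in the comments: `w_[1:]` has shape (1,), while the product of two matrices is a (1, 1) matrix, and the in-place addition cannot broadcast between the two. A minimal sketch of the mismatch (the full listing below extracts the scalar with .item() for the same reason):

import numpy as np

w = np.zeros(2)
update = np.matrix([[0.5]])  # shape (1, 1), like X.T * errors

# w[1:] += update  # ValueError: non-broadcastable output operand with
#                  # shape (1,) doesn't match the broadcast shape (1,1)

w[1] += update.item()  # extract the Python scalar instead
print(w)               # [0.  0.5]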

3) You shouldn't use

(errors**2).sum()

to calculate the sum of squared errors. Because errors is a numpy matrix, errors**2 is a matrix power (repeated matrix multiplication), which is only defined for square matrices and therefore fails for this column vector. Instead, use numpy.power to get the element-wise powers:

np.power(errors, 2)  
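
You can reproduce the failure in isolation (a small illustration):

import numpy as np

errors = np.matrix('1.0; 2.0; 3.0')  # (3, 1) column vector

# errors**2 raises an error: matrix power needs a square matrix
print(np.power(errors, 2))  # element-wise squares: [[1.], [4.], [9.]]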

Also, for better practice:

1) Put the main code at the end and rename your variables. You have y both as a global variable (defined at the top) and as a parameter of fit, and this shadowing makes the code harder to follow.

2) Define all class-related variables in the initializer.

3) Use lower-case variable names.

4) You may use x * y instead of x.dot(y); for numpy matrices, * is matrix multiplication, so it is the same operation (see the quick check after this list).

5) Print some results at the end.
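
The equivalence in point 4 is easy to verify (a minimal sketch; note it holds for np.matrix operands, not for plain ndarrays, where * is element-wise):

import numpy as np

a = np.matrix('1 2; 3 4')
b = np.matrix('5; 6')

print(a * b)      # matrix product: [[17], [39]]
print(a.dot(b))   # identical result for np.matrix operands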

Considering these points and following Python's PEP 8 formatting guidelines, I have changed your code as follows:

import numpy as np

class SingleNeuron(object):

    def __init__(self, eta=0.01, n_iter=10):
        self.eta = eta
        self.n_iter = n_iter
        self.w_ = []
        self.cost_ = []

    def fit(self, x, y):
        self.w_ = np.zeros(x.shape[1]+1)
        self.cost_ = []

        for i in range(self.n_iter):
            output = self.net_input(x)
            errors = (y - output)
            self.w_[1] += self.eta * (x.T * errors).item()  # (1, 1) matrix -> scalar
            self.w_[0] += self.eta * errors.sum()
            cost = np.power(errors, 2).sum() / 2
            self.cost_.append(cost)
        return self

    def net_input(self, x):
        # logistic (sigmoid) activation of the weighted input plus bias
        return 1 / (1 + np.exp(-((x * self.w_[1]) + self.w_[0])))

    def predict(self, x):
        return self.net_input(x)


norm_fac = 16.19419
x_in = np.matrix('2; 4; 6; 8; 10; 15; 20; 25; 30; 40; 50; 60')
g_in = np.matrix('4.32323; 4.96276; 5.45565; 6.27151; 6.8552; 8.64987; 10.32581; 12.21393; 14.45659; 15.87602; '
                 '15.82488; 16.19419')
y_in = g_in / norm_fac

SN = SingleNeuron(eta=0.1, n_iter=10)
SN = SN.fit(x_in, y_in)
print(SN.w_)
print(SN.cost_)

I'm not sure whether this code does what you want; you should check the logic step by step.
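
For example, you could sanity-check the fit by watching the cost shrink and inspecting the residuals (a minimal sketch using the objects defined above):

for epoch, cost in enumerate(SN.cost_):
    print(epoch, cost)  # the cost should decrease over the iterations

print(SN.predict(x_in) - y_in)  # residuals after training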

P.S.: I recommend using PyCharm for developing in Python.

Doruk