
I am trying to implement a simple neural network for the XOR function. The activation function I am using is the sigmoid function. Here is my code for it:

def ActivationFunction(a):
    e = 2.671                   # sigmoid function: e**a / (1 + e**a)
    expo = e ** a
    val = expo / (1 + expo)
    return val

My problem is that this function always returns a value between 0.7 and 0.8, which is badly distorting the network's output.

Any suggestions would be appreciated.

cs95
pr22

1 Answer

Your function is implemented correctly; however, the value of e is incorrect. Euler's number is approximately 2.71828, not 2.671.

I'd recommend importing math and using the predefined constant math.e from there.

import math

def sigmoid(x):
    return 1 / (1 + math.e ** -x)  # mathematically equivalent, but simpler

And, accordingly, the derivative:

def sigmoid_derivative(a):
    return a * (1 - a)

Where a is the hidden activation from the forward pass.
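As a quick sanity check, with the correct constant the function now spans the full (0, 1) range rather than getting stuck near 0.7–0.8 (this snippet is my own illustration, not part of your network):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.e ** -x)

def sigmoid_derivative(a):
    # a is the sigmoid output (hidden activation), not the raw input
    return a * (1 - a)

print(round(sigmoid(-5), 4))           # 0.0067, close to 0
print(round(sigmoid(0), 4))            # 0.5, the midpoint
print(round(sigmoid(5), 4))            # 0.9933, close to 1
print(sigmoid_derivative(sigmoid(0)))  # 0.25, the derivative's maximum
```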

Besides this, I see nothing wrong with your implementation, so if you're still getting values you don't expect after the fix, the cause lies elsewhere.

  • Changing the value is not making much of a difference, the output is still somewhere between 0.7 and 0.8 – pr22 May 10 '18 at 06:19
  • @PiyushRaut Well, it would depend on what you're passing to the function, right? – cs95 May 10 '18 at 06:20
  • According to my network, roughly I am passing values ranging from 0.7 to 1.5 – pr22 May 10 '18 at 06:24
  • @PiyushRaut In that case, it's not the function's fault that everything comes out to be between 0.7 and 0.8, don't you think? – cs95 May 10 '18 at 06:25
  • Yes I am working on that. I am currently new to neural networks – pr22 May 10 '18 at 06:27
  • It is common to "normalize" your input to a neural network when using a sigmoid activation function. Every column of input data is adjusted so that it has a mean of zero and a standard deviation of 1. – John Ladasky May 10 '18 at 08:00
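The column-wise normalization described in the last comment can be sketched like this, using only the standard library (the helper name and sample values are mine, chosen for illustration):

```python
import statistics

def standardize(column):
    """Scale a column of inputs to mean 0 and (population) std dev 1."""
    mean = statistics.fmean(column)
    std = statistics.pstdev(column)
    return [(x - mean) / std for x in column]

# Inputs in the 0.7-1.5 range mentioned above, rescaled before
# feeding them to the sigmoid
raw = [0.7, 0.9, 1.1, 1.5]
scaled = standardize(raw)
print(scaled)  # mean ~ 0, std dev ~ 1
```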