
Is there a MATLAB transfer (activation) function whose threshold can be set to a desired value? (That is, could we set its threshold to some value a, so that if the sum of weighted inputs is greater than a the neuron fires, and otherwise it doesn't?)

I've searched and found things like hardlim(N,FP) and satlin(N,FP), but their threshold is not customizable. Using a bias isn't much help either, because every neuron in every layer has a different threshold.
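To illustrate, here is a minimal sketch of the behaviour I'm after (the threshold value and the anonymous functions below are made-up placeholders, not toolbox code):

```matlab
% Hypothetical per-neuron threshold (each neuron would get its own value).
a = 0.7;

% Fire (output 1) only when the net input reaches the threshold, otherwise 0.
thresholdFire = @(n) double(n >= a);

% The same behaviour expressed with the built-in hard limit:
% hardlim(n) returns 1 for n >= 0, so shifting the net input by -a
% makes it fire exactly when n >= a.
thresholdFireHL = @(n) hardlim(n - a);
```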

Thanks.

  • Why not just use a layer beforehand to scale the values? – alkasm Jun 18 '17 at 09:00
  • I couldn't quite figure out what you mean by "scale the values" @Alexander Reynolds, but I think that would need too much extra computation for that layer. Anyway, I have an algorithm which suggests an ANN structure for a dataset, and I want to compare this ANN with another one, so I cannot change the structure. Can you help me by telling me how to find (or even code and build) this type of activation function? – Jewen Jun 18 '17 at 12:44
  • I guess I don't really understand what you're trying to achieve. The actual value of the threshold doesn't matter---if the threshold is *a* instead of 1, that's identical to dividing all of your weights by *a*. Your weights would simply shift to accommodate the different value required for the threshold. – alkasm Jun 18 '17 at 13:50
  • I want these neurons to act like a Boolean function: when the sum of weighted inputs is greater than a specific value `a`, the neuron's output should be 1, and otherwise 0. I don't think this is hard to do, but I don't know the proper MATLAB function. Can you help me with this @Alexander Reynolds? – Jewen Jun 18 '17 at 21:56
  • I totally understand what you want, but I'm not sure that you understand how activation functions work in a network. There is simply no need for a function like this. If the sum of weighted inputs must be greater than 1, but you want greater than `a`, then this is equivalent to the weights being divided by `a` with a threshold of 1. The whole point of a NN is to *set these weights for you*. And scaling an activation function is identical to just scaling the weights---which the NN *will* scale on its own. So scaling the activation function is not necessary at all. – alkasm Jun 19 '17 at 15:32
  • I thought about what you said and I see what you mean. You're right, I didn't look at the problem from that point of view, because my structure is fixed by an algorithm and I'm trying to compare it with other structures. Now I think I can use the hard-limit transfer function with a bias of -1 and divide all weights by the threshold (a sketch of this equivalence follows these comments) @AlexanderReynolds. – Jewen Jun 20 '17 at 19:45
  • Speaking of bias, adding a bias term to your weights (i.e. `Wx+b`) would achieve the same effect. – alkasm Jun 20 '17 at 19:56
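
A minimal MATLAB sketch of the equivalence discussed in the comments above; the weights, input, and threshold are made-up numbers, and `hardlim` is the built-in hard-limit transfer function:

```matlab
w = [0.4 0.9 -0.2];   % hypothetical weights of one neuron
x = [1; 0.5; 2];      % hypothetical input vector
a = 0.6;              % desired firing threshold for this neuron

% "Fire when the weighted sum reaches a":
fire_threshold = double(w*x >= a);

% Same neuron with a hard limit at 0 and a bias of -a:
fire_bias = hardlim(w*x - a);

% Same neuron (for a > 0) with weights divided by a and a bias of -1:
fire_scaled = hardlim((w/a)*x - 1);

% All three produce identical outputs for every input x.
```

Because the threshold can be absorbed into the weights or the bias, no custom-threshold transfer function is needed.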

0 Answers