I'm confused about how to upscale a sigmoid result the right way. For example, the input for my NN is between 0 and 10. I scale this to be between -4 and 4 as the active input range for the sigmoid, and I get a result of, let's say, 0.83201. Now I want to rescale this back to between 0 and 10.
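(Quick sanity check on that number, assuming sig.LogSigmoid is the standard logistic function 1/(1+e^-x), which isn't shown below: an input of 7 scales to 7*0.8 - 4 = 1.6, and 1/(1+e^-1.6) ≈ 0.8320, so the example value matches.)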
I thought the inverse of the sigmoid was the logit, but funny stuff happens when I use this:
float u = sig.LogSigmoid(sig.InputScaler(3,0f,10f,-4f,4f));
Debug.Log(-Mathf.Log(u/(1-u)));
results in 1.6, while
float u = sig.LogSigmoid(sig.InputScaler(4,0f,10f,-4f,4f));
Debug.Log(-Mathf.Log(u/(1-u)));
results in 0.8.
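(Tracing the scaling by hand: InputScaler(3, 0f, 10f, -4f, 4f) = 3*0.8 - 4 = -1.6 and InputScaler(4, 0f, 10f, -4f, 4f) = 4*0.8 - 4 = -0.8, so I expected to get -1.6 and -0.8 back, not their positives.)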
EDIT: OK, after some fiddling I found that the logit does work. The minus sign in my first attempt was flipping the result (-Mathf.Log(u/(1-u)) is -logit(u)), and the logit returns my scaled input, not the original one :-). So for sigmoid + downscaling:
float u = sig.LogSigmoid(sig.InputScaler(6,0f,10f,-4f,4f));
the following logit + upscaling worked perfectly:
Debug.Log(sig.InputScaler(-Mathf.Log((1-u)/u),-4f,4f,0f,10f));
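Checking the algebra: -Mathf.Log((1-u)/u) equals Mathf.Log(u/(1-u)), which is exactly logit(u), and logit(sigmoid(x)) = x. Traced for the example above, assuming the standard logistic sigmoid:
InputScaler(6, 0f, 10f, -4f, 4f) = 6*0.8 - 4 = 0.8
u = sigmoid(0.8) ≈ 0.68997
-Mathf.Log((1 - u)/u) = logit(u) = 0.8
InputScaler(0.8, -4f, 4f, 0f, 10f) = (0.8 + 4)*10/8 = 6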
InputScaler being:
public float InputScaler(float x, float minFrom, float maxFrom, float minTo, float maxTo)
{
    // Linear remap: maps x from [minFrom, maxFrom] onto [minTo, maxTo].
    float t = (((x - minFrom) * (maxTo - minTo)) / (maxFrom - minFrom)) + minTo;
    return t;
}
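For completeness, here is a minimal self-contained round trip you can drop into a Unity script. It assumes sig.LogSigmoid is the standard logistic function 1/(1+e^-x) (its definition isn't shown above, so that part is my assumption; the class and method names are just for illustration):
using UnityEngine;

public static class SigmoidRoundTrip
{
    // Assumed standalone equivalent of sig.LogSigmoid: the standard logistic function.
    public static float Sigmoid(float x)
    {
        return 1f / (1f + Mathf.Exp(-x));
    }

    // Inverse of Sigmoid: logit(u) = ln(u / (1 - u)); only defined for 0 < u < 1.
    public static float Logit(float u)
    {
        return Mathf.Log(u / (1f - u));
    }

    // Same linear remap as InputScaler above.
    public static float Remap(float x, float minFrom, float maxFrom, float minTo, float maxTo)
    {
        return ((x - minFrom) * (maxTo - minTo)) / (maxFrom - minFrom) + minTo;
    }

    public static void Demo()
    {
        float input = 6f;
        float u = Sigmoid(Remap(input, 0f, 10f, -4f, 4f)); // downscale to [-4,4], then squash
        float back = Remap(Logit(u), -4f, 4f, 0f, 10f);    // un-squash, then upscale to [0,10]
        Debug.Log(back); // prints 6 (up to float rounding)
    }
}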