I have already written some neural-network code in Python and MATLAB, without using any framework or automatic differentiation. As we know, Theano and TensorFlow use automatic differentiation: you build a computation graph and they do the calculation (backpropagation) for you. But sometimes a program runs yet does not behave as I intended. So I wonder: are there methods to make sure my program is correct? Printing the constructed computation graph? That seems complicated when the number of NN layers is large (the ImageNet winner used 152 layers). Or should I write another program in plain MATLAB or Python and compare its output with the framework version?
-
This question may be what you are looking for: http://stackoverflow.com/questions/33802336/visualizing-output-of-convolutional-layer-in-tensorflow – GavinBrelstaff Feb 19 '16 at 10:53
1 Answer
The standard solution is numerical gradient checking: you can compute the gradient (inefficiently) by doing forward propagation at two nearby values and taking the difference.

See the section on numerical gradient checking here: https://web.stanford.edu/class/cs294a/sparseAutoencoder_2011new.pdf

In TensorFlow this is implemented by `compute_numeric_jacobian`.
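The technique above can be sketched in plain NumPy, independent of any framework: perturb each parameter by a small `eps`, run the forward pass twice, and compare the central-difference estimate against your hand-written gradient. This is a minimal illustration (the `loss` function and the `2 * w` analytic gradient are my own toy example, not from the question):

```python
import numpy as np

def numerical_gradient(f, x, eps=1e-5):
    """Approximate df/dx at x with central differences, one coordinate at a time."""
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    for _ in it:
        idx = it.multi_index
        orig = x[idx]
        x[idx] = orig + eps          # forward pass at x + eps
        f_plus = f(x)
        x[idx] = orig - eps          # forward pass at x - eps
        f_minus = f(x)
        x[idx] = orig                # restore the original value
        grad[idx] = (f_plus - f_minus) / (2 * eps)
    return grad

# Toy example: loss(w) = sum(w**2), whose exact gradient is 2*w.
w = np.random.randn(3, 4)
loss = lambda w: np.sum(w ** 2)
analytic = 2 * w
numeric = numerical_gradient(loss, w)

# Relative error should be tiny; values above ~1e-4 usually indicate a bug
# in the analytic (backprop) gradient.
rel_err = (np.linalg.norm(analytic - numeric)
           / (np.linalg.norm(analytic) + np.linalg.norm(numeric)))
print(rel_err)
```

To check a real network, replace `loss` with a function that runs your full forward pass for fixed inputs and labels, and `analytic` with the gradient your backpropagation code produces for the same weights.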

Yaroslav Bulatov
-
Hi, how can I use numerical gradient checking with TensorFlow or Theano? Do you have some example code? I see they use numerical gradient checking when implementing a single op, to make sure the op is correct. – Xiuyi Yang Mar 04 '16 at 07:13
-
Here is the Theano documentation on how to test the gradient; it uses the technique Yaroslav described: http://deeplearning.net/software/theano/extending/extending_theano.html#testing-the-gradient – nouiz Apr 15 '16 at 21:06