
I have a good understanding of the autograd algorithm, and I think I should study its source code in PyTorch. However, when I look at the project on GitHub, I am confused by the structure, because so many files involve autograd. So which part is the core code of autograd?

  • I've written an article about the code base here: http://blog.christianperone.com/2018/03/pytorch-internal-architecture-tour/ if you are interested. – Tarantula May 04 '18 at 13:50

3 Answers

  1. Trying to understand the autograd Variable is probably the first thing you can do. From my understanding, autograd is just the namespace for the modules that contain the classes adding gradient tracking and backward functions.

  2. Be aware that a lot of the algorithm, e.g. back-propagation through the graph, is hidden in compiled code.

  3. If you look into __init__.py, you can get a glimpse of all the important functions (backward & grad); see the sketch after this list.
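
For a concrete feel of those two entry points, here is a minimal sketch (the tensor shapes and values are arbitrary):

```python
import torch

# A tiny graph built from a leaf tensor that tracks gradients.
x = torch.randn(3, requires_grad=True)
y = (x ** 2).sum()

# torch.autograd.backward accumulates gradients into the leaves' .grad fields.
torch.autograd.backward([y])
print(x.grad)            # dy/dx = 2 * x

# torch.autograd.grad returns the gradients directly instead of accumulating them.
(dx,) = torch.autograd.grad((x ** 3).sum(), x)
print(dx)                # 3 * x ** 2
```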

loose11

I recommend making associations between your understanding of autograd and the PyTorch data structures involved by building a simple graph and printing/visualizing its structure, as below:

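The original screenshot is not reproduced here; a minimal sketch along those lines (the tensor names and the `walk` helper are illustrative, not taken from the original image) would be:

```python
import torch

# Build a small graph: two leaves feeding a couple of ops.
a = torch.tensor(2.0, requires_grad=True)
b = torch.tensor(3.0, requires_grad=True)
loss = (a * b + a).sum()

# Every non-leaf tensor carries a grad_fn node; following .next_functions
# walks the backward graph that autograd will later traverse.
def walk(fn, depth=0):
    if fn is None:
        return
    print("  " * depth + type(fn).__name__)
    for next_fn, _ in fn.next_functions:
        walk(next_fn, depth + 1)

walk(loss.grad_fn)
# Prints something like (exact names vary by PyTorch version):
# SumBackward0
#   AddBackward0
#     MulBackward0
#       AccumulateGrad
#       AccumulateGrad
#     AccumulateGrad
```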

ruoho ruotsi

Reading the PyTorch code is doable, but you may be overwhelmed with details. To get a basic idea of autograd, you may want to refer to some simple autograd implementations, such as https://evcu.github.io/ml/autograd/ and https://medium.com/@ralphmao95/simple-autograd-implementation-understand-automatic-differentiation-hand-by-hand-9e86f6d703ab
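
To show the core idea those posts walk through, here is a toy reverse-mode autograd for scalars (an illustrative sketch, not code from either article):

```python
# Each Value remembers its parents and how to push gradients back to them.
class Value:
    def __init__(self, data, parents=(), backward_fn=lambda: None):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = backward_fn

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def backward_fn():
            self.grad += other.data * out.grad   # d(out)/d(self) = other
            other.grad += self.data * out.grad   # d(out)/d(other) = self
        out._backward = backward_fn
        return out

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def backward_fn():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = backward_fn
        return out

    def backward(self):
        # Topologically order the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

x = Value(2.0)
y = Value(3.0)
z = x * y + x          # dz/dx = y + 1 = 4, dz/dy = x = 2
z.backward()
print(x.grad, y.grad)  # 4.0 2.0
```

PyTorch does the same thing at a much larger scale: operations record backward nodes as you compute, and calling backward walks that recorded graph in reverse.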

Ralph Mao