The TensorFlow white paper mentions that Eigen is used. Are there public explanations of how Eigen was chosen, and do they motivate using Eigen in TensorFlow C++ op kernels?
- Armadillo is also header only. – dani Jan 08 '17 at 10:24
- tensorflow uses the Tensor module of Eigen (which is mostly maintained by the main author of tensorflow). I don't have any experience with armadillo, nor do I know why he chose Eigen. I do know that he once asked if it was possible to integrate tensorflow as a module of Eigen as well (which we rejected, since it goes quite out of the scope of Eigen). – chtz Jan 08 '17 at 15:53
- hi dani - i've thought of armadillo as header-only in the past, and used it that way for many years. it was ok, but there was no matrix inverse without installing blas, openblas, etc. for a project in 2014 i added eigen just to do a header-only matrix inverse - an odd situation. recent armadillo versions seem to move away from mentioning header-only use and simply go with a library install, with openblas etc. – Noah Smith Jan 08 '17 at 17:31
- hi chtz - that definitely makes sense to me, and fits with the brief comment in the whitepaper. i've integrated eigen in my current project, side by side with armadillo, and will definitely report here on impressions. as an old-school blas, lapack, etc. guy - eigen with tensorflow seems to have the feel of the future. in other words, i have the feeling my project will be using eigen within tensorflow ops... i'll update this discussion soon. – Noah Smith Jan 08 '17 at 17:39
- TensorFlow uses CUDA since it's faster; the same TF Eigen op implementation can run on both CPU and GPU. From the docs it looks like Armadillo is OpenCL only. – Yaroslav Bulatov Jan 08 '17 at 19:15
- hi yaroslav - interesting, let's see if i understand - cuda is associated with nvidia, and they're driving the hardware side of things. so for cpp tf op kernel development, the fact that eigen has cuda support is another plus... interesting! i think i understand... more evidence that eigen could be important for my project... – Noah Smith Jan 09 '17 at 00:35
- @NoahSmith Instead of `hi Name` please use `@Name`. This way a notification is sent to the user. – chtz Jan 09 '17 at 13:23
- @chtz ahha thanks, understood, will do. – Noah Smith Jan 10 '17 at 00:36
- What is the paper mentioned by the OP? – user8469759 May 11 '23 at 04:42
1 Answer
I think one of the key features that drove the use of Eigen in the first place is that Eigen ships its own highly optimized matrix product kernels, whereas all of its competitors have to be linked against a BLAS library. Moreover, Eigen's product kernels are written in C++ with easy access to the low-level internals, so it was 'easy' for them to tweak and extend the kernels to match their needs. This way Google was able to develop the Tensor module with high CPU performance in a pure header-only fashion. Support for CUDA, and now for OpenCL via SYCL, came later; those are not intrinsic features of Eigen that drove the initial choice.
- thanks, this fits with what i'm seeing as i look further into the eigen code. matrix manipulations are highly visible, not hidden away in an external blas. really interesting and motivating to dig deeper. – Noah Smith Jan 10 '17 at 14:36