2

For example, if A is a double matrix and B is an int matrix, then A * B raises a compiler error; it has to be written as A.cast<int>() * B or A * B.cast<double>().
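
A minimal sketch of the situation (matrix sizes and values are arbitrary):

```cpp
#include <Eigen/Dense>

int main() {
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(3, 3);      // double scalars
    Eigen::MatrixXi B = Eigen::MatrixXi::Constant(3, 3, 2); // int scalars

    // Eigen::MatrixXd C = A * B;             // does not compile: mixed scalar types
    Eigen::MatrixXd C = A * B.cast<double>(); // explicit cast is required
    (void)C;
}
```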

Why does Eigen require this? It could have followed the double * int = double promotion convention of C++.

Is there a performance optimization for operations on the same scalar type?

Thank you very much!

Hedgehog
  • I suppose it's just the usual convenience vs. wtf tradeoff of implicit conversions. Implicit conversions can cause confusion and bugs. – 463035818_is_not_an_ai Mar 03 '23 at 17:08
  • Implicit conversions between int and double are one of C/C++'s shortcomings, so the designers of the library decided not to replicate the problem. – Marek R Mar 03 '23 at 17:20
  • @MarekR I want to learn more about this. Is implicit conversion a shortcoming because it makes it easier for users to make unintended numeric errors? – Hedgehog Mar 03 '23 at 17:43
  • @Hedgehog For example, signed vs. unsigned: `void foo(unsigned x);` and a user calls `foo(-1)`. It's not an error and not necessarily a warning, but depending on what `foo` does it can be a disaster. – 463035818_is_not_an_ai Mar 06 '23 at 09:45
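
A minimal sketch of the signed/unsigned pitfall from the last comment (the printed value assumes a typical platform where unsigned is 32 bits):

```cpp
#include <cstdio>

void foo(unsigned x) {
    // If the caller passed -1, x is now 4294967295 on a 32-bit unsigned;
    // any code that treats x as a count or size can go badly wrong.
    std::printf("x = %u\n", x);
}

int main() {
    foo(-1); // compiles silently: -1 is implicitly converted to unsigned
}
```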

1 Answer

4

The basic integer and floating point automatic conversions are a source of a large number of bugs in C/C++ programs.

So Eigen decided to require explicit conversion.

And yes, the odds are that matrix multiplication is going to be optimized for identical types on both sides. Optimizing for non-identical types results in a lot of different cases; with 10 types, there are 100 different ordered pairs that would have to be generated and optimized.

In comparison, just 10 if they are the same type.

And 15 types is quite plausible: (signed/unsigned) × (8, 16, 32, 64, 128)-bit integers, plus 8-, 16-, 32-, 64-, and 128-bit floating point types. With 15 types, that is 225 ordered pairs versus 15 same-type cases.
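
To make the counting concrete, here is a trivial sketch; the templated kernel is purely hypothetical and only illustrates the answer's argument, not Eigen's actual internals:

```cpp
#include <cstdio>

// A hypothetical hand-tuned multiply kernel, parameterized on both scalar types.
// Supporting mixed types means providing (and tuning) one instantiation per
// ordered pair: <double,int>, <int,double>, <float,double>, and so on.
template <typename Lhs, typename Rhs>
void multiply_kernel(const Lhs* a, const Rhs* b, double* out, int n) {
    for (int i = 0; i < n; ++i)
        out[i] = static_cast<double>(a[i]) * static_cast<double>(b[i]);
}

int main() {
    double a[3] = {1.0, 2.0, 3.0};
    int    b[3] = {4, 5, 6};
    double out[3];

    // Two instantiations already, just for one mixed pair in both orders.
    multiply_kernel<double, int>(a, b, out, 3);
    multiply_kernel<int, double>(b, a, out, 3);

    // With 15 scalar types, that is 15 * 15 = 225 ordered pairs to generate
    // and optimize, versus only 15 if both sides must share a type.
    std::printf("%f %f %f\n", out[0], out[1], out[2]);
}
```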

Yakk - Adam Nevraumont
  • Thank you very much for the answer! Another question - what kind of same-type optimizations would be out there? Could you provide some examples? – Hedgehog Mar 03 '23 at 20:10
  • @Hedgehog SIMD is the obvious one. And at an algorithmic level, matrix multiplication optimization is a huge subject area. – Yakk - Adam Nevraumont Mar 03 '23 at 20:14
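
A rough sketch of the SIMD point using SSE2 intrinsics (values chosen arbitrarily): with identical scalar types the packed multiply applies directly, while mixed types need an extra conversion step in every kernel.

```cpp
#include <emmintrin.h> // SSE2
#include <cstdio>

int main() {
    double a[2] = {1.5, 2.5};
    double b[2] = {3.0, 4.0};
    int    c[4] = {3, 4, 0, 0};

    // Same scalar type: one packed multiply handles two doubles at once.
    __m128d va   = _mm_loadu_pd(a);
    __m128d vb   = _mm_loadu_pd(b);
    __m128d same = _mm_mul_pd(va, vb);

    // Mixed types: the int lanes must first be converted to double lanes,
    // an extra step that every mixed-type kernel would have to carry.
    __m128i vc_i  = _mm_loadu_si128(reinterpret_cast<const __m128i*>(c));
    __m128d vc    = _mm_cvtepi32_pd(vc_i); // converts the low two int32 lanes
    __m128d mixed = _mm_mul_pd(va, vc);

    double out[2];
    _mm_storeu_pd(out, same);
    std::printf("same:  %f %f\n", out[0], out[1]);
    _mm_storeu_pd(out, mixed);
    std::printf("mixed: %f %f\n", out[0], out[1]);
}
```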