
Given that v and a are Eigen::VectorXd vectors with n dimensions, I would like to perform the following element-wise operations:

  • The element-wise multiplication of v by a, i.e., the vector (a[1]*v[1], ..., a[n]*v[n]), and
  • The element-wise square of v, i.e., the vector (v[1]*v[1], ..., v[n]*v[n]).

Does Eigen provide methods for the above operations, or do I need to implement them manually? These are certainly very simple to implement, but I would like them to run as fast as possible.

nullgeppetto

2 Answers


For element-wise operations such as these, Eigen provides the Array class. The element-wise product can be written as:

c = a.array() * v.array(); // Long version
c = a.cwiseProduct(v);     // Short(er) version

and for the square you have:

s = v.array().square();    // Probably what you want to use
s = v.array().abs2();      // Squared absolute value; equals square() for real scalars
s = v.cwiseAbs2();         // Same as above

Using a VectorXd as an array does not incur a copy, so it is quite efficient.

Avi Ginsburg

EDIT: Avi's answer is definitely the better solution.

Well, if you get the first one going, the second one is just a particular case where a=v.

The easiest way to do the first operation is to create a diagonal matrix from a and perform an ordinary matrix-vector product.

Looking at the docs, you can use a.asDiagonal().

As for efficiency, this may not be what you want if it has to be 'as fast as possible'. In that case, you should benchmark it against a hand-written loop to see whether there is any practical difference for your use case.

villasv
  • I see, thanks! I was trying to avoid matrix by vector multiplications, since I believe that this is going to be slower than vector by vector ones. But probably I need to implement both ways and measure execution times. – nullgeppetto Dec 19 '15 at 18:12
  • 1
    I'm not familiar with Eigen internals, but the docs mention that `asDiagonal` returns a `DiagonalWrapper` instead of a whole matrix or even a new diagonal matrix itself. So it is possible that it is quite efficient. – villasv Dec 19 '15 at 18:21
  • I'll try that, thanks again. What I want now is to get the diagonal part of a full matrix, which I could implement myself; I don't think there is a similar Eigen function. – nullgeppetto Dec 19 '15 at 18:23