
Suppose you have a long boost::numeric::ublas::vector and you want to perform an update operation on a subset of its elements. How many of the elements should be updated is somewhere between "all" and "none". Which elements to update is given by a sparse compressed_vector containing a "1" for each element that should be updated.

I could think of two ways to solve this problem:

  1. Just multiply the right-hand side by a mask:

    #include <boost/numeric/ublas/vector.hpp>
    #include <boost/numeric/ublas/vector_sparse.hpp>

    using namespace boost::numeric::ublas;
    vector<double> x, some, other, stuff;
    compressed_vector<int> update_mask;
    [...]
    // multiply the whole right-hand side element-wise by the 0/1 mask
    noalias(x) += element_prod(update_mask, some + element_div(other, stuff));
    

    The problem with this is that it looks quite inefficient: wouldn't ublas compute the whole right-hand side and then just throw away all the values that aren't used (i.e. where update_mask == 0)?

    I'd expect it to be even slower than just

    noalias(x) += some + element_div(other, stuff);
    

    which would already be horribly inefficient if only a few elements actually have to be updated.

  2. Loop over all values to update

    [...]
    // iterate only over the stored (non-zero) entries of the sparse mask
    for (compressed_vector<int>::iterator it = update_mask.begin(); it != update_mask.end(); ++it)
        x[it.index()] += some[it.index()] + other[it.index()] / stuff[it.index()];
    

    The problem with this is that a) it looks awful, b) it kind of defeats the purpose of using ublas vectors in the first place, and c) it should be horribly inefficient if a lot of indices have to be updated and/or the operation becomes more complex.

Any ideas on how to do this efficiently? I'm pretty sure this is a fairly common problem, but I couldn't find anything useful about it (and the ublas documentation is ... not fun).

user20948
  • Well, you can't optimize for both cases (update_mask being sparse OR dense) at the same time! (I suppose you could, but that would require very elaborate routines to check which way is more efficient.) My guess would be that behind the scenes, things similar to your option (2) are happening. (I'm trying to find out how exactly they handle sparse+dense or sparse*dense, but I don't have prior experience with boost::ublas and the docs aren't too great, so it's taking me some time...) – us2012 Mar 05 '13 at 18:08
  • Thanks for the response. It would work if the expressions containing sparse matrices or sparse vectors were evaluated in a lazy fashion (i.e. the second argument of an element-wise sparse*dense or sparse*sparse product is only evaluated at the places where the sparse argument is non-zero; similar to a vectorized "&&"-operator in contrast to a vectorized "&"-operator). In that case option (1) should give a speedup. I think this would be the most efficient way to implement sparse*dense products, but I doubt it's implemented that way. I'll keep searching, too :) – user20948 Mar 06 '13 at 10:34
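
For what it's worth, the lazy evaluation described in the comment above can be written out by hand: iterate only over the stored entries of the mask and evaluate the dense expression just at those indices. Below is a minimal sketch assuming C++11; `masked_update` is a made-up helper name (it is not part of ublas), and the sizes and fill values are only there to make the example self-contained.

    #include <cstddef>
    #include <boost/numeric/ublas/vector.hpp>
    #include <boost/numeric/ublas/vector_sparse.hpp>

    namespace ublas = boost::numeric::ublas;

    // Hypothetical helper (not part of ublas): apply an update only at the
    // indices where the sparse mask stores a non-zero entry.
    template <typename V, typename Mask, typename F>
    void masked_update(V &x, const Mask &mask, F f)
    {
        // compressed_vector iterators visit only the stored entries, so the
        // cost is proportional to the number of non-zeros, not to x.size().
        for (typename Mask::const_iterator it = mask.begin(); it != mask.end(); ++it)
            if (*it)                          // skip explicitly stored zeros
                x(it.index()) += f(it.index());
    }

    int main()
    {
        const std::size_t n = 1000;
        ublas::vector<double> x     = ublas::zero_vector<double>(n);
        ublas::vector<double> some  = ublas::scalar_vector<double>(n, 1.0);
        ublas::vector<double> other = ublas::scalar_vector<double>(n, 2.0);
        ublas::vector<double> stuff = ublas::scalar_vector<double>(n, 4.0);

        ublas::compressed_vector<int> update_mask(n);
        update_mask(3) = 1;
        update_mask(42) = 1;

        // Same effect as x += element_prod(update_mask, some + element_div(other, stuff)),
        // but the right-hand side is only evaluated at the masked indices.
        masked_update(x, update_mask,
                      [&](std::size_t i) { return some(i) + other(i) / stuff(i); });
    }

Whether this actually beats the masked expression from option (1) depends on how sparse update_mask really is, so it would need to be measured for the sizes involved.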

0 Answers