
I am trying to multiply a vec by a subset of a very big sparse matrix (as in the script below), but it fails to compile with sourceCpp, reporting: error: no matching function for call to 'arma::SpMat&lt;double&gt;::submat(arma::uvec&, arma::uvec&)'. I would much appreciate it if someone could help.

#include <RcppArmadillo.h>

// [[Rcpp::plugins(cpp11)]]
// [[Rcpp::depends(RcppArmadillo)]]

using namespace arma;

// [[Rcpp::export]]
double myf(sp_mat X, vec g, uvec xi){
    double u = g(xi).t() * X.submat(xi, xi) * g(xi);
    return u;
}
coatless
YinLL

1 Answer

So, as @RalfStubner mentioned, submatrix views for sparse matrices are restricted to contiguous blocks, so `.submat()` cannot take arbitrary index vectors. That said, the access pattern here is symmetric, since the same index vector is used for rows and columns, so it makes sense to fall back to the standard element accessor `(x, y)`. If the subsetted matrix has no off-diagonal non-zeros within `xi` (as in the test data below), the quadratic form reduces to a sum over the diagonal, which can be done with a single loop.

#include <RcppArmadillo.h>

// [[Rcpp::plugins(cpp11)]]
// [[Rcpp::depends(RcppArmadillo)]]

// [[Rcpp::export]]
double submat_multiply(const arma::sp_mat& X, 
                       const arma::vec& g, const arma::uvec& xi){

  // Add an assertion
  if(X.n_rows != g.n_elem) {
    Rcpp::stop("Mismatched dimensions: X has %s rows but g has %s elements.",
               X.n_rows, g.n_elem);
  }

  // Reduction
  double summed = 0; 

  for (unsigned int i = 0; i < xi.n_elem; ++i) {
    // Retrieve indexing element
    arma::uword index_at_i = xi(i);
    // Add components together
    summed += g(index_at_i) * X(index_at_i, index_at_i) * g(index_at_i);
  }

  // Return result
  return summed;
}
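
Worth noting: the single loop above sums only the diagonal terms `g(i) * X(i, i) * g(i)`, while the original expression `g(xi)' * X(xi, xi) * g(xi)` also includes any off-diagonal entries of the subsetted matrix. A minimal sketch of the full quadratic form, in plain C++ so it runs without R or Armadillo (`std::map<std::pair<int, int>, double>` stands in for `arma::sp_mat`, and the name `submat_quadform` is made up here):

```cpp
#include <map>
#include <utility>
#include <vector>

// Full quadratic form g(xi)' * X(xi, xi) * g(xi) over a sparse matrix,
// with std::map<(row, col), value> as a stand-in for arma::sp_mat.
// Unlike a diagonal-only loop, this double loop also picks up
// off-diagonal entries of the subsetted matrix.
double submat_quadform(const std::map<std::pair<int, int>, double>& X,
                       const std::vector<double>& g,
                       const std::vector<int>& xi) {
  double summed = 0.0;
  for (int i : xi) {
    for (int j : xi) {
      auto it = X.find({i, j});
      if (it != X.end()) {
        summed += g[i] * it->second * g[j];
      }
    }
  }
  return summed;
}
```

For the test data below this agrees with the diagonal-only version (4900), but for an index set that hits an off-diagonal entry, e.g. 0-based `xi = {3, 8}` (which selects `X(3, 8) = 14`), the two differ.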

Another approach, but potentially more costly, would be to extract out the diagonal of the sparse matrix and convert it to a dense vector. From there apply an element-wise multiplication and summation.

#include <RcppArmadillo.h>

// [[Rcpp::plugins(cpp11)]]
// [[Rcpp::depends(RcppArmadillo)]]

// [[Rcpp::export]]
double submat_multiply_v2(const arma::sp_mat& X, 
                          const arma::vec& g, const arma::uvec& xi){

  // Add an assertion
  if(X.n_rows != g.n_elem) {
    Rcpp::stop("Mismatched dimensions: X has %s rows but g has %s elements.",
               X.n_rows, g.n_elem);
  }

  // Copy sparse diagonal to dense vector
  arma::vec x_diag(X.diag());
  // Obtain the subset
  arma::vec g_sub = g.elem(xi); 

  // Perform element-wise multiplication and then sum.
  double summed = arma::sum(g_sub % x_diag.elem(xi) % g_sub);

  // Return result
  return summed;
}
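
For reference, the diagonal-extraction reduction can be sketched the same way in plain C++ (again with a `std::map` stand-in for the sparse matrix; `submat_multiply_diag` is a hypothetical name, not an Armadillo function):

```cpp
#include <map>
#include <utility>
#include <vector>

// Diagonal-based reduction: copy the sparse diagonal into a dense
// vector, then sum g[i] * d[i] * g[i] over the subset xi.
double submat_multiply_diag(const std::map<std::pair<int, int>, double>& X,
                            const std::vector<double>& g,
                            const std::vector<int>& xi) {
  // Dense copy of the diagonal (missing entries stay 0.0).
  std::vector<double> d(g.size(), 0.0);
  for (const auto& entry : X) {
    if (entry.first.first == entry.first.second) {
      d[entry.first.first] = entry.second;
    }
  }
  // Element-wise multiply and sum over the subset.
  double summed = 0.0;
  for (int i : xi) {
    summed += g[i] * d[i] * g[i];
  }
  return summed;
}
```

Like `submat_multiply_v2`, this only sees the diagonal, so it matches the full quadratic form only when the subsetted matrix has no off-diagonal non-zeros.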

Test code:

# Sparse matrix
library(Matrix)
i <- c(1,4:8,10); j <- c(2, 9, 6:10); x <- 7 * (1:7)
X <- sparseMatrix(i, j, x = x)  

X
# 10 x 10 sparse Matrix of class "dgCMatrix"
#                               
#  [1,] . 7 . . .  .  .  .  .  .
#  [2,] . . . . .  .  .  .  .  .
#  [3,] . . . . .  .  .  .  .  .
#  [4,] . . . . .  .  .  . 14  .
#  [5,] . . . . . 21  .  .  .  .
#  [6,] . . . . .  . 28  .  .  .
#  [7,] . . . . .  .  . 35  .  .
#  [8,] . . . . .  .  .  . 42  .
#  [9,] . . . . .  .  .  .  .  .
# [10,] . . . . .  .  .  .  . 49

# Vector
g <- 1:10

# Indices
xi <- c(0, 3, 4, 9)   # 0-based, as used on the C++ side

# Above function
submat_multiply(X, g, xi)
# [1] 4900

submat_multiply_v2(X, g, xi)
# [1] 4900
coatless
  • That's nice. Maybe also assert (row) dimensions of `g` and `X`? – Dirk Eddelbuettel Nov 04 '19 at 18:28
  • Thanks a lot. I would have thought there maybe another ready-made function for it in RcppArmadillo. With your guidelines, I write a two loop function to achieve it ```double spsubmat_multiply(const sp_mat X, const vec g, const uvec windxi){ vec gmtpX(windxi.n_elem); double summed, varw; for(int i = 0; i < windxi.n_elem; i++){ summed = 0; for(int j = 0; j < windxi.n_elem; j++){ if(g[windxi[j]]){summed += g[windxi[j]] * X(windxi[j], windxi[i]);} } gmtpX[i] = summed; } varw = sum(gmtpX % g(windxi)); return varw; } ``` – YinLL Nov 05 '19 at 05:25
  • That comment is unreadable. Consider adding your code segment as an edit to your question. – Dirk Eddelbuettel Nov 05 '19 at 12:03