Consider the following weighted solution to the normal equation for a least squares inverse problem:
m = inv(G'*W'*W*G)*G'*W'*W*d
I would like to set up the weighting matrix W, which is a square diagonal matrix with the weights on the diagonal.

As I have a large number of data points in d (10⁷), my system matrix G is also large, but only in one dimension (I have far more data points than model parameters). In the case of 6 model parameters, G is of size (10⁷ × 6). Hence, W has to be of size (10⁷ × 10⁷). However, it is sparse, with only 10⁷ non-zero entries (the weights). To reduce memory, I use sparse for W.
To assign the weights, I do the following:

d = [d1; d2];
W = sparse(length(d), length(d));
w1 = length(d2)/length(d);
w2 = length(d1)/length(d);
W(1:length(d)+1:length(d)*length(d1)) = w1;      % diagonal entries belonging to d1
W(length(d)*length(d1)+1:length(d)+1:end) = w2;  % diagonal entries belonging to d2
d1 and d2 are column vectors with observations.
This will assign weights to the diagonal, but it is awfully slow.
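For comparison, a common way to build such a diagonal sparse matrix in a single call is spdiags, which places a given vector on a chosen diagonal. A minimal sketch, assuming the full weight vector w is first assembled from the two constants (in general it could hold any varying weights):

% Sketch: build the diagonal weight matrix in one call instead of
% assigning diagonal entries by linear indexing.
n = length(d);
w = [w1*ones(length(d1),1); w2*ones(length(d2),1)];  % full weight vector
W = spdiags(w, 0, n, n);  % w on the main diagonal (0) of an n-by-n sparse matrix

An equivalent alternative is W = sparse(1:n, 1:n, w, n, n), which hands the row indices, column indices, and values to sparse in one shot.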
My question:
Can I either
- somehow speed up the assignment of the weights to the diagonal, or
- re-write
m = inv(G'*W'*W*G)*G'*W'*W*d
so that I do not have to set up W at all?
Note 1: In the example shown, the weights are two different constants, but in practice they will vary along the diagonal!
Note 2: The bottleneck of the code is indeed setting up W, not the inversion itself, as the matrix to be inverted is only of size (6 × 6).
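Regarding the second option: since W is diagonal, W'*W is simply diag(w.^2), and multiplying by it amounts to scaling each row of G (and each entry of d) by the squared weight. A sketch of this, assuming w is the full weight vector and a MATLAB version with implicit expansion (R2016b+):

% Sketch: form the normal equations without ever building W.
% With W = diag(w), we have W'*W*G == w.^2 .* G (row-wise scaling).
w2v = w.^2;             % squared weights, (10^7 x 1)
A = G' * (w2v .* G);    % 6-by-6 matrix, equals G'*W'*W*G
b = G' * (w2v .* d);    % 6-by-1 vector, equals G'*W'*W*d
m = A \ b;              % backslash is preferable to inv(A)*b numerically

On older MATLAB versions, bsxfun(@times, w2v, G) gives the same row scaling. This never allocates the (10⁷ × 10⁷) matrix at all, only an extra 10⁷-element vector.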