
I am reading the paper "Numerics of Gram-Schmidt orthogonalization", and I noticed some terminology I can't understand: "row-oriented" and "column-oriented", which appears on p. 299 of the paper.

The original sentence is: "In row-oriented MGS a sequence of matrices, A=A1, A2, ..., An is computed...".

I googled and found that "row/column-oriented" is mainly used for databases; one easily understandable source is this post.

Then I found "row/column-major" in matrix theory, which I do understand: it describes how a matrix is stored in computer memory, as explained in this post. Based on the context in which the term appears in the paper, I am fairly sure "row/column-oriented" is not the same as "row/column-major". (If it were, why would the algorithm still access data by column in the "row-oriented" version?)
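
For reference, here is a minimal NumPy sketch of how I currently read the two loop orderings (the names `mgs_column` and `mgs_row` are my own labels, not taken from the paper). The column version finishes one column of Q at a time, subtracting the projections onto all previously computed q's; the row version, as soon as q_k is available, subtracts its component from every remaining column, so R is produced row by row. Note that both sketches still index A by columns, which is what prompted my parenthetical question above.

```python
import numpy as np

def mgs_column(A):
    """Column-oriented MGS: each new column is orthogonalized, one q at a time,
    against all previously computed q's (R is filled column by column)."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        v = A[:, k].copy()
        for i in range(k):                 # subtract projections onto q_0 .. q_{k-1}
            R[i, k] = Q[:, i] @ v
            v -= R[i, k] * Q[:, i]
        R[k, k] = np.linalg.norm(v)
        Q[:, k] = v / R[k, k]
    return Q, R

def mgs_row(A):
    """Row-oriented MGS: as soon as q_k is known, its component is removed from
    every remaining column a_{k+1} .. a_{n-1} (R is filled row by row)."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    for k in range(n):
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        for j in range(k + 1, n):          # update all columns to the right of k
            R[k, j] = Q[:, k] @ A[:, j]
            A[:, j] -= R[k, j] * Q[:, k]
    return Q, R
```

In exact arithmetic both produce the same Q and R; only the order in which the elementary projections are applied differs.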

C Lei
  • Glancing at the GS Wiki page, I see calculations that iterate over the columns of the matrix (the "vectors"). So I can imagine iterating over the rows instead. This isn't a matter of array storage, but of the calculation method. – hpaulj Oct 02 '21 at 14:57
  • Why the `sparse-matrix` tag? Sparse matrices can be stored in CSR (Compressed Sparse Row) format; the alternative is CSC (Compressed Sparse Column). That's a storage method that's useful for calculations, but it isn't part of GS, unless you are doing GS on a sparse matrix (see the small sketch after these comments). – hpaulj Oct 02 '21 at 15:00
  • Sorry mate, I am afraid your understanding from the Wiki does not match the meaning in the paper. In the row version, we subtract every a_j's projection onto the current q in each iteration. In the column version, we subtract a_j's projection onto every q that has already been computed in each iteration. The detailed code can be checked here: https://i.stack.imgur.com/NkMhK.jpg (taken from the paper). – C Lei Oct 04 '21 at 07:55
  • As for your second question, I found that these terms also appear in some papers about sparse matrices that use GPUs to accelerate the computation. – C Lei Oct 04 '21 at 07:57
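
To make the storage point in the comments concrete, here is a tiny SciPy sketch (my own illustration, not from the paper) showing that CSR/CSC is purely a memory layout for the nonzeros and is independent of which loop ordering a GS variant uses:

```python
import numpy as np
from scipy.sparse import csr_matrix, csc_matrix

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])

A_csr = csr_matrix(A)   # nonzeros compressed row by row
A_csc = csc_matrix(A)   # the same nonzeros compressed column by column

print(A_csr.indptr, A_csr.indices, A_csr.data)
print(A_csc.indptr, A_csc.indices, A_csc.data)
```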

0 Answers