I'm a math undergrad taking linear algebra. There was a recent court case in the news that denied some video evidence because the footage was zoomed in (the data was interpolated, so it "created new pixels"). This got me thinking: how would I linearly interpolate a matrix?
I looked into it and could only find algorithms that used nested for loops, nothing that involved much linear algebra. This surprised me because I thought operations like matrix multiplication were more efficient.
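To be concrete, the code I kept finding looked roughly like this (a toy sketch in plain Python, not taken from any particular library):

```python
def upsample_linear_loops(A):
    """2x linear upsampling of a matrix (list of lists) with nested loops,
    roughly the style of code I kept running into."""
    m, n = len(A), len(A[0])
    B = [[0.0] * (2 * n - 1) for _ in range(2 * m - 1)]
    for i in range(2 * m - 1):
        for j in range(2 * n - 1):
            i0, i1 = i // 2, (i + 1) // 2   # equal when i is even
            j0, j1 = j // 2, (j + 1) // 2   # equal when j is even
            # average of the (at most four) surrounding original entries
            B[i][j] = (A[i0][j0] + A[i0][j1] + A[i1][j0] + A[i1][j1]) / 4.0
    return B
```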
Eventually I figured out a much simpler (and more satisfying!) way to interpolate a matrix (nearest neighbor and linear, so far) with linear algebra.
(Basically, if you have an m×n matrix A, you construct two very simple matrices: a (2m-1)×m matrix L and an n×(2n-1) matrix R, and the product L * A * R is a (2m-1)×(2n-1) matrix with interpolated rows and columns. Designing L and R is fun and easy.)
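Here's a rough sketch of what I mean in NumPy (my own toy code for the 2x linear case; interp_matrix is just a name I made up):

```python
import numpy as np

def interp_matrix(m):
    """Build the (2m-1) x m matrix that keeps the original samples
    on even rows and puts the average of neighbors on odd rows."""
    L = np.zeros((2 * m - 1, m))
    for i in range(m):
        L[2 * i, i] = 1.0            # even rows: copy an original entry
    for i in range(m - 1):
        L[2 * i + 1, i] = 0.5        # odd rows: midpoint of two neighbors
        L[2 * i + 1, i + 1] = 0.5
    return L

A = np.arange(6.0).reshape(2, 3)     # small 2x3 example
L = interp_matrix(2)                 # 3 x 2, interpolates the rows
R = interp_matrix(3).T               # 3 x 5, interpolates the columns
B = L @ A @ R                        # 3 x 5 interpolated result
print(B)
```

Nearest neighbor works the same way, except the odd rows of L get a single 1 (copying one neighbor) instead of the two 0.5s.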
So why don't programmers use matrix multiplication to interpolate matrices? In theory, shouldn't graphics cards make this a blazing fast calculation when compared to nested for loops?
Or maybe programmers do this already, but because it's so obvious, there's just not much information published on it?
Thanks :D
Edit: It seems my loose phrasing about using nested loops rather than matrix multiplication caused a lot of confusion. I didn't mean to imply that GPUs don't loop, just that the looping would be abstracted away behind some library or the GPU itself. Good software composes its functions like that. Also, programming this directly with nested loops forfeits optimizations from matrix math algorithms or the GPU.
Whether or not an algorithm for matrix products uses a nested for loop is actually irrelevant. Maybe the algorithm uses a recursive function. Maybe it uses some efficient but counter-intuitive hack, like Quake III's fast inverse square root. It doesn't really matter, and it's silly that this small ambiguity derailed the majority of the discussion.
It turns out that my understanding was mostly (completely?) right. GPUs are better for matrix math, but only for very large matrices.
And my question was thoroughly answered. The FFT approach is much faster than the O(n^3) matrix product. I highly recommend reading the answer and its comments by @datenwolf.
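For anyone curious, here's my (possibly naive) reading of that idea as code; the triangle kernel and the use of scipy.signal.fftconvolve are my own choices for the 2x linear case, not taken from the answer:

```python
import numpy as np
from scipy.signal import fftconvolve

def upsample_linear_fft(A):
    """Upsample an m x n matrix to (2m-1) x (2n-1) by zero-stuffing
    and convolving with a separable triangle (linear interpolation) kernel."""
    m, n = A.shape
    Z = np.zeros((2 * m - 1, 2 * n - 1))
    Z[::2, ::2] = A                                  # originals with zeros in between
    K = np.outer([0.5, 1.0, 0.5], [0.5, 1.0, 0.5])   # 2D triangle kernel
    return fftconvolve(Z, K, mode="same")            # FFT-based convolution
```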