
My goal is to build a tridiagonal matrix efficiently, using either a list comprehension or a better algorithm, but I am fairly new to Haskell. I am attempting to solve boundary value problems using finite difference methods.

So I have the following function:

createTriDiagonalMatrixList :: [Double] -> [Double]
createTriDiagonalMatrixList [] = []
createTriDiagonalMatrixList (x:y:z:zs) = (replicate rep1 0) ++ [x, y, z] ++ (replicate rep2 0) ++ (createTriDiagonalMatrixList $ zs)
  where
    len = length $ zs
    rep1 = if n - len `div` 3 < n then n - len `div` 3 else (n - 1)
    rep2 = if rep1 < 0 then (n + 2) - 3 else (n + 2) - (rep1 + 3)

where n is the number of internal grid points, so the matrix A should be (n+2)×(n+2). This function takes a list of diagonal entries (e.g. [1,2,3,4,5,6]) and turns it into [1,2,3,0,0,4,5,6], which I can then feed to hmatrix's "constructor" to make a matrix.

However, I do not need it to be in exactly this form; any form would do, such as [[1,2,3,0],[0,4,5,6]] or the original flat list.

So I would really appreciate tips, or a list comprehension with an explanation, because I am not sure how to do this: it seems I can only write list comprehensions over ranges like [1..n], and I need the other entries of the matrix to be 0 for it to be tridiagonal.

Any other pointers on my code would be very helpful because I am new to Haskell.

Justin

1 Answer


First: you should not build a tridiagonal matrix this way. I know, numerical methods like FDM are always explained in terms of matrices (because everybody in the field is Matlab-brainwashed), but this is
a) inelegant, since the underlying maths is about local operations (approximations of differential operators)
b) actually also super inefficient when using dense matrices. For elliptic BVPs specifically, you're probably going to use a conjugate gradient solver, which doesn't require a matrix at all, just a linear mapping, and that is much more efficiently implemented with stencil operations on the state vectors. It can also be done with sparse matrices, but that's really just a hack to keep using the same dumb languages that insist on calling everything a “matrix”.
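To make the stencil idea concrete, here is a minimal sketch (plain lists, nothing from the hmatrix API; `applyLaplacian` is a name I'm making up) of applying the standard 1-D second-difference stencil [1, -2, 1] directly to a state vector. This is exactly the kind of linear mapping a matrix-free CG solver needs, and no matrix is ever built:

```haskell
-- Sketch: the 1-D FDM Laplacian as a stencil on the state vector,
-- assuming homogeneous Dirichlet boundaries (neighbours outside the
-- grid are taken to be 0).  This function IS the linear mapping.
applyLaplacian :: [Double] -> [Double]
applyLaplacian [] = []
applyLaplacian v =
  zipWith3 (\l c r -> l - 2*c + r)
           (0 : v)           -- left neighbours, 0 at the left boundary
           v                 -- centre points
           (tail v ++ [0])   -- right neighbours, 0 at the right boundary
```

For example, `applyLaplacian [1,2,3]` gives `[0,0,-4]`, the same result as multiplying by the corresponding dense tridiagonal matrix would.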

That said... sure, you can get the behaviour you asked for. Basically all you need to change is the return type, to a nested list; then, instead of prepending the current row to the rest of the result, you cons it onto the rest of the matrix with the : operator.

createTriDiagonalMatrixList :: [Double] -> [[Double]]
createTriDiagonalMatrixList [] = []
createTriDiagonalMatrixList (x:y:z:zs)
     = (replicate rep1 0 ++ [x, y, z] ++ replicate rep2 0) 
          : createTriDiagonalMatrixList zs
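For a standalone sketch (the snippet above still relies on the rep1, rep2 and n bindings from your where clause), you could also track the row offset explicitly instead of deriving it from length zs. Here triDiagRows and size are hypothetical names of my own, with size playing the role of your n + 2:

```haskell
-- Hypothetical standalone variant: build the rows from the row index,
-- so no length arithmetic is needed.  Row i gets i leading zeros, then
-- its three band entries, then trailing zeros up to width `size`.
triDiagRows :: Int -> [Double] -> [[Double]]
triDiagRows size = go 0
  where
    go _ [] = []
    go i (x:y:z:zs) =
      (replicate i 0 ++ [x, y, z] ++ replicate (size - i - 3) 0)
        : go (i + 1) zs
    go _ _ = []   -- silently drop a trailing partial triple
```

With your example input, `triDiagRows 4 [1,2,3,4,5,6]` gives `[[1,2,3,0],[0,4,5,6]]`, i.e. the nested-list form you mentioned.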

Other remarks about your code:

  • You don't need the $ operator just to apply a function to its argument: length $ zs ≡ length (zs) ≡ length zs. You only need parentheses or $ to override precedence, and since function application binds more tightly than any infix operator, the parentheses in (replicate rep1 0) ++ [x, y, z] are superfluous too.

  • if-then-else logic is usually more readable written with guards. Guards can be used on any binding, regardless of whether it defines a function or just a local variable:

    createTriDiagonalMatrixList (x:y:z:zs)
         = (replicate rep1 0 ++ [x, y, z] ++ replicate rep2 0) 
              : createTriDiagonalMatrixList zs
      where
        len = length zs
        rep1
         | n - len`div`3 < n  = n - len`div`3
         | otherwise          = n - 1
        rep2
         | rep1 < 0   = (n + 2) - 3    -- these parens are also superfluous, but you may keep them for clarity
         | otherwise  = (n + 2) - (rep1 + 3)
    

    Actually, this could be simplified further using max and min, but that's a matter of taste.
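For instance, here is a sketch of the min/max version of your two bindings. rep1For and rep2For are hypothetical standalone names of my own, taking n and len as explicit arguments so the equivalence can be checked in isolation:

```haskell
-- Equivalent to the guarded definitions above:
-- rep1 is n - len`div`3, capped at n - 1 (the cap only fires when
-- len`div`3 is 0); rep2 subtracts rep1, clamped below at 0, from n - 1.
rep1For, rep2For :: Int -> Int -> Int
rep1For n len = min (n - len `div` 3) (n - 1)
rep2For n len = (n - 1) - max 0 (rep1For n len)
```

For example, with n = 3 and len = 6 these give rep1 = 1 and rep2 = 1, matching the guarded definitions.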

leftaroundabout
  • Oh I see... I am Matlab-brainwashed. My entire graduate career was in Matlab and I'm trying to change course. Thank you. I looked into using mappings. Pretty cool. – Justin Feb 16 '21 at 13:46
  • Haha! No offense meant. Truth is, Matlab _is_ still more practical to use for this kind of stuff than Haskell, because so much more work / money is pumped into it. (Likewise for Python and R.) Haskell _could_ do all this much better thanks to its type system, but the few people who actually use it for numerical work have a hard enough time keeping up with the matrix languages. – leftaroundabout Feb 16 '21 at 14:57
  • What I keep complaining about is that even they hardly _try_ to actually make use of Haskell's abstraction capabilities in a better way, but just wrap the standard matrix approaches in a not-all-that-useful compile-time array dimension checker. —But then, maybe I'm just utopist – my own approaches in that direction haven't really succeeded either. – leftaroundabout Feb 16 '21 at 14:58