It is more Haskell-idiomatic not to speak of matrices at all: a matrix's dimension is just a number, which tells you little about the structure of the mapping or the spaces it maps between. Instead, matrix multiplication is best treated as the composition operation in the category Vect_k of vector spaces over a field k. Vector spaces are naturally represented by Haskell types; the vector-space library has provided this for a long time.
As composition of linear functions, the dimension check is then a corollary of the type checking that Haskell performs for function composition anyway. And not only that: you can also distinguish between spaces that are incompatible despite having the same dimension. For instance, matrices themselves form a vector space (a tensor space), but the space of 3×3 matrices isn't really compatible with the space of 9-element vectors. In Matlab and other “array languages”, dealing with linear mappings on a space of linear mappings requires error-prone reshaping between tensors of different rank; surely we don't want that in Haskell!
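To make this concrete, here is a minimal sketch (no particular library assumed; `Lin`, `V3` and `V9` are illustrative names, and the function-based representation is naive): tagging a linear map with its domain and codomain types turns the dimension check into ordinary type checking, and keeps same-dimension spaces apart.

```haskell
newtype V3 = V3 (Double, Double, Double)  -- a 3-dimensional space
newtype V9 = V9 [Double]  -- 9-dimensional: same dimension as 3×3
                          -- matrices, but a distinct type

-- A linear map between spaces u and v (stored here simply as a
-- function; a real library would keep a matrix-like representation).
newtype Lin u v = Lin { apply :: u -> v }

-- Composition type-checks only when the middle space agrees,
-- just like (.) for ordinary functions.
compose :: Lin v w -> Lin u v -> Lin u w
compose (Lin g) (Lin f) = Lin (g . f)

scale2 :: Lin V3 V3
scale2 = Lin (\(V3 (x, y, z)) -> V3 (2*x, 2*y, 2*z))

sumC :: Lin V3 Double
sumC = Lin (\(V3 (x, y, z)) -> x + y + z)

main :: IO ()
main = print (apply (sumC `compose` scale2) (V3 (1, 2, 3)))
```

Trying to compose `sumC` with anything of type `Lin u V9` would be rejected at compile time, even though `V9` has the "right" number of components for a reshaped 3×3 matrix.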
There's one catch: to implement these maps efficiently, you can't allow functions between arbitrary types; you need an underlying representation that's still matrix-like. That only works when every permitted object type is actually a vector space, so you can't use the standard Category class, which demands an id morphism at every type and composition between arbitrary types. Instead you need a category class whose objects are restricted to vector spaces. That's not really hard to express in modern Haskell, though.
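One common way to express such a restriction is an associated Constraint for the category's objects. The following is a sketch of that pattern (the class and method names here are illustrative, not any particular library's actual API):

```haskell
{-# LANGUAGE ConstraintKinds, TypeFamilies, KindSignatures #-}
import Data.Kind (Constraint, Type)

-- A category class where each instance chooses which types count
-- as objects, via an associated Constraint.
class ConstrainedCategory (k :: Type -> Type -> Type) where
  type Object k a :: Constraint
  identity :: Object k a => k a a
  (<<<.)   :: (Object k a, Object k b, Object k c)
           => k b c -> k a b -> k a c

-- Ordinary functions form such a category with no restriction at all.
instance ConstrainedCategory (->) where
  type Object (->) a = ()
  identity = id
  (<<<.)   = (.)

-- A linear-map category would instead declare something like
--   type Object LinMap a = VectorSpace a
-- so that only vector spaces are admitted as objects.

main :: IO ()
main = print (((+ 1) <<<. (* 2)) 3)
```

With this in place, a linear-map type can carry an efficient matrix-like representation internally while the class's `Object` constraint guarantees every object it composes over really is a vector space.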
Two libraries that have gone this route are:
- Mike Izbicki's subhask, which uses hmatrix matrices internally but exposes a nice high-level interface.
- My own linearmap-category, which uses a dedicated implementation of the tensors spanned by each space.