
What is the status of the matrix class in NumPy?

I keep being told that I should use the ndarray class instead. Is it worth/safe using the matrix class in new code I write? I don't understand why I should use ndarrays instead.


1 Answer


tl;dr: the numpy.matrix class is being deprecated. Some high-profile libraries depend on the class (the largest one being scipy.sparse), which hinders proper short-term deprecation, but users are strongly encouraged to use the ndarray class (usually created with the numpy.array convenience function) instead. With the introduction of the @ operator for matrix multiplication, many of the relative advantages of matrices have been removed.

Why (not) the matrix class?

numpy.matrix is a subclass of numpy.ndarray. It was originally meant for convenient use in computations involving linear algebra, but there are both limitations and surprising differences in how it behaves compared to instances of the more general array class. Examples of fundamental differences in behaviour (a short demo follows the list):

  • Shapes: arrays can have an arbitrary number of dimensions ranging from 0 to infinity (or 32). Matrices are always two-dimensional. Oddly enough, while a matrix can't be created with more dimensions, it's possible to inject singleton dimensions into a matrix to end up with technically a multidimensional matrix: np.matrix(np.random.rand(2,3))[None,...,None].shape == (1,2,3,1) (not that this is of any practical importance).
  • Indexing: indexing arrays can give you arrays of any size depending on how you index them. Indexing expressions on matrices will always give you a matrix. This means that both arr[:,0] and arr[0,:] for a 2d array give you a 1d ndarray, while mat[:,0] has shape (N,1) and mat[0,:] has shape (1,M) for a matrix.
  • Arithmetic operations: the main reason for using matrices in the old days was that arithmetic operations (in particular, multiplication and power) on matrices perform matrix operations (matrix multiplication and matrix power). The same operations on arrays result in elementwise multiplication and power. Consequently mat1 * mat2 is valid if mat1.shape[1] == mat2.shape[0], while arr1 * arr2 is valid if arr1.shape == arr2.shape (and of course the result means something completely different). Also, surprisingly, mat1 / mat2 performs elementwise division of two matrices. This behaviour is probably inherited from ndarray, but it makes no sense for matrices, especially in light of the meaning of *.
  • Special attributes: matrices have a few handy attributes in addition to what arrays have: mat.A and mat.A1 are array views with the same value as np.array(mat) and np.array(mat).ravel(), respectively. mat.T and mat.H are the transpose and conjugate transpose (adjoint) of the matrix; arr.T is the only such attribute that exists for the ndarray class. Finally, mat.I is the inverse matrix of mat.
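
A minimal sketch demonstrating the above differences (the exact values are arbitrary; only the shapes and types matter):

import numpy as np

arr = np.arange(6).reshape(3, 2)  # 2d ndarray
mat = np.matrix(arr)              # same data as a matrix

print(arr[:, 0].shape)  # (3,): indexing an array can drop a dimension
print(mat[:, 0].shape)  # (3, 1): indexing a matrix always gives a matrix

print((arr * arr).shape)    # (3, 2): elementwise product for arrays
print((mat * mat.T).shape)  # (3, 3): matrix product for matrices

print(mat.A1.shape)            # (6,): flattened array view
print((mat.H == mat.T).all())  # True: adjoint equals transpose for real data
print(np.matrix(np.eye(2)).I)  # inverse matrix; for arrays use np.linalg.inv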

It's easy enough to write code that works either with ndarrays or with matrices. But when there's a chance that the two classes have to interact, things start to become difficult. In particular, a lot of code can work naturally with subclasses of ndarray, but matrix is an ill-behaved subclass that can easily break code that tries to rely on duck typing. Consider the following example using arrays and matrices of shape (3,4):

import numpy as np

shape = (3, 4)
arr = np.arange(np.prod(shape)).reshape(shape) # ndarray
mat = np.matrix(arr) # same data in a matrix
print((arr + mat).shape)           # (3, 4), makes sense
print((arr[0,:] + mat[0,:]).shape) # (1, 4), makes sense
print((arr[:,0] + mat[:,0]).shape) # (3, 3), surprising

Adding slices of the two objects gives catastrophically different results depending on the dimension along which we slice. Addition on both matrices and arrays happens elementwise when the shapes are the same. The first two cases above are intuitive: we add two arrays (matrices), then we add two rows from each. The last case is really surprising: we probably meant to add two columns but ended up with a matrix. The reason, of course, is that arr[:,0] has shape (3,), which is compatible with shape (1,3), but mat[:,0] has shape (3,1). The two are broadcast together to shape (3,3).
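
The broadcasting rule at play can be checked directly (np.broadcast_shapes is available from numpy 1.20 onward):

print(np.broadcast_shapes((3,), (3, 1)))  # (3, 3): the 1d shape is left-padded to (1, 3)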

Finally, the largest advantage of the matrix class (i.e. the possibility to concisely formulate complicated matrix expressions involving a lot of matrix products) was removed when the @ matmul operator was introduced in python 3.5, first implemented in numpy 1.10. Compare the computation of a simple quadratic form:

v = np.random.rand(3); v_row = np.matrix(v)
arr = np.random.rand(3,3); mat = np.matrix(arr)

print(v.dot(arr.dot(v))) # pre-matmul style
# 0.713447037658556, yours will vary
print(v_row * mat * v_row.T) # pre-matmul matrix style
# [[0.71344704]]
print(v @ arr @ v) # matmul style
# 0.713447037658556

Looking at the above, it's clear why the matrix class was widely preferred for working with linear algebra: the infix * operator made the expressions much less verbose and much easier to read. However, we get the same readability with the @ operator in modern python and numpy. Furthermore, note that the matrix case gives us a matrix of shape (1,1), which should technically be a scalar. This also implies that we can't multiply a column vector with this "scalar": (v_row * mat * v_row.T) * v_row.T in the above example raises an error because matrices of shape (1,1) and (3,1) can't be multiplied in this order.
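
Continuing the snippet above, a short sketch of this failure mode (the ndarray version composes, while the matrix version raises):

print((v @ arr @ v) * v)  # a true scalar times a 1d array: works fine
try:
    (v_row * mat * v_row.T) * v_row.T
except ValueError as err:
    print(err)  # shapes (1,1) and (3,1) not aligned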

For completeness' sake it should be noted that while the matmul operator fixes the most common scenario in which ndarrays are suboptimal compared to matrices, there are still a few shortcomings in handling linear algebra elegantly with ndarrays (although people still tend to believe that overall it's preferable to stick to arrays). One such example is matrix power: mat ** 3 is the proper third matrix power of a matrix (whereas it's the elementwise cube of an ndarray). Unfortunately numpy.linalg.matrix_power is considerably more verbose. Furthermore, in-place matrix multiplication only works for the matrix class. In contrast, while both PEP 465 and the python grammar allow @= as an augmented assignment with matmul, this is not implemented for ndarrays as of numpy 1.15.
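
A quick comparison of the two spellings of matrix power (reusing arr and mat from the earlier snippets):

from numpy.linalg import matrix_power

print(mat ** 3)              # third matrix power of a matrix
print(matrix_power(arr, 3))  # same result for an ndarray, but more verbose
print(arr ** 3)              # elementwise cube: not a matrix power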

Deprecation history

Considering the above complications concerning the matrix class, there have been recurring discussions of its possible deprecation for a long time. The introduction of the @ infix operator, which was a huge prerequisite for this process, happened in September 2015. Unfortunately the advantages of the matrix class in its earlier days meant that its use spread widely. There are libraries that depend on the matrix class (one of the most important dependents being scipy.sparse, which mimics numpy.matrix semantics and often returns matrices when densifying), so fully deprecating it has always been problematic.

Already in a numpy mailing list thread from 2009 I found remarks such as

numpy was designed for general purpose computational needs, not any one branch of math. nd-arrays are very useful for lots of things. In contrast, Matlab, for instance, was originally designed to be an easy front-end to a linear algebra package. Personally, when I used Matlab, I found that very awkward -- I was usually writing 100s of lines of code that had nothing to do with linear algebra, for every few lines that actually did matrix math. So I much prefer numpy's way -- the linear algebra lines of code are longer and more awkward, but the rest is much better.

The Matrix class is the exception to this: it was written to provide a natural way to express linear algebra. However, things get a bit tricky when you mix matrices and arrays, and even when sticking with matrices there are confusions and limitations -- how do you express a row vs a column vector? what do you get when you iterate over a matrix? etc.

There has been a bunch of discussion about these issues, a lot of good ideas, a little bit of consensus about how to improve it, but no one with the skill to do it has enough motivation to do it.

These reflect the benefits and difficulties arising from the matrix class. The earliest suggestion for deprecation I could find is from 2008, although partly motivated by unintuitive behaviour that has changed since (in particular, slicing and iterating over a matrix will result in (row) matrices as one would most likely expect). The suggestion showed both that this is a highly controversial subject and that infix operators for matrix multiplication are crucial.

The next mention I could find is from 2014, which turned out to be a very fruitful thread. The ensuing discussion raises the question of handling numpy subclasses in general, a general theme that is still very much on the table. There is also strong criticism:

What sparked this discussion (on Github) is that it is not possible to write duck-typed code that works correctly for:

  • ndarrays
  • matrices
  • scipy.sparse sparse matrices

The semantics of all three are different; scipy.sparse is somewhere between matrices and ndarrays with some things working randomly like matrices and others not.

With some hyperbole added, one could say that from the developer point of view, np.matrix is doing and has already done evil just by existing, by messing up the unstated rules of ndarray semantics in Python.

followed by a lot of valuable discussion of the possible futures for matrices. Even with no @ operator at the time there is a lot of thought given to the deprecation of the matrix class and how it might affect users downstream. As far as I can tell this discussion has directly led to the inception of PEP 465 introducing matmul.

In early 2015:

In my opinion, a "fixed" version of np.matrix should (1) not be a np.ndarray subclass and (2) exist in a third party library not numpy itself.

I don't think it's really feasible to fix np.matrix in its current state as an ndarray subclass, but even a fixed matrix class doesn't really belong in numpy itself, which has too long release cycles and compatibility guarantees for experimentation -- not to mention that the mere existence of the matrix class in numpy leads new users astray.

Once the @ operator had been available for a while, the discussion of deprecation surfaced again, raising again the question of how matrix deprecation relates to scipy.sparse.

Eventually, the first action to deprecate numpy.matrix was taken in late November 2017. Regarding dependents of the class:

How would the community handle the scipy.sparse matrix subclasses? These are still in common use.

They're not going anywhere for quite a while (until the sparse ndarrays materialize at least). Hence np.matrix needs to be moved, not deleted.

(source) and

while I want to get rid of np.matrix as much as anyone, doing that anytime soon would be really disruptive.

  • There are tons of little scripts out there written by people who didn't know better; we do want them to learn not to use np.matrix but breaking all their scripts is a painful way to do that

  • There are major projects like scikit-learn that simply have no alternative to using np.matrix, because of scipy.sparse.

So I think the way forward is something like:

  • Now or whenever someone gets together a PR: issue a PendingDeprecationWarning in np.matrix.__init__ (unless it kills performance for scikit-learn and friends), and put a big warning box at the top of the docs. The idea here is to not actually break anyone's code, but start to get out the message that we definitely don't think anyone should use this if they have any alternative.

  • After there's an alternative to scipy.sparse: ramp up the warnings, possibly all the way to FutureWarning so that existing scripts don't break but they do get noisy warnings

  • Eventually, if we think it will reduce maintenance costs: split it into a subpackage

(source).

Status quo

As of May 2018 (numpy 1.15, relevant pull request and commit) the matrix class docstring contains the following note:

It is no longer recommended to use this class, even for linear algebra. Instead use regular arrays. The class may be removed in the future.

And the documentation page for standard array subclasses says

It is strongly advised not to use the matrix subclass. As described below, it makes writing functions that deal consistently with matrices and regular arrays very difficult. Currently, they are mainly used for interacting with scipy.sparse. We hope to provide an alternative for this use, however, and eventually remove the matrix subclass.

At the same time a PendingDeprecationWarning has been added to matrix.__new__. Unfortunately, deprecation warnings are (almost always) silenced by default, so most end-users of numpy will not see this strong hint.
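
To actually see the hint, one has to opt in through the standard warnings machinery; a minimal sketch:

import warnings

# PendingDeprecationWarning is ignored by default; enable it explicitly
warnings.simplefilter('always', PendingDeprecationWarning)

import numpy as np
np.matrix([[1, 2], [3, 4]])  # now emits the warning added in numpy 1.15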

Finally, the numpy roadmap as of November 2018 mentions multiple related topics as one of the "tasks and features [the numpy community] will be investing resources in":

Some things inside NumPy do not actually match the Scope of NumPy.

  • A backend system for numpy.fft (so that e.g. fft-mkl doesn’t need to monkeypatch numpy)
  • Rewrite masked arrays to not be a ndarray subclass – maybe in a separate project?
  • MaskedArray as a duck-array type, and/or
  • dtypes that support missing values
  • Write a strategy on how to deal with overlap between numpy and scipy for linalg and fft (and implement it).
  • Deprecate np.matrix

It's likely that this state will stay as long as larger libraries/many users (and in particular scipy.sparse) rely on the matrix class. However, there's ongoing discussion to move scipy.sparse to depend on something else, such as pydata/sparse.

In SciPy 1.8 (released February 2022) a sparse array API was introduced for early testing and feedback, with the potential to remove the np.matrix legacy eventually. This replicates the SciPy sparse containers with an interface that matches the behaviour of NumPy arrays (rather than matrices). Maintainers of downstream libraries such as NetworkX and scikit-learn are eager to switch to the new API as soon as possible.
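
A minimal sketch of the difference (assuming scipy >= 1.8, where csr_array mirrors csr_matrix but with ndarray-style semantics):

from scipy.sparse import csr_array, csr_matrix
import numpy as np

dense = np.arange(4).reshape(2, 2)
legacy = csr_matrix(dense)  # np.matrix-style semantics
modern = csr_array(dense)   # ndarray-style semantics

print((legacy * legacy).toarray())  # matrix product, like np.matrix
print((modern * modern).toarray())  # elementwise product, like ndarray
print((modern @ modern).toarray())  # matrix product is spelled @, as for arrays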

Irrespective of the developments of the deprecation process users should use the ndarray class in new code and preferably port older code if possible. Eventually the matrix class will probably end up in a separate package to remove some of the burdens caused by its existence in its current form.
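
When porting, the common matrix idioms map to ndarray equivalents roughly as follows (a sketch, not an exhaustive list):

import numpy as np

mat = np.matrix(np.random.rand(3, 3))  # legacy object in old code
arr = np.asarray(mat)                  # or mat.A: drop the wrapper, keep the data

# old matrix idiom   ->  ndarray equivalent
# mat1 * mat2        ->  arr1 @ arr2
# mat.T, mat.H       ->  arr.T, arr.conj().T
# mat.I              ->  np.linalg.inv(arr)
# mat ** n           ->  np.linalg.matrix_power(arr, n)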

  • I don't see `scipy.sparse` as depending on `np.matrix`. Yes, as implemented it is restricted to 2d, and its use of operators is modeled on the `np` version. But none of the sparse formats is a subclass of `np.matrix`. And the converter to `np.matrix`, `sparse.todense`, is actually implemented as `np.asmatrix(M.toarray())`. – hpaulj Nov 12 '18 at 01:06
  • Originally `sparse` was created for linear algebra, with `csr` and `csc` being central, and other formats serving as creation tools. It was modeled on the MATLAB code, which as far as I can tell is limited to `csc` format. However `sparse` is getting more use in machine learning and big data uses. `sklearn` has a set of its own sparse utilities. I don't know if those other uses benefit from nd sparse arrays or not. Perhaps tangentially `pandas` has its own version(s) of sparsity (series and dataframe). – hpaulj Nov 12 '18 at 01:14
  • @hpaulj as I understand the issue is beyond dependency. There were mentions in the above email threads that the _semantics_ of scipy.sparse also closely mimics that of `matrix` (I'll try to find the relevant messages). As well as examples where implicit operations leading to a dense result created numpy matrices; though this might not be the case anymore considering the age of some of the email threads. Anyway, `scipy.sparse` as a (if not _the_) major downstream consideration keeps coming up in discussions concerning the future of `matrix`. – Andras Deak -- Слава Україні Nov 12 '18 at 01:20
  • @hpaulj [found it](https://mail.python.org/pipermail/numpy-discussion/2017-January/076316.html). "_The major problem I have with removing numpy matrices is the effect on scipy.sparse, which mostly-consistently mimics numpy.matrix semantics and often produces numpy.matrix results when densifying._" from 2017 to which Ralf replied "_I think we're stuck with scipy.sparse, and may at some point will add a new sparse *array* implementation next to it. For scipy we will have to add a dependency on the new npmatrix package or vendor it_". – Andras Deak -- Слава Україні Nov 12 '18 at 01:25
  • Row and column sums of sparse matrices do return dense matrices. I'd have to check the implementation but I doubt if that's a deep dependency. – hpaulj Nov 12 '18 at 02:05
  • As someone who's more on the application side of using `numpy` - thank goodness. Between parsing code and chasing errors based on conflating `ndarray` and `matrix`, and trying to do higher-dimensionality tensor algebra with a language that often seems to assume that 2D `matrix` is "good enough," this bifurcation has been a huge headache since I started using `numpy`. A big thanks to those doing the difficult coding I know must be going on in the background to get this done. – Daniel F Nov 12 '18 at 07:32
  • I particularly like that infinity = 32 – pipe Nov 12 '18 at 10:05
  • As someone coming from Matlab, and who will be using numpy for linear algebra a lot, I wonder if there is a tutorial how to do efficient and *elegant* LA with numpy, especially since one isn't supposed to use `matrix`? I stumble already over simple things like creating a column vector. Maybe I can get away without them, but I value very much the conceptual clarity that comes with the distinction between row and column vectors. – A. Donda May 20 '19 at 23:12
  • @A.Donda my experience is that you will be much more productive if you let go of row and column vectors. For instance `v @ A` and `A @ v` both work for 1d arrays `v` I believe, just as if they were row/column vectors, respectively. For more complex things you'll need either broadcasting or `einsum`, using again a different kind of logic. But I understand that it might take getting used to (though personally, I made the switch very easily and never looked back). – Andras Deak -- Слава Україні May 20 '19 at 23:24
  • @AndrasDeak, well I may be wrong, but I not only program linear algebra, I think in it. I develop data analysis methods using symbolic mathematical derivations using linear algebra, and then implement them. From that perspective, I would actually prefer if one of those two expression with `@` wouldn't work, because the corresponding mathematical expression doesn't make sense. What I like about Matlab is that I can read the code almost as I read the math, and if the code doesn't work (produces an error), it tells me that I thought wrong. That is very valuable to me. – A. Donda May 20 '19 at 23:31
  • @A.Donda having given it some thought: you can use arrays of shape `(1,n)` and `(n,1)` to restrict operations the same way you wanted the `matrix` class to work. Consider `vrow = np.random.rand(3)[None,:]; vcol = np.random.rand(3)[:,None]; M = np.random.rand(3,3)`. The resulting arrays will only obey linear algebra, and the singleton dimensions will be preserved, so `vrow @ vcol` is a 2d array of shape `(1,1)` and `vcol @ vrow` is a 2d array of shape `(3,3)`. There might be some performance hit from using matrix rather than vector dot, but the semantics should be preserved the way you prefer. – Andras Deak -- Слава Україні May 22 '19 at 18:06
  • That looks exactly like what I was looking for, thanks! So I can assume that a 2D array behaves like a matrix, and when I have a 1D array, I can convert it into a row- or column-vector by using `[None, :]` and `[:, None]`, respectively? – A. Donda May 22 '19 at 18:16
  • Btw. since I piggybacked your answer, if you like I could post a new question and you can post your comment as an answer. – A. Donda May 22 '19 at 18:17
  • @A.Donda regarding your first question: yes, a 2d array (with one singleton dimension) is a matrix. But there can be surprises, for instance taking a slice (row or column) of a proper matrix will be a 1d array, so you need to take care not to lose dimensions. And yes, that is a short-hand for injecting singleton dimensions (perhaps you'd prefer the explicit `onedarray.reshape(1,-1)` and `...(-1,1)` to say the same thing.). As for your second comment: thanks, but as far as I'm concerned I'm fine the way we are (there might be duplicates of such a question already). – Andras Deak -- Слава Україні May 23 '19 at 19:45