
Is there a way to generate a random positive semi-definite matrix with given eigenvalues and eigenvectors in Python?

I looked at this, but it does not allow specifying eigenvalues for matrix construction.

Context: I want to generate random multivariate Gaussians with controlled ellipticity. Because the major/minor axes of the distribution have lengths proportional to the eigenvalues, I want my covariance matrix to have them. A definition can be found here (page 81).

Laurynas Tamulevičius

1 Answer


When you don't have the eigenvectors but only want some eigenvalues, you can list your desired eigenvalues and use an orthonormal matrix to jumble them up. Since congruence transformations don't change the inertia of a matrix (well, up to numerical precision), you can use the Q matrix of the QR decomposition of a random matrix (or any other way to generate an orthonormal matrix).

import numpy as np
import scipy.linalg as la

des = [1, 0, 3, 4, -2, 0, 0]        # desired eigenvalues
n = len(des)
s = np.diag(des)                    # diagonal matrix holding the eigenvalues
q, _ = la.qr(np.random.rand(n, n))  # orthonormal Q from a QR decomposition
semidef = q.T @ s @ q               # congruence transformation with orthonormal q
np.linalg.eigvalsh(semidef)         # eigenvalues of the resulting symmetric matrix

gives

array([-2.00000000e+00, -2.99629568e-16, -5.50063275e-18,  2.16993906e-16,
        1.00000000e+00,  3.00000000e+00,  4.00000000e+00])

When you actually have the eigenvectors as well, you can simply construct the matrix directly; that is the definition of the eigenvalue decomposition.
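As a minimal sketch of that direct construction: with the eigenvectors as the columns of an orthonormal matrix `V`, the matrix is `V @ diag(eigvals) @ V.T`. Here a random orthonormal `V` from `scipy.stats.ortho_group` (mentioned in the comments below) stands in for your chosen eigenvectors.

```python
import numpy as np
import scipy.stats

eigvals = np.array([4.0, 1.0, 0.0])   # desired (non-negative) eigenvalues
V = scipy.stats.ortho_group.rvs(3)    # stand-in orthonormal eigenvector matrix

# eigenvalue decomposition: A = V @ diag(eigvals) @ V.T
A = V @ np.diag(eigvals) @ V.T

# A is symmetric and has exactly the requested spectrum
recovered = np.linalg.eigvalsh(A)     # returns eigenvalues in ascending order
```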

percusse
    *"... or any other way to generate an orthonormal matrix"*: [`scipy.stats.ortho_group`](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ortho_group.html). – Warren Weckesser Mar 30 '18 at 13:11
  • @WarrenWeckesser That function name is really unfortunate, but really good to know, thanks. – percusse Mar 30 '18 at 13:20
  • @percusse I was looking a bit more into this and was puzzled about how the eigenvalues themselves are preserved after the transformation, because Sylvester's law of inertia only says that the number of +ve, -ve and 0 eigenvalues is preserved. Is it because we apply an orthogonal transformation? – Laurynas Tamulevičius Mar 30 '18 at 17:14
  • @LaurynasTamulevičius Yes, basically; they are essentially a bunch of weighted dot products. – percusse Mar 30 '18 at 17:23
  • @percusse thanks, do you know if there's a formal proof for this? Because I am writing a project and need to justify that – Laurynas Tamulevičius Mar 30 '18 at 18:04
  • @LaurynasTamulevičius You can use the fact that `q.T = inv(q)`, so it becomes a similarity transformation. If we only had an invertible (not orthogonal) q, the eigenvalues would have moved. – percusse Mar 30 '18 at 18:22