
I'm not new to numpy, but the solution to this problem has been eluding me for a while now, so I thought that I would ask here.

Given the equation Y = A.T @ X + X @ A,

where A and X are square matrices,

construct B such that B @ X.ravel() == Y.ravel().

The problem is just one small part of a bigger piece of code I am trying to implement, so I wrote a minimal reproducible example, given below.

The challenge I pose to you is to fill out the function create_B in a way such that the assertion does not raise an error.

The list of things I have attempted myself is far too long to include in this question, and I don't feel that a list of my own failed attempts would add any value anyway.

import numpy as np


def create_B(A):
    B = np.zeros((A.size, A.size))

    return B


A = np.random.uniform(-1, 1, (100, 100))
X = np.random.uniform(-1, 1, (100, 100))

Y = A.T @ X + X @ A

B = create_B(A)

assert np.allclose(B @ X.ravel(), Y.ravel())

I should probably mention that I can easily get this working using loops, but in reality A is very big and I need to do this many times, so I need an optimized solution (i.e. no Python loops).

Basically I am looking for a solution of the form:

def create_B(A):
    B = np.zeros((A.size, A.size))
    B[indices1] = A
    B[indices2] += A
    return B

Where the challenge then is to create the index tuples indices1 and indices2.
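For what it's worth, one way to build such index assignments without Python loops is broadcasting. This is a sketch based on the same index pattern as the loop example further down, not the only possible construction:

```python
import numpy as np

def create_B(A):
    # Writing k = i*n + j for the raveled row index, the two terms of
    # Y = A.T @ X + X @ A land at
    #   B[i*n + j, m*n + j]  = A[m, i]   (the A.T @ X term)
    #   B[i*n + j, i*n + m] += A[m, j]   (the X @ A term)
    # and broadcasting can fill each term in one shot.
    n = A.shape[0]
    B = np.zeros((A.size, A.size))
    i = np.arange(n)[:, None, None]
    j = np.arange(n)[None, :, None]
    m = np.arange(n)[None, None, :]
    B[i * n + j, m * n + j] = A[m, i]
    B[i * n + j, i * n + m] += A[m, j]
    return B
```

The += in the second assignment handles the entries where the two terms overlap (the diagonal positions), exactly as in the loop version.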

An example of a solution using loops (not what I'm looking for):

def create_B(A):
    B = np.zeros((A.size, A.size))

    indices = np.arange(A.size).reshape(A.shape)

    for i, row in enumerate(indices):
        for j, k in enumerate(row):
            B[k, indices[:, j]] = A[:, i]
            B[k, indices[i, :]] += A[:, j]

    return B

You can confirm for yourself that this is indeed a solution.
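For readers comparing approaches: with NumPy's row-major (C-order) ravel, (A.T @ X).ravel() equals np.kron(A.T, I) @ X.ravel() and (X @ A).ravel() equals np.kron(I, A.T) @ X.ravel(), which gives a closed form you can check against the loop version. A minimal sketch (create_B_kron is an illustrative name, not from the original post):

```python
import numpy as np

def create_B_kron(A):
    # Row-major vec identities:
    #   (A.T @ X).ravel() == np.kron(A.T, I) @ X.ravel()
    #   (X @ A).ravel()   == np.kron(I, A.T) @ X.ravel()
    n = A.shape[0]
    I = np.eye(n)
    return np.kron(A.T, I) + np.kron(I, A.T)
```

Note that this still allocates the full n² × n² matrix, so it removes the Python loops without avoiding the memory cost.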

Vinzent

1 Answer


Your equation is a continuous-time Lyapunov equation, which can be solved with scipy.linalg.solve_continuous_lyapunov.

Try this:

from scipy.linalg import solve_continuous_lyapunov

# A and X as defined in the question
Y = A.T @ X + X @ A
X_hat = solve_continuous_lyapunov(A.T, Y)

assert np.allclose(A.T @ X_hat + X_hat @ A, Y)
assert np.allclose(X_hat, X)

If you need B to be computed explicitly, you will inevitably end up with an n² × n² matrix.

I fear you will not get anything much better than what you already have (unless you build it as a sparse matrix).
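If a sparse B suffices, one sketch (assuming the Kronecker decomposition B = kron(A.T, I) + kron(I, A.T), which holds for NumPy's row-major ravel; create_B_sparse is an illustrative name):

```python
import numpy as np
from scipy import sparse

def create_B_sparse(A):
    # Each Kronecker factor contributes at most n^3 nonzeros,
    # versus n^4 entries for the dense n^2 x n^2 matrix.
    n = A.shape[0]
    I = sparse.identity(n, format="csr")
    return sparse.kron(A.T, I, format="csr") + sparse.kron(I, A.T, format="csr")
```

The result supports B @ X.ravel() directly, so it can be dropped into the question's assertion unchanged.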

Bob
  • "I assume that you meant (B @ X).ravel()" No, my question is correct: I do mean B @ X.ravel(). Think of it like this: I need to turn the expression A.T @ X + X @ A into a product of a matrix B and a vector X.ravel(). The purpose of all this is to be able to solve the LMI A.T @ X + X @ A <= 0 using scipy.optimize.linprog, and for that I need to formulate the problem as A_ub @ x <= b_ub, where x is a vector. – Vinzent Nov 28 '21 at 12:06
  • Sorry, I see now. Either way, this does not affect my answer: if you pick any negative semidefinite matrix $Q$, you will have an $X$ that is a solution to your LMI. – Bob Nov 28 '21 at 12:25