
I have a big matrix which is a QuTiP object. I am trying to run this line of code:

ops_numpy = [op.full() for op in m_ops] # convert the QuTiP Qobj to numpy arrays

But I am getting the following error:

MemoryError: Unable to allocate 16.0 TiB for an array with shape (1048576, 1048576) and
data type complex128

Here, m_ops is a list with len(m_ops) = 27 and every m_ops[i] is a quantum object of shape:

In [91]: m_ops[1].shape
Out[91]: (1048576, 1048576)
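
A quick back-of-the-envelope check (16 bytes per complex128 element) shows where the 16 TiB in the error comes from:

import numpy as np

n = 1048576                                  # 2**20 rows and columns
itemsize = np.dtype(np.complex128).itemsize  # 16 bytes per complex128
print(n * n * itemsize / 2**40)              # -> 16.0, i.e. 16 TiB per dense operator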

OK, I can see that I am trying to convert a QuTiP object into a NumPy array, but the object is so big that I run into a memory issue. My question is simple: is there any way to overcome this? Can I 'cut' the object into smaller pieces, convert them, and then put the pieces back together?

I really have no idea. Maybe I am not doing this in the optimal way, but I was working with much smaller matrices until this one and I didn't foresee this problem.


EDIT with the full code:

"""."""
import numpy as np
import tensorflow as tf
from qutip import tensor
from qutip import sigmax, sigmaz, sigmay
from qutip import coherent, coherent_dm, expect, Qobj, fidelity, hinton
from tqdm.auto import tqdm

#%load_ext autoreload
tf.keras.backend.set_floatx('float64') # Set float64 as the default

# Local paths:
local_path = "0_qst_master/cgan_tf_20qb/%s"
data_path = "0_qst_master/cgan_tf_20qb/data/%s"

# Reading projectors
projs_settings = np.loadtxt(data_path % 'measurement_settings.txt', dtype=str)

X = sigmax()
Y = sigmay()
Z = sigmaz()

m_ops = [] # measurement operators

def string_to_operator(basis):
    """Map a measurement-setting string to a list of single-qubit Pauli operators."""
    mat_real = []
    for setting in basis:
        if setting == 'X':
            mat_real.append(X)
        elif setting == '-Y':
            # NOTE: this branch can only match if `basis` is a sequence of
            # multi-character tokens; iterating a plain string yields single
            # characters, so '-Y' would never be seen.
            mat_real.append(-Y)
        elif setting == 'Y':
            mat_real.append(Y)
        elif setting == 'Z':
            mat_real.append(Z)
    return mat_real

for i in range(27):
    U = string_to_operator(projs_settings[i])
    U = tensor(U)  # tensor product of 20 single-qubit operators -> a 2**20 x 2**20 Qobj
    m_ops.append(U)

ops_numpy = [op.full() for op in m_ops] # convert the QuTiP Qobj to numpy arrays
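
If whatever consumes ops_numpy can accept SciPy sparse matrices instead of dense arrays, the densifying step can be skipped entirely. A minimal sketch, assuming QuTiP 4.x, where Qobj.data is already a scipy.sparse.csr_matrix:

import scipy.sparse as sp

# Tensor products of single-qubit Paulis have exactly one nonzero per row,
# so the CSR form of each 2**20 x 2**20 operator stays small (tens of MB).
ops_sparse = [sp.csr_matrix(op.data) for op in m_ops]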

Another EDIT:

The measurement_settings.txt file contains the following:

[screenshot of the measurement settings file]

  • Can you include the full code you are running? Including imports and how `m_ops` is defined – C.Nivs Jun 07 '23 at 19:54
  • Please include a minimal, complete, and verifiable example. This should be something where we can copy/paste and run your code if possible. As it stands, your object is just far too large to run, but if we can see how you are processing the data to get to where you are, we can tell you if there is a solution – C.Nivs Jun 07 '23 at 19:58
  • Yes, sure. There it is. I edited the question – Dimitri Jun 07 '23 at 20:01
  • I believe that everything that is needed to run the code is there in the edits – Dimitri Jun 07 '23 at 20:03
  • Awesome, thanks – C.Nivs Jun 07 '23 at 20:03
  • Are your `Qobj` objects sparse, i.e., do they typically contain a lot of zeros? If so, you may have a situation where the Qobj is memory-efficient because it isn't explicitly storing zero values, but the corresponding dense NumPy array takes up far more memory. – jjramsey Jun 07 '23 at 20:06
  • @jjramsey yes, actually it does contain a lot of zeros! – Dimitri Jun 07 '23 at 20:15
  • Common `numpy` users know nothing about `QuTiP`. What does this `full` method do? – hpaulj Jun 07 '23 at 20:54
  • I should insist on seeing the full error message, with traceback. But I went ahead and looked at the `QuTiP` docs. Looks like `op.data` is a `scipy.sparse` array (or matrix), and `full` returns `op.data.toarray()`, the dense (normal) numpy array. That dense array is normally much bigger than the sparse matrix, since it makes all those 0s explicit. So a memory error in this operation is not surprising. There isn't a fix. Just don't try to get these numpy arrays when the `op` shape is large. – hpaulj Jun 07 '23 at 21:14
  • @hpaulj a QuTiP object is commonly stored as a sparse matrix. The .full() method returns the full matrix. That is, it returns the non-sparse matrix – Dimitri Jun 07 '23 at 21:15
  • @hpaulj the problem is that, at this point, I need the numpy array for what I'm doing. Unfortunately. – Dimitri Jun 07 '23 at 21:17
  • Look at those dimensions: `1048576 * 1048576 * 8 / 1e9`, that's 8796.093022208 GB. For complex128, double that, which is the 16 TiB the error complains about. That's way too big for most computers, and all the more so if you try to make multiple arrays of that size. If you don't have memory to create that big an array, you also don't have memory to do anything with it. – hpaulj Jun 07 '23 at 21:18
  • yes, but I am trying to process it in batches. Maybe divide it into batches and convert it to a numpy array in pieces? But I am not sure how to do it (a sketch of this idea follows after the comments) – Dimitri Jun 07 '23 at 21:35
  • What makes you so sure that the `scipy.sparse` array won't work as a NumPy array? Judging from what https://docs.scipy.org/doc/scipy/reference/sparse.html says about an array interface, it probably should. – jjramsey Jun 12 '23 at 20:19
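
Following up on the batching idea from the comments: a hedged sketch, assuming QuTiP 4.x (where `op.data` is a `scipy.sparse.csr_matrix`), that densifies one row block at a time so the full 16 TiB array never exists in memory. `dense_row_blocks` and `process` are hypothetical names used for illustration:

import numpy as np

def dense_row_blocks(op, block_size=64):
    """Yield (start_row, dense_block) pairs, holding one dense block at a time."""
    csr = op.data  # scipy.sparse.csr_matrix in QuTiP 4.x
    n_rows = csr.shape[0]
    for start in range(0, n_rows, block_size):
        stop = min(start + block_size, n_rows)
        # 64 rows x 1048576 cols x 16 B (complex128) = 1 GiB per dense block
        yield start, csr[start:stop, :].toarray()

# Usage: consume each block, then let it be garbage-collected.
# for start, block in dense_row_blocks(m_ops[0]):
#     process(block)  # hypothetical downstream step

Note that stitching the pieces back together into one dense array would still need the full 16 TiB, so the downstream code has to work block by block.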

0 Answers