
I am having trouble with my Python code: my RAM runs out rapidly. The problem occurs when the function below is executed:

import gc
import torch

# for loop to calculate TVD (TVD() itself is defined elsewhere in my script)
def TVD_loop(test_i, test_dummy_i, nlayer, best_model):

    TVD_tensor = torch.zeros(test_i.size()[1], (nlayer+1), test_i.size()[0]).float()

    # replace every 0 in TVD_tensor with -2
    TVD_tensor = torch.where(TVD_tensor == 0.0, torch.tensor(-2.0), TVD_tensor)

    for m in range(test_i.size()[1]):

        gc.collect()

        input_ids = test_i[:,m]
        input_ids = torch.tensor(input_ids.tolist()).unsqueeze(0) 

        # NOTE: Hidden states are in torch.FloatTensor,
        #       (one for the output of each layer + the output of the embeddings)
        # jth layer
        for j in range(nlayer+1):

            del gc.garbage[:]
            gc.collect()

            for l in range(m * test_i.size()[0], (m+1) * test_i.size()[0]):

                del gc.garbage[:]
                gc.collect()

                tst_hidden_states = best_model(input_ids)[3][j][0, (test_i.size()[0] - 1), :]

                input_ids_dummy = test_dummy_i[:,l]
                input_ids_dummy = torch.tensor(input_ids_dummy.tolist()).unsqueeze(0) 

                tst_hidden_states_dummy = best_model(input_ids_dummy)[3][j][0, (test_i.size()[0] - 1), :]

                del input_ids_dummy
                del gc.garbage[:]
                gc.collect()

                # TVD_tensor[i,j,k] denotes the TVD calculated at
                # batch i, layer j, and dummy output k
                TVD_tensor[m,j,(l % (test_i.size()[0]))] = TVD(tst_hidden_states, tst_hidden_states_dummy)

                del tst_hidden_states
                del tst_hidden_states_dummy
                del gc.garbage[:]
                gc.collect()

                print('l={}, gc_get_count={}'.format(l,gc.get_count()))

            del gc.garbage[:]
            gc.collect()
            print('j={}, gc_get_count={}'.format(j,gc.get_count()))

        del gc.garbage[:]
        del input_ids
        gc.collect()

        print('m={}, gc_get_count={}'.format(m,gc.get_count()))

    return TVD_tensor      

In the code above, everything is fine when m=0, j=0, l=0, but once m=0, j=1, l=0 is reached, memory usage starts to accumulate rapidly. The lines tst_hidden_states = best_model(input_ids)[3][j][0, (test_i.size()[0] - 1), :] and tst_hidden_states_dummy = best_model(input_ids_dummy)[3][j][0, (test_i.size()[0] - 1), :] are where most of the memory is consumed. The gc.get_count() output is (1, 0, 0).
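One thing I am unsure about: every call to best_model above runs with autograd enabled, so each forward pass may build and retain a computation graph that gc.collect() cannot free. Below is a minimal sketch of what I mean; hidden_state_no_grad is a helper name made up for illustration, and it assumes (as in my function) that index 3 of the model output holds the per-layer hidden states:

```
import torch

def hidden_state_no_grad(model, input_ids, layer, pos):
    """Run one forward pass without building an autograd graph and
    return the hidden state of layer `layer` at token position `pos`."""
    model.eval()  # inference mode: disables dropout
    with torch.no_grad():  # no computation graph is built or kept alive
        hidden_states = model(input_ids)[3]  # tuple of per-layer hidden states
        # .clone() so the returned slice does not keep the full
        # hidden-state tensor alive through shared storage
        return hidden_states[layer][0, pos, :].clone()
```

Inside TVD_loop, the two model calls would then become something like hidden_state_no_grad(best_model, input_ids, j, test_i.size()[0] - 1), and the same for input_ids_dummy, but I am not sure whether this is the right fix.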

The error message is below:

Traceback (most recent call last):
  File "PhD_Code_Pub1_PennTreeBank_v6.py", line 615, in <module>
    TVD_tensor_penn = TVD_loop(test_penn_iter, test_dummy_penn, nlayer, best_model_ch2_penn)
  File "PhD_Code_Pub1_PennTreeBank_v6.py", line 514, in TVD_loop
    tst_hidden_states_dummy = best_model(input_ids_dummy)[3][j][0, (test_i.size()[0] - 1), :]
  File "/home/ec2-user/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ec2-user/anaconda3/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 655, in forward
    inputs_embeds=inputs_embeds)
  File "/home/ec2-user/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ec2-user/anaconda3/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 460, in forward
    head_mask=head_mask[i])
  File "/home/ec2-user/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ec2-user/anaconda3/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 232, in forward
    head_mask=head_mask)
  File "/home/ec2-user/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ec2-user/anaconda3/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 193, in forward
    attn_outputs = self._attn(query, key, value, attention_mask, head_mask)
  File "/home/ec2-user/anaconda3/lib/python3.6/site-packages/transformers/modeling_gpt2.py", line 147, in _attn
    w = w / math.sqrt(v.size(-1))
RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 50331648 bytes. Error code 12 (Cannot allocate memory)
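For what it is worth, I also noticed that tst_hidden_states depends only on m and j, not on l, so it could probably be computed once per layer outside the innermost loop, roughly halving the number of forward passes. A sketch of the restructured inner loops, reusing the names from my function above (I have not verified that this alone stops the memory growth):

```
# sketch: hoist the non-dummy forward pass out of the l loop,
# and run both forward passes without autograd
for j in range(nlayer + 1):
    with torch.no_grad():
        tst_hidden_states = best_model(input_ids)[3][j][0, (test_i.size()[0] - 1), :]

    for l in range(m * test_i.size()[0], (m + 1) * test_i.size()[0]):
        input_ids_dummy = torch.tensor(test_dummy_i[:, l].tolist()).unsqueeze(0)
        with torch.no_grad():
            tst_hidden_states_dummy = best_model(input_ids_dummy)[3][j][0, (test_i.size()[0] - 1), :]
        # same indexing as in TVD_loop above
        TVD_tensor[m, j, l % test_i.size()[0]] = TVD(tst_hidden_states, tst_hidden_states_dummy)
```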

How should I fix my code?

Thank you,

chico0913
  • Does your RAM run out, or does your application use the RAM it has available to it? – Sayse Jan 21 '20 at 18:02
  • Hello, my RAM runs out. The function uses up all my memory. Each time ```tst_hidden_states = best_model(input_ids)[3][j][0, (test_i.size()[0] - 1), :]``` and ```tst_hidden_states_dummy = best_model(input_ids_dummy)[3][j][0, (test_i.size()[0] - 1), :]``` are executed, ~1 GB of RAM is gone. – chico0913 Jan 21 '20 at 18:03
  • But does your application crash or return to normal post running this function? – Sayse Jan 21 '20 at 18:08
  • Hello, my program crashes after running this function. It stops completely with an error message. – chico0913 Jan 21 '20 at 18:09
  • What do you mean by saying "return to normal post running this function"? – chico0913 Jan 21 '20 at 18:17
  • @Sayse Hello, I added the error output to my original post. Thanks, – chico0913 Jan 21 '20 at 18:42

0 Answers