
I have an .odb file, named plate2.odb, that I want to extract the strain data from. To do this I wrote the simple script below, which loops through the field output E (strain) for each element and appends the values to a list.

from odbAccess import openOdb
import pickle

# import database
odbname = 'plate2'
path = './'
myodbpath = path + odbname + '.odb'
odb = openOdb(myodbpath)

# load the strain values into a list
E = []
for i in range(1000):
    E.append(odb.steps['Step-1'].frames[0].fieldOutputs['E'].values[i].data)   

# save the data
with open("mises.pickle", "wb") as input_file:
    pickle.dump(E, input_file)

odb.close()

The issue is that the for loop that loads the strain values into the list takes a long time (35 seconds for 1000 elements). At this rate (0.035 seconds per query), it would take about 2 hours to extract the data for my model with 200,000 elements. Why is this taking so long, and how can I speed it up?

If I run a single strain query outside of any Python loop it takes 0.04 seconds, so I know the problem is not the Python loop itself.

Austin Downey

3 Answers


I found out that I was re-traversing the odb object hierarchy (the step, frame, and field output dictionaries) every time I requested a strain value. To fix the problem, I saved the field output object to a temporary variable outside the loop. My updated code, which runs in a fraction of a second, is below.

from odbAccess import openOdb
import pickle

# import database
odbname = 'plate2'
path = './'
myodbpath = path + odbname + '.odb'
odb = openOdb(myodbpath)

# load the strain values into a list
E = []
EE = odb.steps['Step-1'].frames[0].fieldOutputs['E']
for i in range(1000):
    E.append(EE.values[i].data)  

# save the data
with open("mises.pickle", "wb") as output_file:
    pickle.dump(E, output_file)

odb.close()
Austin Downey
  • note you can write this even more compactly using a list comprehension `E=[v.data for v in EE.values]` (maybe a little performance gain too) – agentp Oct 05 '17 at 01:09
  • Nice. BTW, this technique (also mentioned in the Abaqus Scripting User's Manual under "Creating objects to hold temporary variables") can be used in any Python script where you would like to avoid repeated reconstruction of a sequence of objects. – Matt P Oct 05 '17 at 15:59
  • Thanks Austin, that is a great technique. After saving the data as a pickle, the next step is to open and read the pickle file. Can you post an example? I am very interested in it. – roudan Aug 22 '20 at 17:48
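
A minimal sketch of reading the saved data back, as asked in the last comment, using only the standard pickle API (the filename matches the one written above):

import pickle

# load the list of strain values saved by the script above
with open("mises.pickle", "rb") as input_file:
    E = pickle.load(input_file)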

I would use bulkDataBlocks here. It is much faster than going through the values sequence, and using pickle is usually slow and not necessary. Take a look at the FieldBulkData object in the C++ manual (http://abaqus.software.polimi.it/v6.14/books/ker/default.htm). The Python method is the same, but at least in Abaqus 6.14 it is not documented in the Python Scripting Reference (it has been available since 6.13).

For example:

from odbAccess import openOdb
import numpy as np

# import database
odbname = 'plate2'
path = './'
myodbpath = path + odbname + '.odb'
odb = openOdb(myodbpath)

# load the strain values into a numpy array
EE = odb.steps['Step-1'].frames[0].fieldOutputs['E']

# get a numpy array with your data 
# Not using np.copy here may also work, but I have sometimes encountered weird bugs without it
Strains = np.copy(EE.bulkDataBlocks[0].data)

# save the data
np.save('OutputPath', Strains)

odb.close()

Keep in mind that if you have multiple element types, there may be more than one bulkDataBlock; see the sketch below.
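
For example, a minimal sketch that gathers every block (this assumes all blocks store the same number of strain components per row, so the arrays can be stacked):

from odbAccess import openOdb
import numpy as np

odb = openOdb('./plate2.odb')
EE = odb.steps['Step-1'].frames[0].fieldOutputs['E']

# copy each block's data array and stack them into one (nValues, nComponents) array
Strains = np.vstack([np.copy(block.data) for block in EE.bulkDataBlocks])

odb.close()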

max9111
  • I've encountered the strange performance of `bulkDataBlocks` as well, I'll see if `np.copy` improves things. – Daniel F Oct 25 '17 at 06:54
  • I encountered no performance problems when using the bulkDataBlocks method; it should be much faster than the values method. I did encounter problems when reshaping or slicing the numpy array returned by bulkDataBlocks, but with copying the array everything should be OK. – max9111 Oct 25 '17 at 12:17
  • I meant more the random results from manipulating the `bulkDataBlocks`. I hadn't thought of just copying the blocks elsewhere in memory. – Daniel F Oct 26 '17 at 05:34

A little late to the party, but I find using operator.attrgetter to be much faster than an explicit for loop in this case.

So instead of @AustinDowney's version:

E = []
EE = odb.steps['Step-1'].frames[0].fieldOutputs['E']
for i in range(1000):
    E.append(EE.values[i].data) 

do this:

from operator import attrgetter
EE = odb.steps['Step-1'].frames[0].fieldOutputs['E']
E = map(attrgetter('data'), EE.values)
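# note: in Abaqus's Python 2 interpreter, map returns a list here;
# on Python 3 you would wrap it in list(...) to get a list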

This is about the same speed as a list comprehension, but is much better if you have multiple attributes you want to extract at once (say coordinates or element labels).
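
For instance, a minimal sketch pulling two attributes per value in one pass (assuming the field values expose the standard `data` and `elementLabel` attributes):

from operator import attrgetter

EE = odb.steps['Step-1'].frames[0].fieldOutputs['E']
get_fields = attrgetter('data', 'elementLabel')
# each entry is a (data, elementLabel) tuple
pairs = map(get_fields, EE.values)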

Daniel F