
I have a whole bunch of code interacting with HDF5 files through h5py. The code has been working for years. Recently, with a change in Python environments, I have started receiving this new error message:

IOError: Unable to open file (unable to lock file, errno = 11, error message = 'Resource temporarily unavailable')

What is interesting is that the error occurs intermittently in some places and persistently in others. In the places where it occurs routinely, I have looked at my code and confirmed that no other h5py instance is connected to the file and that the last connection was properly flushed and closed. Again, this was all working fine prior to the environment change.
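
Roughly, the open/flush/close pattern in question looks like the sketch below (the file name and attribute are placeholders for illustration, not my real code):

    import h5py

    f = h5py.File("results.h5", "a")    # "results.h5" is a placeholder name
    f.attrs["last_run"] = "2018-08-01"  # write something
    f.flush()                           # push buffers to disk
    f.close()                           # this should release the HDF5 file lock

    # or, equivalently, let the context manager close it:
    with h5py.File("results.h5", "r") as f:
        print(f.attrs["last_run"])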

Here are the relevant packages from my conda environment:

    h5py    2.8.0     py27h470a237_0    conda-forge
    hdf4    4.2.13    0                 conda-forge
    hdf5    1.10.1    2                 conda-forge

user2611761

5 Answers


In my version of this issue, the file was never closed because of a failure inside an obscure method. The interesting thing is that releasing the lock sometimes only took a restart of IPython, and other times took a full reboot.
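
A rough sketch of how to hunt down a handle that was left open, assuming everything is running inside one IPython process: walk the interpreter's live objects and close any h5py.File instances that are still open.

    import gc
    import h5py

    # Close any h5py File objects still alive in the interpreter;
    # this releases the HDF5 file lock without restarting IPython.
    for obj in gc.get_objects():
        try:
            if isinstance(obj, h5py.File) and obj.id.valid:
                print("closing", obj.filename)
                obj.close()
        except Exception:
            pass  # some objects cannot be inspected; skip them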

Nike

With h5py.File(), the same .h5 file can be opened for reading ("r") multiple times. However, h5py does not support access from more than a single thread, and you can see bad data with multiple concurrent readers.
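
A minimal sketch of the read-only case, assuming a hypothetical data.h5 containing a dataset called vals:

    import h5py

    # Two read-only handles to the same file can coexist;
    # trying to open the same file for writing at the same time
    # would typically trip the file lock instead.
    with h5py.File("data.h5", "r") as f1, h5py.File("data.h5", "r") as f2:
        print(f1["vals"][:5])
        print(f2["vals"][:5])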

Machuck

I had another process running that I had not realized was still alive. Here is how I solved my problem:

  1. Used ps aux | grep myapp.py to find the ID of the process that was running myapp.py.
  2. Killed that process with the kill command (a scripted version is sketched after this list).
  3. Ran the script again.
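
A rough Python equivalent of steps 1 and 2, assuming a Linux machine with pgrep available and a script actually called myapp.py (both are placeholders here):

    import os
    import signal
    import subprocess

    # Find PIDs whose command line mentions myapp.py and send them SIGTERM.
    # check_output raises CalledProcessError if nothing matches.
    pids = subprocess.check_output(["pgrep", "-f", "myapp.py"]).split()
    for pid in pids:
        os.kill(int(pid), signal.SIGTERM)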
Matheus Araujo

Similar to the other answers, I had already opened the file, but in my case it was open in a separate HDF5 viewer.

Florian Brucker

In my case, I used multiprocessing to parallelise my data processing, and the file handle was passed to the multiprocessing pool. As a result, even if I called close(), the file was not actually closed until all the subprocesses spawned by the pool had terminated.

Remember to call close() and join() on the pool if you are using multiprocessing.

    import multiprocessing

    pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
    task_iter = pool.imap(...)  # <- the open file handle is used inside the pool!
    ...
    pool.close()  # stop accepting new tasks
    pool.join()   # wait for the workers to exit; only then is the file really closed
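
A hedged alternative sketch, not the code above: open the file inside each worker instead of sharing one handle, so nothing outlives the pool (the file name, dataset name, and worker function are placeholders):

    import multiprocessing
    import h5py

    def process_chunk(args):
        path, index = args
        # Each worker opens its own short-lived read-only handle,
        # so no handle is left holding the lock when the pool exits.
        with h5py.File(path, "r") as f:
            return float(f["data"][index])

    if __name__ == "__main__":
        tasks = [("results.h5", i) for i in range(100)]
        pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
        try:
            results = list(pool.imap(process_chunk, tasks))
        finally:
            pool.close()
            pool.join()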
Sean