
I'm testing a connection to my Firebase Realtime Database with Python 3.8. I have two scripts: one is wdata (write data) and the other is rdata (read data). wdata.py is:

from firebase import firebase

firebase = firebase.FirebaseApplication("https://test-282f7.firebaseio.com/", None)

datos = {
    'id': '99',
    'primer_sensor': '1111',
    'segundo_sensor': '512'
}

resultado = firebase.post('/tutorial_firebase/datos_post', datos)
# get() expects a node name (or None for the whole node) as its second
# argument, not the data dict
read = firebase.get('/tutorial_firebase/datos_post', None)

This script raises the error shown below, but it does insert the "datos" values into Firebase.

rdata.py is:

from firebase import firebase

firebase = firebase.FirebaseApplication("https://test-282f7.firebaseio.com/", None)
# the second argument is a node name; None reads the whole node
lectura = firebase.get('/tutorial_firebase/datos_post', None)
print(lectura)

This code also returns an error. In both scripts, the error is:

/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 6 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '

Can anyone tell me where the error is and how I can fix it?

p.s.:

My Python interpreter is 3.8.2 (with 3.7 I can install firebase, but importing it returns "ModuleNotFoundError"). I'm on macOS Catalina 10.15.7. I tried running the scripts from both VS Code and MacVim, but the result is the same.

Thanks in advance!

Vazzattacc
  • This issue seems to be reported on the official Python bugs site: [link](https://bugs.python.org/issue45209) – Genarito Apr 08 '22 at 22:32

3 Answers


I had the same problem when doing deep learning, and it came from loading too much data into memory. Make sure you don't try to load more data into your RAM than its capacity allows.
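
For example, in a PyTorch-style training setup (PyTorch and these names are assumptions for illustration, not from the original answer), a smaller DataLoader batch_size bounds how many samples sit in memory at each step:

# Hedged sketch: reduce batch_size so less data is resident in RAM per step.
# PyTorch is assumed purely for illustration; the idea applies to any framework.
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(100_000, 128))  # stand-in for a large dataset

# num_workers spawns helper processes, which is where leaked-semaphore
# warnings tend to surface when memory runs out
loader = DataLoader(dataset, batch_size=32, num_workers=2)

for (batch,) in loader:
    pass  # the training step would go here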

Yann POURCENOUX
  • Thanks, I just reduced the batch size and tried again, and it worked. – TripleAntigen Dec 20 '20 at 00:02
  • In my case, this was caused by using `multiprocessing.Queue` without a reasonable maximum size. Too much data was being put into the queue at once. Instantiate it with `multiprocessing.Queue(1000)` or however many items you want in there (see the sketch after these comments). – slhck Mar 24 '22 at 10:04
  • I get this error when using the stanza package and doing nlp = stanza.Pipeline(). I don't think it's due to memory issues. – SriK Apr 19 '23 at 19:26
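
A minimal sketch of the bounded-queue fix from slhck's comment above (the producer function and the sizes are illustrative, not from the original post):

# Hedged sketch: a maxsize makes put() block instead of buffering
# unlimited items in shared memory.
import multiprocessing

def producer(queue):
    for i in range(10_000):
        queue.put(i)  # blocks once 1000 items are pending, bounding memory use
    queue.put(None)   # sentinel telling the consumer to stop

if __name__ == '__main__':
    queue = multiprocessing.Queue(1000)  # bounded queue, as the comment suggests
    worker = multiprocessing.Process(target=producer, args=(queue,))
    worker.start()
    while (item := queue.get()) is not None:
        pass  # consume items here
    worker.join()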

If this problem occurs during training (deep learning), it's because of RAM capacity. Use a smaller value for the -batch parameter.


I encountered this error when working with Python's concurrent.futures library, using a ProcessPoolExecutor to process large datasets in parallel.

At first, all my code was in a single file (since it needed to be deployed as a function), and I was protecting the entry point via if __name__ == '__main__':, as is best practice according to the docs.

Once I started testing different configurations for threading and multiprocessing, I created a separate main.py file to abstract the workflows I was prototyping.

Here is where I forgot to protect main.py with if __name__ == '__main__': and received the following error:

RuntimeError:
        An attempt has been made to start a new process before the
        current process has finished its bootstrapping phase.

        This probably means that you are not using fork to start your
        child processes and you have forgotten to use the proper idiom
        in the main module:

            if __name__ == '__main__':
                freeze_support()
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.

I believe this spawned too many processes, which ended up running multiple copies of my dataset through the script. That eventually caused the original error, which also seems to happen to others running deep learning workloads, where the suggested fix is to reduce batch sizes to avoid exceeding memory.

The overall lesson here: when using process pools, be sure to protect them as advised by the docs.
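
A minimal sketch of the guarded pattern (process_chunk and the sample data are placeholders, not the author's actual workload):

# Hedged sketch of a properly guarded ProcessPoolExecutor.
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    return sum(chunk)  # stand-in for the real per-chunk work

def main():
    chunks = [list(range(i, i + 100)) for i in range(0, 1_000, 100)]
    with ProcessPoolExecutor(max_workers=4) as executor:
        results = list(executor.map(process_chunk, chunks))
    print(len(results))

if __name__ == '__main__':  # guard so spawned workers don't re-run the pool setup
    main()

Without the guard, each spawned worker re-imports the module, tries to create its own pool, and triggers the bootstrapping error above.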

stelloprint