
EDIT 3: See last example at the end.

I need a while loop doing continuous send and return operations over a USB connection. During this continuous operation I need (among other things in my main script) a few identical, isolated send/return operations on that same USB connection. This seems to require multiprocessing and some tweaking.

I want to use the following workaround with the multiprocessing library:

  1. Put the continuous send/return operation on a separate worker with a pool (apply_async).
  2. Put this worker on "hold" when I perform the isolated send/return operation (using clear()).
  3. Immediately after the isolated send/return operation, resume the continuous send/return (using set()).
  4. Stop the continuous send/return when I reach the end of the main script (I have no solution for this yet; it should be something like x.stop(), since terminate() won't do).
  5. Get some return value from the stopped process (using get()).

I tried a couple of things already, but I just can't exit the while loop via a command from the main process.

    import multiprocessing
    import time

    def setup(event):
        # pool initializer: makes the event visible inside the worker process
        global unpaused
        unpaused = event

    class func:
        def __init__(self):
            self.finished = False

        def stop(self):
            self.finished = True

        def myFunction(self, arg):
            i = 0
            s = []
            while not self.finished:
                unpaused.wait()  # blocks while the event is cleared
                print(arg + i)
                s.append(arg + i)
                i += 1
                time.sleep(1)
            return s

    if __name__ == "__main__":
        x = func()
        event = multiprocessing.Event()  # initially unset, so the worker is paused at first
        pool = multiprocessing.Pool(1, setup, (event,))
        result = pool.apply_async(x.myFunction, (10,))
        print('We unpause for 2 sec')
        event.set()    # unpause
        time.sleep(2)
        print('We pause for 2 sec')
        event.clear()  # pause
        time.sleep(2)
        print('We unpause for 2 sec')
        event.set()    # unpause
        time.sleep(2)
        print('Now we try to terminate in 2 sec')
        time.sleep(2)
        x.stop()  # this is where it should stop, but it doesn't
        return_val = result.get()
        print('get worked with ' + str(return_val))

Can someone point me in the right direction? As seen, this won't stop with x.stop(). Global variables do not work either.

Thanks in advance.

EDIT:

As suggested, I tried to put the multiprocessing into a separate object. Is this done by putting the functions in a class, like in my example below?

import multiprocessing
import time

class func(object):
    def __init__(self):
        self.event = multiprocessing.Event()  # initially unset, so workers will be paused at first
        self.pool = multiprocessing.Pool(1, self.setup, (self.event,))

    def setup(self):
        global unpaused
        unpaused = self.event

    def stop(self):
        self.finished = True

    def resume(self):
        self.event.set()  # unpause

    def hold(self):
        self.event.clear()  # pause

    def run(self, arg):
        self.pool.apply_async(self.myFunction, (arg,))

    def myFunction(self, arg):
        i = 0
        s = []
        self.finished = False
        while not self.finished:
            unpaused.wait()
            print(arg + i)
            s.append(arg + i)
            i += 1
            time.sleep(1)
        return s

if __name__ == "__main__":
    x = func()
    result = x.run(10)
    print('We unpause for 2 sec')
    x.resume()  # unpause
    time.sleep(2)
    print('We pause for 2 sec')
    x.hold()  # pause
    time.sleep(2)
    print('We unpause for 2 sec')
    x.resume()  # unpause
    time.sleep(2)
    print('Now we try to terminate in 2 sec')
    time.sleep(2)
    x.stop()
    return_val = result.get()
    print('get worked with ' + str(return_val))

I added hold and resume functions and put the setup function inside the class. But the lower example won't even run the function anymore. What a complex little problem; I am puzzled by this.

EDIT 2: I tried a workaround with what I have found so far. Big trouble came in while using the multiprocessing.pool library; it is not straightforward to use with the USB connection... I produced a mediocre workaround below:

from multiprocessing.pool import ThreadPool
import time

class switch:
    state = 1

s1 = switch()

def myFunction(arg):
    i = 0
    while s1.state in (1, 2, 3):
        if s1.state == 1:
            print(arg + i)
            s.append(arg + i)
            i += 1
            time.sleep(1)
        elif s1.state == 2:
            print('we entered snippet mode (state 2)')
            time.sleep(1)
            x = s
            return x
            pool.close()  # unreachable: these lines sit after the return
            pool.join()
        elif s1.state == 3:
            while s1.state == 3:
                time.sleep(1)
                print('holding (state 3)')
    return s


if __name__ == "__main__":
    global s
    s = []

    print('we set the state in the class on top to ' + str(s1.state))
    pool = ThreadPool(processes=1)
    async_result = pool.apply_async(myFunction, (10,))
    print('in 5 sec we switch mode sir, buckle up')
    time.sleep(5)
    s1.state = 2
    print('we switched for a snippet which is')
    snippet = async_result.get()
    print(str(snippet[-1]) + ' this snippet comes from main')
    time.sleep(1)
    print('now we return to see the full list in the end')
    s1.state = 1
    async_result = pool.apply_async(myFunction, (10,))
    print('in 5 sec we hold it')
    time.sleep(5)
    s1.state = 3
    print('in 5 sec we exit')
    time.sleep(5)
    s1.state = 0
    return_val = async_result.get()
    print('Success if you see a list of numbers ' + str(return_val))

EDIT 3:

from multiprocessing.pool import ThreadPool
import time

class switch:
    state = 1

s1 = switch()

def myFunction(arg):
    i = 0
    while s1.state in (1, 2):
        if s1.state == 1:
            print(arg + i)
            s.append(arg + i)
            i += 1
            time.sleep(1)
        elif s1.state == 2:
            print('we entered snippet mode (state 2)')
            time.sleep(1)
            x = s
            return x
            pool.close()  # these are not relevant, I guess (unreachable after the return)
            pool.join()

    return s


if __name__ == "__main__":
    global s
    s = []

    print('we set the state in the class on top to ' + str(s1.state))
    pool = ThreadPool(processes=1)
    async_result = pool.apply_async(myFunction, (10,))
    print('in 5 sec we switch mode sir, buckle up')
    time.sleep(5)
    s1.state = 2
    snippet = async_result.get()
    print(str(snippet[-1]) + ' this snippet comes from main')
    time.sleep(1)
    print('now we return to see the full list in the end')
    s1.state = 1
    async_result = pool.apply_async(myFunction, (10,))
    print('in 5 sec we exit')
    time.sleep(5)
    s1.state = 0
    return_val = async_result.get()
    print('Success if you see a list of numbers ' + str(return_val))

Well, this is what I have come up with... Not great, not terrible. Maybe a bit more on the terrible side (:

I hate that I have to re-call pool.apply_async(myFunction, (10,)) after I grab a single piece of data. Currently only ThreadPool works with no further code changes in my actual script!
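One way around the re-submit, sketched under the assumption that a queue.Queue can hand the snapshot out of the still-running worker (the snippets queue and its naming are illustrative, not from my actual script; this only works because ThreadPool shares memory between threads):

from multiprocessing.pool import ThreadPool
import queue
import time

class switch:
    state = 1

s1 = switch()
snippets = queue.Queue()  # channel for handing snapshots out of the worker

def myFunction(arg):
    i = 0
    s = []
    while s1.state != 0:
        if s1.state == 2:
            snippets.put(list(s))  # hand out a copy, keep the loop alive
            s1.state = 1           # drop back to continuous mode
        print(arg + i)
        s.append(arg + i)
        i += 1
        time.sleep(1)
    return s

if __name__ == "__main__":
    pool = ThreadPool(processes=1)
    async_result = pool.apply_async(myFunction, (10,))
    time.sleep(5)
    s1.state = 2               # request a snapshot
    print(snippets.get()[-1])  # blocks until the worker supplies one
    time.sleep(3)
    s1.state = 0               # leave the loop for good
    print(async_result.get())  # full list, from a single apply_async call

As one of the commenters points out below, s1.state is mutated from two threads here, so a threading.Lock around it would be safer; this is only a sketch.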

  • You would have to use a separate object to signal it to stop; you could just use a second event, for example. But I don't understand why you wouldn't just put the extra process on the same event loop as the continuous one, which would solve the problem of preventing them from running concurrently directly? – Paul Becotte Aug 03 '20 at 18:09
  • You could ditch the pool completely. Instead, have 3 queues for the subprocess: one for regular commands, one for high-priority commands, and one for return data. In the subprocess, create a priority queue where the main thread will wait. Add 2 threads for the 2 receive queues, and have the thread pulling the high-priority stuff make high-priority messages for the priority queue. Now the main thread will always pull stuff from the priority side first. – tdelaney Aug 03 '20 at 18:32
  • @PaulBecotte When I run the continuous operation via the while loop, the list (or array) gets bigger and bigger with data. The isolated operation needs some data from this dynamic list; specifically, the most recently added data at the very moment I call it. But this list (generated by the continuous operation) is not accessible live. – symbolinsight Aug 04 '20 at 08:53
  • @tdelaney The main thread has only isolated calls (like once a minute). The extra thread calls every 0.2 s for data. The main thread shall not wait at all. With queues it works like a stack where the bottom item gets processed until it's done, right? So I assume this would not work here. – symbolinsight Aug 04 '20 at 08:57
  • Needing up-to-date data is even more reason to put the whole thing on one event loop; async and requiring the most recent data are not a good match. – Paul Becotte Aug 04 '20 at 18:01
  • @PaulBecotte I figured that none of my USB operations work with the plain multiprocessing library alone (without more tweaking, of course), only with the ThreadPool feature. With ThreadPool, I figured, the .Event() or Queue() stuff does not work as it normally would. – symbolinsight Aug 05 '20 at 08:41
  • The reason this is working is that threads offer the ability to mutate shared state. Multiprocessing does not: if that were a process pool, the "s1" inside myFunction would be a copy, not the same object. So this works... but shared mutable state is a recipe for pain and heartbreak; you should use a lock or a semaphore around that object to ensure the state isn't changing unexpectedly on you. – Paul Becotte Aug 06 '20 at 17:41
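For illustration, a minimal sketch of the second-event suggestion from the first comment, applied to the question's first example (the stopped/stop_event names and the ordering of the final set() calls are illustrative choices, not from the original post):

import multiprocessing
import time

def setup(unpause_event, stop_event):
    # pool initializer: makes both events visible inside the worker process
    global unpaused, stopped
    unpaused = unpause_event
    stopped = stop_event

def myFunction(arg):
    i = 0
    s = []
    while not stopped.is_set():  # a shared Event works across processes; a plain attribute does not
        unpaused.wait()
        s.append(arg + i)
        i += 1
        time.sleep(1)
    return s

if __name__ == "__main__":
    unpause_event = multiprocessing.Event()
    stop_event = multiprocessing.Event()
    pool = multiprocessing.Pool(1, setup, (unpause_event, stop_event))
    result = pool.apply_async(myFunction, (10,))
    unpause_event.set()    # run continuously
    time.sleep(2)
    unpause_event.clear()  # hold
    time.sleep(2)
    unpause_event.set()    # make sure the worker is not stuck in wait() ...
    stop_event.set()       # ... then signal it to exit the loop
    print(result.get())

The set() before stop_event.set() matters: a worker still blocked in unpaused.wait() would otherwise never reach the while condition again.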

1 Answer


In situations where I need a process to run continuously while occasionally doing other things, I like to use asyncio. This is a rough draft of how I would approach it:

import asyncio


class MyObject:
    def __init__(self):
        self.mydatastructure = []
        self.finished = False

    def get_data(self):
        # placeholder (not in the original draft): stands in for one
        # continuous send/return operation on the USB connection
        return len(self.mydatastructure)

    def dotimedtask(self, data):
        # placeholder (not in the original draft): stands in for the
        # isolated send/return operation; it sees the freshest data
        print('latest:', data[-1] if data else None)

    async def main_loop(self):
        while not self.finished:
            new_result = self.get_data()
            self.mydatastructure.append(new_result)
            await asyncio.sleep(0)  # yield so the other coroutine gets a turn

    async def timed_loop(self):
        while not self.finished:
            await asyncio.sleep(2)
            self.dotimedtask(self.mydatastructure)

    async def run(self):
        await asyncio.gather(self.main_loop(), self.timed_loop())


asyncio.run(MyObject().run())

Only one coroutine will be running at a time, with the timed one being scheduled once every 2 seconds. It will always see the data from the most recent continuous execution. You could also do things like keep a connection open on the object. Depending on your requirements (is it a 2-second interval, or once every other second no matter how long the task takes), there are library packages to make the scheduling a bit more elegant.
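For instance, one way to make the draft above terminate cleanly would be to replace timed_loop with a version that flips self.finished after a cutoff (the three-run limit and the runs counter are arbitrary additions, just to make the example end):

    async def timed_loop(self):
        runs = 0
        while not self.finished:
            await asyncio.sleep(2)
            self.dotimedtask(self.mydatastructure)
            runs += 1
            if runs >= 3:
                self.finished = True  # main_loop sees this too, so run() returns

Because both loops check the same flag, setting it once ends the whole gather() and asyncio.run returns.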

Paul Becotte