
I have a program that is structured something like this:

from twisted.internet import reactor
from twisted.internet.task import LoopingCall

def main():
    LoopingCall(f1).start(1)
    LoopingCall(f2).start(10)
    reactor.run()

f1 has some deferreds that deal with some blocking operations. Something like this:

def f1():
    deferred = some_blocking_operation()
    deferred.addCallback(do_something_with_blocking_operation_result)
    deferred.addErrback(do_something_in_case_of_error)

f2 does some logging. It has no twisted code in it.

I would expect f2 to log every 10 seconds, regardless of what happens in f1.

However, this is not the case. f2 is called at wildly varying intervals, anywhere from 2 to 50 seconds apart.

Why is this happening?
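Here is a minimal, stdlib-only sketch of what I suspect is going on. It uses asyncio instead of Twisted (both run their callbacks on a single loop thread), and all names and timings are made up for the demo: a blocking `time.sleep()` inside one periodic task stretches the interval of the other.

```python
import asyncio
import time

async def blocking_like_f1():
    """Periodic task whose 'work' blocks the event-loop thread."""
    while True:
        time.sleep(0.3)          # blocking call: nothing else can run meanwhile
        await asyncio.sleep(0.1)

async def logging_like_f2(gaps):
    """Periodic task that should tick every 0.2 s; records the actual gaps."""
    last = time.monotonic()
    while True:
        await asyncio.sleep(0.2)
        now = time.monotonic()
        gaps.append(now - last)
        last = now

async def demo():
    gaps = []
    tasks = [asyncio.ensure_future(blocking_like_f1()),
             asyncio.ensure_future(logging_like_f2(gaps))]
    await asyncio.sleep(1.5)
    for t in tasks:
        t.cancel()
    return gaps

gaps = asyncio.run(demo())
# The gaps drift well past the nominal 0.2 s whenever the loop is blocked.
print(round(max(gaps), 2))
```

If the same thing applies to Twisted, any call that blocks inside f1's callback chain would hold up the reactor and delay f2.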

EDIT 1:

By request, the f2 code:

def f2(self):
    logging.info("Last log: " + str(time.time() - self.last_log_time))
    self.last_log_time = time.time()

f2 is a method of an object, but that object is just a helper; it doesn't actually perform operations. The object f2 belongs to is more of a data holder.

EDIT 2:

So, the logging interval (f2) is directly related to the time required by f1. If f1 takes 30 seconds from start to finish, the time between logs (f2) will be 30 seconds as well. Shouldn't f2 be called while f1 is waiting for some_blocking_operation()?

Addendum to Edit 2:

The problem is clearly a result of f1, as the looping call for f2 works properly (logs every 10 seconds) if I comment out the looping call to f1.

I will add a more detailed sketch of f1 to better illustrate what I'm doing.

EDIT 3:

This is a more detailed sketch of what's going on in f1.

class c1():
    def __init__(self):
        self.helper_object = c2()  # f2 is part of c2, it is a helper function in a helper object

    # I timed the read time, it's in the milliseconds, something like 1.5436354746e for example.
    def read_from_pipe(self):
        data = ""
        try:
            while True:
                data += self.input_pipe.read(1)
        except IOError:
            pass
        return data

    # I timed this as well, it's also on the order of milliseconds.
    def write_to_pipe(self, to_write):
        self.output_pipe.write(to_write)

    def success(self, result):
        self.write_to_pipe("Good")

    def fail(self, reason):
        self.write_to_pipe("Bad")

    def process_item(self, item):
        deferred = some_blocking_operation_with_item(item)
        deferred.addCallback(do_another_blocking_operation)  # This appears to be the longest in terms of time; however, it's still 1 or 2 seconds, not 80 (the longest gap between f2 calls so far was 83)
        deferred.addCallback(self.success)
        deferred.addErrback(self.fail)

    def schedule_input_for_processing(self, input):
        for item in input:  # There are no more than 50 items at a time
            self.process_item(item)

    def f1(self):
        input = self.read_from_pipe()  # the read is non-blocking
        self.schedule_input_for_processing(input)

The workflow is something like this:

  1. Get input from the pipe (the read is non-blocking, so it shouldn't affect f2)
  2. Say the input is a list. Do something with each element of the list.
  3. The "something" done to each element of the list is a blocking operation. However, it's a rather short one, usually no longer than 0.5 to 1 second.
  4. Upon completion of that operation, if no error occurred, fire the second callback, which does another blocking operation. This usually takes 1 to 2 seconds.
  5. Write to output pipe

Each individual blocking operation takes a little time. I am starting to suspect it's a compound effect: every input has 50 items, so if each item needs 1 to 2 seconds to finish, the logging gaps I'm seeing would start to make sense.
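A quick back-of-the-envelope check of that compound-effect theory, using the per-item timings from above:

```python
items = 50                               # items per input batch (measured above)
per_item_low, per_item_high = 1.0, 2.0   # seconds per item (measured above)

batch_low = items * per_item_low
batch_high = items * per_item_high
print(batch_low, batch_high)  # 50.0 100.0 -- the observed 83 s gap falls in this range
```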

However, my question remains, shouldn't f2() fire while the items in input are performing blocking operations?

EDIT 4:

Ok, so I reverted to a version that I know worked; the callback used to fire every 10 seconds. Now it doesn't. This is really strange. I think the size of the items in the input might have something to do with this.

RandomGuyqwert