
I have modified the PyModbus Async Asyncio Client Example and created a program (see below) to copy coils and registers from one PLC to another. There is a main thread and a thread for each node (PLC). The modified code works reasonably well, but it turned out that the code runs considerably slower when I configure two pairs of source/destination nodes instead of one pair, while copying the same total number of Modbus packets.

Copying 140 frames from N1 to N2 is much faster (using three threads: main, N1, N2) than copying 70 frames from N1 to N2 plus another 70 frames from N3 to N4 (using five threads: main, N1, N2, N3, N4).

I expected the configuration with two pairs to run faster. What should I change, or why is my expectation incorrect? Thanks!

    def run_with_already_running_loop(self):

        UTILS.LOGGER.info("Running Async client with asyncio loop already started")
        UTILS.LOGGER.info("------------------------------------------------------")

        def done(future):
            UTILS.LOGGER.info("Done !!!")

        def start_loop(loop):
            """Run the given event loop forever in its own thread."""
            asyncio.set_event_loop(loop)
            loop.run_forever()

        while True:
            for hostdef in self.M_HostList:
                if hostdef.host is None or hostdef.host.client.protocol is None:
                    try:
                        if hostdef.host is None:
                            # First time this node is seen: give it its own
                            # event loop running in a dedicated daemon thread.
                            host = COPYHOST()
                            host.loop = asyncio.new_event_loop()
                            host.t = Thread(target=start_loop, args=[host.loop])
                            host.t.daemon = True
                            host.t.start()
                        else:
                            # Reconnect path: reuse the node's existing loop
                            # and thread rather than leaving `host` undefined.
                            host = hostdef.host
                        assert host.loop.is_running()
                        asyncio.set_event_loop(host.loop)
                        host.loop, host.client = ModbusClient(
                            schedulers.ASYNC_IO, host=hostdef.IP,
                            port=hostdef.port, loop=host.loop)
                        hostdef.host = host
                        host.future = asyncio.run_coroutine_threadsafe(
                            self.start_async_test(hostdef, hostdef.host.client.protocol,
                                                  hostdef.job, hostdef.time),
                            loop=host.loop)
                        host.future.add_done_callback(done)
                        UTILS.LOGGER.info("Made host on {}".format(hostdef.key))
                    except Exception:
                        UTILS.LOGGER.info("Failed to make host on {}".format(hostdef.key))
                try:
                    self.manage_jobs(hostdef)
                except Exception:
                    pass
            time.sleep(0.05)



    async def start_async_test(self, hostdef, client, job, job_start_time):

        while True:

            if client is None:
                await asyncio.sleep(1)
                continue

            current_milli_time = UTILS.UTILS.time_ms()

            if job is not None and job.state == 3 and hostdef.oqueue.qsize() == 0:
                # Transfer complete: hand a copy of the finished job back on
                # the outbound queue and go idle.
                assert job_start_time != 0
                job.state = 4
                fjob = deepcopy(job)
                hostdef.oqueue.put(fjob)
                job = None
                job_start_time = 0

            if job is None and hostdef.iqueue.qsize() != 0:
                # Pick up the next job from the inbound queue.
                job = hostdef.iqueue.get()
                job.state = 1
                job_start_time = current_milli_time

            if job is not None and job.dir == 'D' and job.state == 1:
                # Destination node: write the buffered values.
                job.state = 2

                if job.SD.type == '%M':
                    rq = await client.write_coils(job.SD.start, job.buffer, unit=UNIT)
                    job.state = 3
                    if rq.function_code < 0x80:
                        job.Fault = False
                    else:
                        job.Fault = True
                        assert False
                elif job.SD.type == '%MW':
                    rq = await client.write_registers(job.SD.start, job.buffer, unit=UNIT)
                    job.state = 3
                    if rq.function_code < 0x80:
                        job.Fault = False
                    else:
                        job.Fault = True
                        assert False
                else:
                    assert False

            elif job is not None and job.dir == 'S' and job.state == 1:
                # Source node: read the PLC data into the job buffer.
                job.state = 2

                if job.SD.type == '%M':
                    rr = await client.read_coils(job.SD.start, job.SD.size, unit=UNIT)
                    job.state = 3
                    if rr.function_code < 0x80:
                        job.Fault = False
                        job.buffer = rr.bits
                    else:
                        job.Fault = True
                        job.buffer = None
                        assert False
                elif job.SD.type == '%MW':
                    rr = await client.read_holding_registers(job.SD.start, job.SD.size, unit=UNIT)
                    job.state = 3
                    if rr.function_code < 0x80:
                        job.Fault = False
                        job.buffer = rr.registers
                    else:
                        job.Fault = True
                        job.buffer = None
                        assert False
                else:
                    assert False

            # Yield control on every pass; a coroutine that never awaits while
            # idle monopolises its event loop, and the client's I/O runs on
            # that same loop.
            await asyncio.sleep(0.01)
  • Hello George, I think I might be able to help, but I don't completely understand your code. Can you explain why you are using a single `ModbusClient` instance? If I understood correctly, you are implementing a kind of forwarder between two PLCs, mirroring registers from one to the other. Are you opening and closing the connection for every transaction? That would explain the overhead you are observing (see the first sketch after this thread). – Marcos G. Jun 16 '20 at 07:00
  • Thank you Marcos. The code opens connections to all hosts involved in copy operations, and these connections are never closed. (So much so that the copy stops if I disconnect and reconnect the PLC.) – George V Jun 16 '20 at 21:02
  • You're welcome George. That makes sense. But in the code you posted there is only one instance of `ModbusClient`; have you posted all your code? Maybe you could also mention what PLCs you are working with. In my experience, most old PLCs only allow a certain number of Modbus TCP connections and handle all of them with the same low-priority process. If that's the case, I'm afraid there is nothing you can do about it. You might want to create your own virtual Modbus servers on the same computer you are running the client on, to see whether the problem is in your code or elsewhere (see the second sketch after this thread). – Marcos G. Jun 17 '20 at 06:04
  • Hello Marcos, `host.loop, host.client = ModbusClient(schedulers.ASYNC_IO, host=hostdef.IP, port=hostdef.port, loop=host.loop)` generates multiple clients in a cycle. I tested the code with a Modbus simulator I wrote earlier; the program has not seen a real PLC yet. – George V Jun 17 '20 at 10:00
  • Yes, thank you for explaining. I had seen that line, but what I meant is that, as it stands, your code is not testable. Are you running all the simulated servers and clients on the same machine? A PLC is (or should be, if it's a good one) nothing like a PC with regard to computing and network performance. If what you are doing is just a theoretical exercise, I'm afraid it might be quite futile. What I mean is: why would you expect two threads to be faster? In my opinion, your question has nothing to do with Modbus in general or pymodbus in particular; it has more to do with TCP/IP performance. – Marcos G. Jun 17 '20 at 10:40
  • There are many discussions about this topic, by the way. See, for instance, [this one](https://stackoverflow.com/questions/9651570/whats-faster-sending-multiple-small-messages-or-less-longer-messages-with-tcp-so). You'll have to account for the overhead of the headers and everything, but it's likely that buffering on the network stack is delaying things long enough for you to observe what you described (see the batching sketch after this thread). – Marcos G. Jun 17 '20 at 10:46
  • If you are curious enough to jump into the rabbit hole, I'd suggest you look into [Wireshark](https://www.wireshark.org/) if you are not familiar with it. There is a ton of things happening in the magic background of TCP. You can even check what happens when you have everything running on the same computer, but as I said, it might be quite futile; better to wait until you have your PLCs up and running. – Marcos G. Jun 17 '20 at 10:54
  • Thank you Marcos. Right now everything (copier and simulators) is running on the same laptop. Thank you for the suggestions; I will try using multiple computers (and ports), and indeed Wireshark may be useful. What I expected is that I could improve performance by adding additional threads to the copy process. This is not a theoretical exercise: we will use the code in a project, and the performance of the copy is important. Later today I will do some testing with real PLCs. – George V Jun 19 '20 at 00:31
  • You're welcome George. That's a very interesting topic indeed. Once you move to real PLCs, the overheads will be completely different. What OS are you planning to use for your forwarder? If timing is critical, I'm not sure Windows would be up to the task. It sounds a bit strange that you need this kind of thing at all; in general you would just replicate from one PLC to the other directly. – Marcos G. Jun 19 '20 at 05:31
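
Regarding the per-transaction connection question above, here is a minimal sketch of the persistent-connection pattern George describes, using the pymodbus 2.x synchronous API; the host, port, unit ID, and register counts are made up for illustration:

    from pymodbus.client.sync import ModbusTcpClient

    client = ModbusTcpClient('192.168.0.10', port=502)  # hypothetical PLC address
    client.connect()                                    # connect once ...
    for _ in range(70):
        # ... reuse the same TCP session for every frame ...
        rr = client.read_holding_registers(0, 10, unit=1)
    client.close()                                      # ... and close once at the end

Reconnecting inside the loop instead would add a TCP handshake and teardown to every single transaction, which is the kind of overhead Marcos suspects.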
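On the virtual-server suggestion, a sketch of a local test server with pymodbus 2.x follows; the port and data-block sizes are arbitrary. Running a few of these on different ports lets the copier talk to 127.0.0.1, which rules out PLC-side connection limits:

    from pymodbus.server.sync import StartTcpServer
    from pymodbus.datastore import (ModbusSequentialDataBlock,
                                    ModbusSlaveContext, ModbusServerContext)

    store = ModbusSlaveContext(
        co=ModbusSequentialDataBlock(0, [False] * 200),  # coils (%M)
        hr=ModbusSequentialDataBlock(0, [0] * 200),      # holding registers (%MW)
    )
    context = ModbusServerContext(slaves=store, single=True)
    StartTcpServer(context, address=("127.0.0.1", 5020))  # blocks; run one per port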
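And on the small-versus-large-messages point: with Modbus TCP the cheapest win is usually batching, because one request for a block of registers costs a single round trip while per-register requests cost one each. The addresses and counts below are made up, and `client` is assumed to be a connected synchronous client as in the first sketch:

    # slow: one round trip per register (100 request/response exchanges)
    values = [client.read_holding_registers(i, 1, unit=1).registers[0]
              for i in range(100)]

    # fast: one round trip for the whole block
    # (the Modbus spec allows up to 125 registers per read request)
    values = client.read_holding_registers(0, 100, unit=1).registers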

0 Answers