
I have a Linux machine with two PCIe RS-485 cards (an XR17V354 and an XR17V352). One port on one card is hardwired to one port on the other. Both cards are driven by the generic serial driver (serial8250).

I am running a test to measure latency. One Linux process sends two bytes out the port and then listens for two incoming bytes; the other process receives the two bytes and immediately sends two bytes back.
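For concreteness, the initiating side of the test looks roughly like the sketch below (the device path /dev/ttyS0 is a stand-in, and the port is assumed to already be configured in raw mode at the right baud rate):

```c
/* Minimal sketch of the initiating side of the round-trip test.
 * Assumptions: device node /dev/ttyS0, port already configured
 * (raw mode, matching baud rate); error handling abbreviated. */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    unsigned char tx[2] = { 0xAA, 0x55 }, rx[2];
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    write(fd, tx, sizeof tx);

    /* Block until both echoed bytes are back. */
    for (size_t got = 0; got < sizeof rx; ) {
        ssize_t n = read(fd, rx + got, sizeof rx - got);
        if (n <= 0) { perror("read"); return 1; }
        got += n;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    long us = (t1.tv_sec - t0.tv_sec) * 1000000L +
              (t1.tv_nsec - t0.tv_nsec) / 1000;
    printf("round trip: %ld us\n", us);

    close(fd);
    return 0;
}
```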

I'm measuring the round-trip latency at around 1500 microseconds, with a standard deviation of about 40 microseconds, and I am trying to understand where that time goes. Specifically, I'd like to measure the interval between the hard IRQ firing to signal that data is ready to read and the moment the bytes are made available to the user-space process.

I am aware of ftrace, but I am not sure how best to utilize it, or whether there are other, more suitable tools. Thanks.

bsirang

1 Answer


What kind of driver is this? I assume it's a kernel-space driver and not UIO. Independent of your specific issue, you could start by looking at how long it takes to get from the hardware interrupt to the kernel driver, and from there to user space.
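For example, one way to get that number with ftrace is to enable the generic IRQ trace events and have the receiving process write to trace_marker the instant read() returns; the resulting trace then shows the gap between irq_handler_entry for the UART's IRQ and your marker. A rough sketch, assuming tracefs is mounted at the usual debugfs location and the process has permission to write there:

```c
/* Sketch: correlate the hard IRQ with the moment user space sees the
 * bytes.  Assumes tracefs at /sys/kernel/debug/tracing and sufficient
 * privileges; the serial port handling itself is elided. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define TRACEFS "/sys/kernel/debug/tracing"

static void echo(const char *file, const char *val)
{
    char path[256];
    snprintf(path, sizeof path, TRACEFS "/%s", file);
    int fd = open(path, O_WRONLY);
    if (fd >= 0) { write(fd, val, strlen(val)); close(fd); }
}

int main(void)
{
    /* Enable the generic IRQ entry/exit events and start tracing. */
    echo("events/irq/irq_handler_entry/enable", "1");
    echo("events/irq/irq_handler_exit/enable", "1");
    echo("tracing_on", "1");

    int marker = open(TRACEFS "/trace_marker", O_WRONLY);
    if (marker < 0) { perror("trace_marker"); return 1; }

    /* ... open the serial port and block in read() as in the test ... */
    /* Immediately after read() returns: */
    const char msg[] = "serial bytes visible in user space\n";
    write(marker, msg, sizeof msg - 1);

    /* Afterwards, inspect /sys/kernel/debug/tracing/trace: the gap
     * between the irq_handler_entry line for the UART's IRQ and this
     * marker line is the latency in question. */
    return 0;
}
```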

Here[1] is an ancient test case that can be hacked a bit so you can compare interrupt latencies on "standard" Linux, preempt-rt-patched Linux, and maybe something like Xenomai as well (although the Xenomai route would require rewriting your driver).

You might also want to have a look at [2] (cyclictest and friends), and maybe drill into your system with perf to see more detail system-wide.

Last but not least, have a look at LTTng[3], which lets you instrument your own code and already comes with many instrumentation points.
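For instance, newer lttng-ust releases ship a tracef() helper that makes it cheap to drop ad-hoc tracepoints around the read path. A sketch (the wrapper function name and build flags are illustrative, not from any particular project):

```c
/* Sketch: a one-line LTTng-UST instrumentation point around the read,
 * using the tracef() helper from newer lttng-ust releases.
 * Assumed build: gcc test.c -llttng-ust */
#include <lttng/tracef.h>
#include <unistd.h>

ssize_t traced_read(int fd, void *buf, size_t len)
{
    tracef("serial read: enter fd=%d", fd);
    ssize_t n = read(fd, buf, len);
    tracef("serial read: got %zd bytes", n);
    return n;
}
```

User-space events recorded this way can be correlated with kernel tracepoints in the same LTTng session, which puts the IRQ and the user-space read on one timeline.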

[1] http://www.denx.de/wiki/DULG/AN2008_03_Xenomai_gpioirqbench

[2] http://cgit.openembedded.org/openembedded-core/tree/meta/recipes-rt/rt-tests/

[3] http://lttng.org/

robert.berger