I'm working on a control system where the NIC interrupt is used as a trigger. This works very well for cycle times greater than 3 microseconds. Now I want to run some performance tests and measure the transmission time, i.e. the shortest time between two interrupts.
The sender sends 60-byte packets as fast as possible. The receiver should generate one interrupt per packet. I'm testing with 256 packets, which matches the size of the Rx descriptor ring. The packet data is not handled during the test; only the interrupt is of interest.
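To make the setup more concrete, here is a simplified sketch of what the receive interrupt handler does during the test. This is not the actual driver code; read_tsc() just stands in for whatever high-resolution time source is used:

```c
#include <stdint.h>

extern uint64_t read_tsc(void);   /* placeholder for the high-resolution timer */

static volatile uint32_t irq_count;
static uint64_t last_tsc;
static uint64_t min_delta = UINT64_MAX;

void rx_irq_handler(void)
{
    uint64_t now = read_tsc();

    /* track the shortest gap between two consecutive interrupts */
    if (irq_count > 0 && now - last_tsc < min_delta)
        min_delta = now - last_tsc;

    last_tsc = now;
    irq_count++;

    /* the packet data itself is deliberately not touched during the test */
}
```

At the end of the run, min_delta gives the shortest observed interval and irq_count the number of interrupts that actually occurred.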
The trouble is that reception is very fast, less than 1 microsecond between two interrupts, but only for around 70 interrupts/descriptors. Then the NIC sets the RDU (Rx Descriptor Unavailable) bit and stops receiving before reaching the end of the ring. The confusing thing is that when I increase the size of the Rx descriptor ring, e.g. to 2048, the number of interrupts increases too (to around 800). I don't understand this behavior; I expected it to stop again after about 70 interrupts.
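For context, this is roughly how I understand the Rx descriptor ring to be set up. It is a generic sketch with invented field names, not the real register layout of this NIC; every descriptor starts out owned by the NIC, and RDU is reported when a packet arrives and the NIC finds no descriptor that it owns:

```c
#include <stdint.h>

#define RING_SIZE 256              /* 2048 in the second test run */
#define DESC_OWN  (1u << 31)       /* descriptor is owned by the NIC */

struct rx_desc {
    uint32_t status;               /* OWN bit plus receive status */
    uint32_t length;               /* frame length written by the NIC */
    uint64_t buffer_addr;          /* DMA address of the receive buffer */
};

static struct rx_desc rx_ring[RING_SIZE];

/* Before the test, every descriptor is handed to the NIC. */
static void rx_ring_init(void)
{
    for (int i = 0; i < RING_SIZE; i++) {
        rx_ring[i].buffer_addr = 0;    /* buffer allocation omitted in this sketch */
        rx_ring[i].status = DESC_OWN;
    }
}
```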
It seems to be a timing problem, but why? I must be overlooking something, but what? Can somebody help me?
Thanks in advance!