
I have a problem due to different UART driver behaviour when porting an application from an old ARM-based system to a new one. Both are embedded Linux systems: one an Atmel AT91 with kernel 2.6.14, the other a Freescale iMX6 with 3.14.38. My application is written in C.

The old one seems to have 10 receive buffers of 50 bytes each (I saw this in the kernel source), while the new one seems to have at least one buffer of 4096 bytes (deduced from testing).

This means that when I'm attempting to read from the port, on the old system I would need to wait at most 50 character times before I get some data, while on the new one I may potentially have to wait 4096 character times before I get any data. This is due to the DMA operation in the UART driver: the driver won't pass any data on until either the buffer is full or the end of the transmission has been detected.

This wouldn't be a problem if I knew that I would get a response every single time (i.e. the transmission takes what it takes, depending on the amount of data), but when acting as a master communicating with slaves on a bus you may be making requests to devices that are not there, so you won't get a response. In this scenario my timeout configurations need to be very different, with the old system giving me a much faster response to a timeout.

I'll illustrate this with an example. I have a function that reads from the port until there is no more data; "read" blocks until there is some data or a timeout has passed.
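
In simplified form the function looks something like this (a sketch only; port_fd, the buffer and the timeout value are placeholders, not my real code):

#include <stddef.h>
#include <sys/select.h>
#include <sys/types.h>
#include <unistd.h>

/* Keep reading until the line goes idle for timeout_ms (sketch only).
 * For simplicity the same timeout covers both "no reply at all" and
 * the idle gap that ends a reply. */
ssize_t read_until_idle(int port_fd, unsigned char *buf, size_t bufsize, int timeout_ms)
{
    size_t total = 0;

    for (;;) {
        fd_set rfds;
        struct timeval tv;

        tv.tv_sec = timeout_ms / 1000;
        tv.tv_usec = (timeout_ms % 1000) * 1000;

        FD_ZERO(&rfds);
        FD_SET(port_fd, &rfds);

        /* Wait until the driver has data for us, or give up. */
        int rc = select(port_fd + 1, &rfds, NULL, NULL, &tv);
        if (rc < 0)
            return -1;          /* select() failed */
        if (rc == 0)
            break;              /* line idle: transmission over (or no reply) */

        ssize_t n = read(port_fd, buf + total, bufsize - total);
        if (n <= 0)
            return -1;          /* read() failed or port closed */

        total += (size_t)n;
        if (total == bufsize)
            break;              /* caller's buffer is full */
    }
    return (ssize_t)total;
}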

If the transmission is 2048 bytes:

  • On the old system the function reads 50 bytes 20 times and then 48 bytes.
  • On the new system the function reads 2048 bytes in one go.

Assuming 8N1 framing (10 bit times per byte):

  • At 9600 baud 50 bytes take 52 milliseconds
  • At 9600 baud 2048 bytes take 2.13 seconds
  • At 9600 baud 4096 bytes take 4.26 seconds

Because I don't know the length of the reply I'm going to get, I have to assume the worst-case scenario: the reply can be > 4096 bytes.

On my old system I could have configured the port to time out at ConfiguredTimeoutTime + 52 milliseconds. On the new one I'll have to set it to ConfiguredTimeoutTime + 4260 milliseconds.

The ConfiguredTimeoutTime is a grace time I give the slaves to receive my request, process it and create a response. It would depend on the types of devices on the bus.
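
To make the arithmetic explicit, this is how I work out the worst-case timeout (a sketch; the names are placeholders, and 8N1 framing at 10 bit times per byte is assumed):

/* Worst-case read timeout: grace time for the slave to respond plus the
 * time the driver can sit on a full receive buffer before handing it
 * over to userspace. */
long worst_case_timeout_ms(long grace_ms, long buffer_bytes, long baud)
{
    long drain_ms = (buffer_bytes * 10L * 1000L) / baud;
    return grace_ms + drain_ms;
}

/* Old system: worst_case_timeout_ms(grace, 50, 9600)   -> grace +   52 ms
 * New system: worst_case_timeout_ms(grace, 4096, 9600) -> grace + 4266 ms */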

This is a considerable difference, meaning that each non-responding slave introduces a delay of over 4 seconds in my polling cycle.

The questions are:

  • Is there anything I can do from my code to get results similar to those in my old system?
  • Is there anything I can ask the provider of the new system to change when building the kernel?
  • Am I completely missing the point and there is a much better way of doing this on both systems?

Sorry for the length of the post, I couldn't see how to condense it any further! Thanks!

  • Since you don't specify the exact UART driver it's hard to diagnose exactly. My first suggestion would be to see if there is some kind of flush-buffer operation. If there is, you could flush the buffer at the old timeout rate and, if it was empty, simply assume the device timed out. http://stackoverflow.com/questions/13013387/clearing-the-serial-ports-buffer I linked the flush operation from some random kernel C code, but without knowing your specific driver I'm not sure if these are what you need. – arduic Jul 18 '16 at 12:33
  • @arduic: Wouldn't I lose the data if I flush the port? – Dan Jul 18 '16 at 13:32
  • Yes it would. I was offering a solution for the initial check if the device was there, where I assumed the data wasn't super critical. If you have control of what the slaves are transmitting you can configure the tty device to identify a certain character as EOF forcing it to send the buffer to the program. The flag is called VEOF http://linux.die.net/man/3/tcflush if you need help on using that let me know. If you cannot change what the slaves transmit then I'll try to find an alternative. – arduic Jul 18 '16 at 13:46
  • @arduic: I see what you are saying, but unfortunately devices can drop as they fail, or we could have intermittent faults. Also, no, I don't have any control over the slaves. – Dan Jul 18 '16 at 13:51
  • (Sorry my @ flag is not working) Last 2 ideas I have. 1. I believe I found how to change the buffer size for you. Under IOCTL there is "TIOCSSERIAL" which you can find examples for here. http://www.home.unix-ag.org/simon/files/serial-linux.c The struct it uses for settings is called struct serial_struct and the definition for it can be found here. http://lxr.free-electrons.com/source/include/uapi/linux/serial.h#L18 2. A much dirtier option, you could set the baud to 9600*8. Then read each byte as a bit. This would fill your buffer 8* as fast but would require some really unclean code. – arduic Jul 18 '16 at 14:12
  • To be clear, the struct has a field called "int xmit_fifo_size;", which I BELIEVE sets the buffer size. It's Linux kernel code, so god forbid we put comments in to describe what that field does. (A sketch of querying and setting it follows this comment thread.) – arduic Jul 18 '16 at 14:16
  • @arduic: Thanks for the effort. I believe it would take a bit more than just changing the size of the buffer, but it may yet come down to reimplementing the driver with the 10 * 50 buffers, or 20 * 50 to take advantage of the extra resources of the new system. Still, let's see if anyone can suggest any other way. – Dan Jul 18 '16 at 14:37
  • @arduic *"A much dirtier option, you could set the baud to 9600*8. Then read each byte as a bit"* -- You mean try to read each bit as a "byte". But the UART is incapable of that because of framing requirements, which you seem to be completely unaware of. – sawdust Jul 18 '16 at 21:41
  • @sawdust I honestly assumed there was some issue with such a concept but I couldn't for the life of me come up with it at the time. I totally forgot about the parity/stop bits for some reason. My bad there. – arduic Jul 19 '16 at 00:23
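
For anyone trying arduic's TIOCSSERIAL suggestion, here is a minimal sketch of querying and setting xmit_fifo_size (the device path is a placeholder, and it is not confirmed that the iMX6 driver honours a changed value):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/serial.h>

int main(void)
{
    /* Device path is an assumption; use the port the application opens. */
    int fd = open("/dev/ttymxc1", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    struct serial_struct ss;
    if (ioctl(fd, TIOCGSERIAL, &ss) < 0) {
        perror("TIOCGSERIAL");          /* the driver may not implement it */
        close(fd);
        return 1;
    }
    printf("xmit_fifo_size = %d\n", ss.xmit_fifo_size);

    /* Try to shrink it; the driver is free to ignore or reject this. */
    ss.xmit_fifo_size = 50;
    if (ioctl(fd, TIOCSSERIAL, &ss) < 0)
        perror("TIOCSSERIAL");

    close(fd);
    return 0;
}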

1 Answer


Try setting the read timeout to 100 milliseconds, as shown below, to make the read behaviour independent of the different buffer sizes. When read returns it will either have data or not.

currentconfig.c_cc[VTIME] = 1;  /* read timeout in tenths of a second: 1 = 100 ms */
currentconfig.c_cc[VMIN] = 0;   /* read() may return even if no bytes arrived */
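
In context, applying these settings might look like this (a minimal sketch assuming the port is already open on fd; error handling omitted):

#include <termios.h>
#include <unistd.h>

/* fd is an already-open serial port descriptor */
void set_read_timeout(int fd)
{
    struct termios currentconfig;

    tcgetattr(fd, &currentconfig);
    cfmakeraw(&currentconfig);            /* raw mode: no line processing */
    currentconfig.c_cc[VTIME] = 1;        /* give up after 100 ms...      */
    currentconfig.c_cc[VMIN] = 0;         /* ...even with no bytes read   */
    tcsetattr(fd, TCSANOW, &currentconfig);
}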

Try some simple experiments with this serial port library. The applications folder contains many read-method designs for different scenarios.

  • Setting VTIME to 1 (100 ms) makes no difference. The data is not available to be read until the driver has made it available, as per my post. – Dan Sep 01 '16 at 09:26
  • The library is for Java, which is not one of my skills. Also I don't think I have Java available for these platforms to try anything. Thanks anyway. – Dan Sep 01 '16 at 09:28