
I'm programming on an STM32 board and I'm confused about how to use my peripherals: polling, interrupts, DMA, DMA with interrupts...

Actually, I coded a UART module which sends basic data, and it works in polling, interrupt, and DMA modes.

But I'd like to be able to send and receive specific frames with variable lengths, for example:


[ START | LGTH | CMD_ID | DATA(LGTH) | CRC ]
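For reference, that layout could be sketched in C along these lines (the start-marker value, the 4-byte overhead, and the simple XOR stand-in for the CRC are all assumptions for illustration):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical byte layout for the frame above:
   [ START | LGTH | CMD_ID | DATA(LGTH) | CRC ]
   Marker value and CRC width are assumptions. */
enum {
    FRAME_START    = 0xAA, /* assumed start marker */
    FRAME_OVERHEAD = 4     /* START + LGTH + CMD_ID + 1-byte CRC */
};

/* Total bytes on the wire for a payload of data_len bytes. */
static size_t frame_total_len(uint8_t data_len)
{
    return (size_t)data_len + FRAME_OVERHEAD;
}

/* Simple XOR checksum over LGTH, CMD_ID and DATA
   (a stand-in for a real CRC). */
static uint8_t frame_crc(const uint8_t *frame)
{
    uint8_t crc = 0;
    uint8_t len = frame[1];
    for (size_t i = 1; i < 3u + len; i++) /* skip START, cover LGTH..DATA */
        crc ^= frame[i];
    return crc;
}
```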

I also have sensors, and I'd like the DATA received in these UART frames to interact with the sensors.

So, what I don't understand is:

  • how to program the UART module to work in "frame" mode? (buffer? circular DMA? interrupt? where, when..)

  • when I'm able to send or receive frames with my UART, what is the best way to interact with the sensors? (inside a timer interrupt? in a state machine? with extern variables? ...)

Here is my Libraries tree

In the future, the idea is to port this application to FreeRTOS.

Thank you!

guillaume
  • If you have DMA, then the only reason why you would ever use interrupts is when you have hard real-time requirements. Otherwise, avoid them. A circular DMA buffer probably makes most sense for received serial data. You just have to ensure that you read it often enough so that it can never overflow. – Lundin Apr 12 '17 at 12:30
  • "what is the best way to interact with sensors?" is a very broad question that probably can't be answered. "Best" leads to opinion-based answers. – Lundin Apr 12 '17 at 12:31
  • UARTs don't work in "frames." They only send characters of some number of data bits, surrounded by a start bit and stop bit. It sounds like you have some higher-level protocol that you'd like to use over the UART link, which is just fine. You just break your frames down into the individual characters that constitute them and send them one by one. On the other end, receive them one by one and decode the packet. Your UART driver can be interrupt-driven, polling, and/or DMA-driven, depending on your application's needs. – Jason R Apr 12 '17 at 12:31
  • @Lundin: It's pretty common to still use interrupts in concert with DMA, for instance so you know when a DMA finishes so you can reload its source/destination address registers as needed. – Jason R Apr 12 '17 at 12:33
  • This is a massive question; I don't know where to start. Mostly it depends completely on your protocol. Is it a standard, like Modbus, or one you've designed yourself? Are the frames fixed length (it appears not)? Is it peer-to-peer, or master and slave or ACK/NAK? Whatever the protocol, don't implement it in the UART driver: that's for controlling the hardware (except that some old protocols, like Modbus, need layer 2 & layer 3 all muddled up). – Evil Dog Pie Apr 12 '17 at 12:34
  • @JasonR Yes. The interrupt-handler can just set a semaphore and request a scheduler run, so setting a thread ready to process the received DMA buffer. – ThingyWotsit Apr 12 '17 at 12:45
  • @JasonR That would reduce the interrupt frequency compared to plain UART rx interrupts, but it is still an interrupt, which should be avoided if possible - particularly asynchronous interrupts. If you can set aside enough RAM for the DMA buffer, interrupts shouldn't be necessary. Based on UART baudrate, it is easy to calculate how often the buffer needs to be read to handle the worst case - continuous reception. It's just a matter of allocating enough RAM. – Lundin Apr 12 '17 at 12:48
  • @Lundin: I just wouldn't agree with a blanket statement to "avoid interrupts." They're used all the time, for good reasons, even in non-hard-real-time systems. – Jason R Apr 12 '17 at 12:49
  • Avoiding interrupts means no preemptive multitasker. That pretty much is the end of good I/O performance overall. Still, I suppose, in a uController, it may not matter much.... – ThingyWotsit Apr 12 '17 at 12:55
  • @JasonR They are also implemented incorrectly in, I would guess, 80% of the cases. Programmers screw up flag clearing, or they screw up semaphores, they get tricked by compiler optimizations (no volatile on shared variables), they don't consider max stack usage, they don't consider worst-case interrupt latency and overall real-time performance. Etc etc. – Lundin Apr 12 '17 at 12:56
  • @ThingyWotsit If you have a RTOS, then you have hard real-time requirements, and you'll obviously have to (let the RTOS) use interrupts. – Lundin Apr 12 '17 at 12:56
  • @Lundin: We'll have to agree to disagree on this one. Yes, there are some subtleties to using interrupts correctly, but you can say that about almost any tool to do a job. Also, usage of an RTOS does not imply hard-real-time requirements. RTOSes are commonly used when the services that they provide (e.g. multitasking, synchronization, possibly things like file/network I/O) are useful for the application writer, but not necessarily in service of hard real-time requirements. – Jason R Apr 12 '17 at 12:58
  • Just FYI, I can't remember the last time when I wrote a program without interrupts in it. It is often a necessary evil that should be used with caution and as a last resort. – Lundin Apr 12 '17 at 13:02

2 Answers


Absolutely use DMA when it is available.

You have one big buffer (a circular buffer is a good solution) and you write data into it from one side. If a DMA transfer is not already running, you start one from your buffer.

If a transfer is already running, you just write your data to the buffer and wait for the DMA transfer-complete interrupt.

In that interrupt you advance the buffer's read pointer (since some data has already been sent) and check whether any more data is available to send over DMA. If so, set the DMA memory address and the number of bytes to send.

When the next DMA TC IRQ fires, repeat the process.
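That transmit path can be sketched like this (the buffer size and all names are assumptions; `dma_start()` only records the request here, standing in for the vendor-specific call that arms the DMA, e.g. `HAL_UART_Transmit_DMA` on STM32):

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

#define TX_BUF_SIZE 256u          /* a power of two keeps the wrap cheap */

static uint8_t tx_buf[TX_BUF_SIZE];
static volatile size_t tx_read;   /* advanced by the DMA TC interrupt */
static volatile size_t tx_write;  /* advanced by the application */
static volatile bool dma_busy;

/* In this sketch, dma_start() only records the request; on real hardware
   it would program the DMA controller (e.g. HAL_UART_Transmit_DMA). */
static const uint8_t *dma_last_src;
static size_t dma_last_len;
static void dma_start(const uint8_t *src, size_t len)
{
    dma_last_src = src;
    dma_last_len = len;
}

/* App side: queue bytes and kick the DMA if it is idle. */
static void uart_send(const uint8_t *data, size_t len)
{
    for (size_t i = 0; i < len; i++)
        tx_buf[(tx_write + i) % TX_BUF_SIZE] = data[i];
    tx_write = (tx_write + len) % TX_BUF_SIZE;

    if (!dma_busy) {
        /* send the contiguous chunk up to the wrap point */
        size_t chunk = (tx_write > tx_read) ? tx_write - tx_read
                                            : TX_BUF_SIZE - tx_read;
        dma_busy = true;
        dma_start(&tx_buf[tx_read], chunk);
    }
}

/* DMA transfer-complete IRQ: advance the read pointer and restart the
   DMA if more data was queued in the meantime. */
static void dma_tc_irq(size_t sent)
{
    tx_read = (tx_read + sent) % TX_BUF_SIZE;
    if (tx_read != tx_write) {
        size_t chunk = (tx_write > tx_read) ? tx_write - tx_read
                                            : TX_BUF_SIZE - tx_read;
        dma_start(&tx_buf[tx_read], chunk);
    } else {
        dma_busy = false;
    }
}
```

Note that when the data wraps around the end of the buffer, the DMA sends only the contiguous part; the TC interrupt then restarts it from the beginning of the buffer.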

The UART has no notion of a FRAME; it only moves plain bytes. That means you have to "invent" your own frame protocol and handle it in the application.

Later, when you want to send such a FRAME over UART, you have to:

  • Write start byte to buffer
  • Write other header bytes
  • Write actual data
  • Write stop bytes/CRC/whatever
  • Check whether the DMA is already running; if it is not, start it.

Normally, I use this frame concept:


[START, ADDRESS, CMD, LEN, DATA, CRC, STOP]

  • START: Start byte indicating start of frame
  • ADDRESS: Address of device when multiple devices are in use on bus
  • CMD: Command ID
  • LEN: 2 bytes for data length
  • DATA: Actual data in bytes of variable length
  • CRC: 2 bytes for CRC including: address, cmd, len, data
  • STOP: Stop byte indicating end of frame
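A sketch of serializing that frame (the marker values are assumptions, and CRC-16-CCITT is just one common choice for the 2-byte CRC):

```c
#include <stdint.h>
#include <stddef.h>

enum { F_START = 0x7E, F_STOP = 0x7F };  /* assumed marker values */

/* CRC-16-CCITT (poly 0x1021, init 0xFFFF) over ADDRESS..DATA; the answer
   only says "2 bytes for CRC", so this polynomial is an assumption. */
static uint16_t crc16_ccitt(const uint8_t *p, size_t n)
{
    uint16_t crc = 0xFFFF;
    while (n--) {
        crc ^= (uint16_t)(*p++) << 8;
        for (int i = 0; i < 8; i++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

/* Serialize one frame into out (must hold len + 8 bytes).
   Returns the number of bytes written. */
static size_t frame_build(uint8_t *out, uint8_t addr, uint8_t cmd,
                          const uint8_t *data, uint16_t len)
{
    size_t n = 0;
    out[n++] = F_START;
    out[n++] = addr;
    out[n++] = cmd;
    out[n++] = (uint8_t)(len >> 8);
    out[n++] = (uint8_t)(len & 0xFF);
    for (uint16_t i = 0; i < len; i++)
        out[n++] = data[i];
    uint16_t crc = crc16_ccitt(&out[1], n - 1); /* addr, cmd, len, data */
    out[n++] = (uint8_t)(crc >> 8);
    out[n++] = (uint8_t)(crc & 0xFF);
    out[n++] = F_STOP;
    return n;
}
```

The output of `frame_build` is exactly what gets written into the transmit buffer before kicking the DMA.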

This is how I do it in every project where necessary. This approach does not use the CPU to send data; it just sets up the DMA and starts the transmission.

From the app's perspective, you just have to create a send_frame(data, len) function which builds the frame and puts it into the buffer for transmission.

Buffer size must be big enough to fit your requirements:

  • How much data you send at a given time (is it continuous, or a lot of data in a short time?)
  • UART baudrate

For specific questions, ask and maybe I can provide some code examples from my libraries as reference.

unalignedmemoryaccess
  • This doesn't look like a robust serial protocol. 1) `START` & `STOP` are going to be a pain unless DATA is restricted so that it _cannot ever_ contain either, as interrupted frames will cause erroneous resynchronisation mid-data. 2) `CMD` is application-specific and should be in a higher layer, not the frame. 3) You're going to need a bit more than just `ADDRESS` in a multi-drop bus implementation, but these will usually have some hardware impact, such as time slots or handshaking (like RTS/CTS on RS232). 4) Even `CRC` may be a bit too much, unless you have a noisy, error-prone connection. – Evil Dog Pie Apr 12 '17 at 12:57
  • Thanks for your comment. I'm happy to give you answers. 1) With START I know when the protocol starts, so that I know when to start my state machine. For the `CMD` part, I use it for read/write/broadcast. 2) With STOP I just check if STOP is there. Even if STOP appears inside the data, you still have the data length info. I use this in single-master mode; in multi-master, the address is split into `sender` and `receiver`. CRC is used to check everything between START and STOP. For parsing the packet, there is a state machine, and this concept is so far very robust, even in a very noisy (EMC) environment as well as in program flow. – unalignedmemoryaccess Apr 12 '17 at 13:01
  • It works for your system so it can't be all bad. :-) The explanation of each component will be useful to the OP. Plus, the difference in our opinions highlights that the choice of protocol is strongly dependent on the system requirements and operating environment. – Evil Dog Pie Apr 12 '17 at 13:19
  • I totally agree with you about the protocol choice. If you want to make it work for everyone, then as you mentioned, put the CMD part inside DATA and parse it in the APP yourself. – unalignedmemoryaccess Apr 12 '17 at 13:21
  • @MikeofSST CRC is absolutely necessary when using UART or RS-232. Though of course there are not many reasons to use RS-232 nowadays - PCs don't have it any longer. So if stuck with UART-based interfaces, you'd use RS-422/485. – Lundin Apr 12 '17 at 13:34
  • @Lundin Some kind of checksum, or error detection, sure, but not necessarily CRC. If you do go for a CRC, which polynomial do you use? Getting the right one is tricky unless you do a rigorous data and fault analysis. Then, if you find that you do need a CRC over a checksum, you'd probably consider using something more robust at the hardware layer, or extending your protocol to use error correction instead of just detection. CRC always strikes me as being not quite right for most jobs, but pretty easy to find example code to make it work. – Evil Dog Pie Apr 12 '17 at 13:58
  • In most cases, it is enough to detect a "packet" error and then request a retransmit yourself, just like it is done in Ethernet or elsewhere. If you don't want CRC, you can just XOR your data bytes or something, just to know whether the packet is valid or not. I use CRC16, or when I work with an STM32 MCU, CRC32 is the best fit (the same as in Ethernet). – unalignedmemoryaccess Apr 12 '17 at 14:00
  • @MikeofSST CRC-16-CCITT 0x1021 should be perfectly fine in the general case and there are MCUs with on-chip hardware support for it. Of course it all depends on the amount of data getting moved. It is rather that _UART_ (and friends) is the wrong tool for most tasks... for high data integrity you'd use CAN and for high speed you'd use ethernet. – Lundin Apr 12 '17 at 14:25
  • The reason why you shouldn't use "byte sum", "xor", "parity" or some other home-brewn amateur solution, is because they have a very poor probability of catching single bit and double bit errors, whereas any CRC catches all/most of those. – Lundin Apr 12 '17 at 14:32
  • @Lundin XOR has probability 1 of catching single bit errors! LRC8 (byte sum) has similar performance. If neither of these is sufficient, you need to understand the source of the errors and then either pick an appropriate CRC, or change layer 1 (as you say, CAN and other physical layers have better error correction & detection built in). [This](https://users.ece.cmu.edu/~koopman/pubs/KoopmanCRCWebinar9May2012.pdf) has a good analysis of some simple error detection mechanisms. – Evil Dog Pie Apr 12 '17 at 14:50
  • @tilz0R Thank you for your answer. What I finally did is: USART DMA on Rx with a fixed length, and I added padding. So, when my buffer is full, I get an interrupt and I "set" the boolean new_frame = TRUE. And in my state machine (inside my while(1)) I always "get" the value of new_frame, and if it's TRUE, I check the SOF, the CRC etc. to give it to my state machine. Is it a good way? – guillaume Apr 14 '17 at 07:59

In this case, where you need to implement that protocol, I would probably use plain interrupts and, in the handler, use a byte-by-byte state-machine to parse the incoming bytes into a frame buffer.

Only when a complete, valid frame has been received is it necessary to signal some semaphore/event and request a scheduler run. Otherwise, you can handle any protocol error as you require - maybe tx some 'error-repeat' message and reset the state machine to await the next start-of-frame byte.
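A byte-by-byte parser for the OP's [ START | LGTH | CMD_ID | DATA(LGTH) | CRC ] layout might look like this (the start-marker value and the 1-byte XOR checksum standing in for the CRC are assumptions for the sketch):

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define RX_START    0xAA   /* assumed start marker */
#define RX_MAX_DATA 64

typedef enum { ST_START, ST_LEN, ST_CMD, ST_DATA, ST_CRC } rx_state_t;

typedef struct {
    rx_state_t st;
    uint8_t len, cmd, got;
    uint8_t data[RX_MAX_DATA];
    uint8_t crc;               /* running XOR over LGTH, CMD_ID, DATA */
} rx_parser_t;

/* Feed one received byte (called from the UART RX interrupt handler).
   Returns true exactly when a complete, valid frame has been parsed. */
static bool rx_feed(rx_parser_t *p, uint8_t b)
{
    switch (p->st) {
    case ST_START:
        if (b == RX_START) p->st = ST_LEN;
        break;
    case ST_LEN:
        if (b > RX_MAX_DATA) { p->st = ST_START; break; } /* protocol error */
        p->len = b; p->crc = b; p->got = 0;
        p->st = ST_CMD;
        break;
    case ST_CMD:
        p->cmd = b; p->crc ^= b;
        p->st = (p->len > 0) ? ST_DATA : ST_CRC;
        break;
    case ST_DATA:
        p->data[p->got++] = b; p->crc ^= b;
        if (p->got == p->len) p->st = ST_CRC;
        break;
    case ST_CRC:
        p->st = ST_START;      /* reset either way, await next START */
        return b == p->crc;    /* valid frame only if the CRC matches */
    }
    return false;
}
```

The interrupt handler would call `rx_feed()` for each byte and, only on a `true` return, signal the semaphore/event (or set the flag) for the application to process the frame.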

If you use DMA for this, then the variable frame length is going to be awkward, and you STILL have to iterate over the received data to validate your protocol :(

DMA doesn't sound like a good fit for this, to me...

EDIT: if no preemptive multitasker, then forget about all that semaphore gunge above:) Still, it's easier to check a boolean 'validFrameRx' flag than parse DMA block data.

ThingyWotsit
  • `DMA doesn't sound like a good fit for this, to me...` DMA is the best fit for this and it sounds like Beethoven's music. – unalignedmemoryaccess Apr 12 '17 at 13:02
  • To write an interrupt-based ring buffer on a byte-per-byte basis, when you have DMA available, should be made criminal. What if your CPU has other things to do than continuously servicing the interrupt? Suppose the UART is using a high baudrate... you are creating a tight coupling between the UART baudrate and _everything else_ in your program. Someone changes the baudrate and suddenly some completely unrelated part of your program will crash and burn. – Lundin Apr 12 '17 at 13:06
  • STM32 UARTs can detect an **idle** line, generating an interrupt when no start bit arrives within the time needed to transmit 10 bits. So it's perfectly possible to receive variable length frames with DMA, without polling and checking something all the time. – followed Monica to Codidact Apr 12 '17 at 13:19
  • @berendi this is really perfect for it. You have DMA TC or UART IDLE interrupts to detect received data for processing. I use this concept when I use DMA for UART RX. – unalignedmemoryaccess Apr 12 '17 at 13:36
  • @Lundin My experience with DMA and a ring buffer for UART traffic did reveal significantly higher interrupt latency/handling with DMA vs a simple ring buffer. So for high bandwidth, DMA is certainly preferable. Yet for low bandwidth, critically timed UART data flow (it was a keyboard in my case, where a life-critical application needed to respond within a sub-fraction of a second), the ring buffer performed better. Use the implementation that best fits the need. – chux - Reinstate Monica Apr 12 '17 at 15:33
  • DMA is `always` better as you don't waste CPU on IRQs. Ok, maybe if you receive 1 byte per second, then the RXNE IRQ on the UART is probably better. But in general, DMA is better, and when it finishes with transmission or IDLE is detected, set new memory for the DMA and start it. Then copy your received data to (in most cases) a ring buffer to use later in the upper layer of the application. – unalignedmemoryaccess Apr 13 '17 at 06:23
  • @chux If you use DMA it should be possible to design the system so that you never need any interrupt at all (except maybe for UART errors like overrun, framing etc). If the DMA is configured to whine with interrupts, you lose most of the benefits. But I take it that with "interrupt latency" you mean general response time. Yes, that will be some 100us slower or whatever. If that is too slow, then you have hard real-time requirements and then need to use UART interrupts. – Lundin Apr 13 '17 at 08:42