
I am quite new to UART programming and I'm trying to understand the concept of the parity bit, which is still not entirely clear to me.

From what I understand so far:

Let's say I have 8 bits to transmit from UART deviceA to UART deviceB. Each time I want to send a byte to deviceB, a start bit is sent, then the 8 data bits, then the parity bit, and then the stop bit. OK, this is clear. Now, when deviceA is set to work with odd parity, the parity bit is set to 0 if the number of 1s in the data byte is already odd (so that the total number of 1s, parity bit included, comes out odd). And it's the opposite if deviceA is set to even parity. OK, I understand that too.

Now, when deviceB receives the frame, it checks, for the byte sent, that the parity bit is consistent with the number of 1s in the byte, and raises a parity error if not. But this deviceB also has a parity mode of its own.
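The rule described above can be sketched in a few lines (Python used purely for illustration; the function names `parity_bit` and `check_parity` are my own, not part of any UART API):

```python
def parity_bit(byte, mode="even"):
    """Compute the UART parity bit for one data byte.

    'even': the bit makes the total number of 1s (data + parity) even.
    'odd' : the bit makes the total number of 1s odd.
    """
    ones = bin(byte & 0xFF).count("1")
    if mode == "even":
        return ones % 2          # 1 if the data has an odd number of 1s
    else:                        # odd parity
        return 1 - (ones % 2)    # 1 if the data has an even number of 1s

def check_parity(byte, received_parity, mode="even"):
    """Receiver side: True if the frame passes the parity check."""
    return parity_bit(byte, mode) == received_parity

# 0b01101001 has four 1s (an even count):
print(parity_bit(0b01101001, "even"))  # 0
print(parity_bit(0b01101001, "odd"))   # 1
```

As the sketch shows, the receiver's check only gives a meaningful result if it recomputes the parity with the same `mode` the transmitter used.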

So my question is:

Should deviceA and deviceB be set to the same parity mode (even or odd) for this check to work as expected, or am I wrong?

Thanks for any help clarifying this point.

asked by Fryz

  • Yes, they both need to use the same parity. The parity mode is not something that is transmitted through the serial interface. – Some programmer dude Jun 20 '17 at 08:58
  • No, you are right... But the sentence should be _deviceA and deviceB **must be** set to the same parity mode_... BTW, how could they be different?... – LPs Jun 20 '17 at 08:58
  • The baud rate and word format must be the same for both the transmitter and the receiver. – Weather Vane Jun 20 '17 at 08:58
  • @LPs: I actually need to force the parity bit directly by accessing the UART register (master-side implementation) in order to respect the communication protocol between devices – Fryz Jun 20 '17 at 09:10
  • @Scab UART hardware typically does not provide such functionality. You cannot implement your protocol over UART hardware. Where did this requirement/protocol come from? Can you not just reject it and insist on a sane protocol? – ThingyWotsit Jun 20 '17 at 09:13
  • I'm voting to close this question as off-topic because that's a hardware question, not a software problem. – too honest for this site Jun 20 '17 at 09:15
  • @ThingyWotsit: If the UART supports 9-bit mode, or only 7 data bits are used and it supports 8-bit mode (once quite common), one can generate the parity bit manually. Both are supported by a lot of UARTs. – too honest for this site Jun 20 '17 at 09:19
  • As a sidenote: symbol parity is not really a good way to ensure integrity of communication. Spare the extra bits and use an appropriate CRC or hash. Better: use proper authentication/signature and encryption, if reasonable. – too honest for this site Jun 20 '17 at 09:22
  • I believe this is a software problem also, as I need to understand this clearly in order to maintain the master side of the MDB protocol. In this protocol, the parity bit is used as a mode bit, and we rely on parity errors to know whether this mode bit is set or not. The parity bit is here not used as usual; for data corruption we simply use a checksum at the end of the data transmission – Fryz Jun 20 '17 at 09:22
  • "In this protocol, the parity bit is used as a mode bit and we rely on the parity errors to know if this mode bit is set or not" - so you don't want a parity bit at all, but an extra bit for transmission. Get a UART/MCU with sufficient bits and write your software. Btw: this is a very bad idea and plain nonsense. That way the parity will not do any good, and you cannot use DMA on MCUs supporting this (because they will not transfer the parity bit and do no DMA on symbol errors). Seriously: broken by design! (Sounds like some 1970s/80s hackish protocol like X.25.) – too honest for this site Jun 20 '17 at 09:32
  • If that's what you mean https://en.wikipedia.org/wiki/Multidrop_bus#MDB_in_vending_machines then I'm correct: another rubbish protocol which causes trouble on modern hardware. – too honest for this site Jun 20 '17 at 09:35
  • @Olaf: yes, it's what I meant, and I totally agree with you, but I have to deal with existing code that communicates with devices using this protocol, so there is no way for me to change this. – Fryz Jun 20 '17 at 09:35
  • @Olaf 'only 7 data bits are used and it supports 8 bit mode' - sure, that's easy with all the UARTs I've ever used; unfortunately, OP: 'Let's say I have 8 bits to transmit from UART deviceA to UART deviceB', i.e. 8 bits used. 'If the UART supports 9 bit mode' - I've only seen that with the ninth parity bit auto-generated by the UART hardware. – ThingyWotsit Jun 20 '17 at 11:41
  • ...however, 'This is a very bad idea and plain nonsense', I could not agree more. How do such protocols come into being in the first place? Were all the designers/developers in the 70's on tequila or weed? – ThingyWotsit Jun 20 '17 at 11:43
  • The only way I could see to do this on all readily-available hardware is to use '8-bit plus parity mode' and a special driver that reconfigures the UART hardware parity for every character. Needless to say, I would only do that for $$$$ or at gunpoint, and even then I would want a brown paper bag over my head to prevent identification. – ThingyWotsit Jun 20 '17 at 11:48
  • @ThingyWotsit: A lot of UARTs support a 9-data-bit mode, sometimes called "multidrop/address bit" or 8 bits + parity. Some protocols use this bit to differentiate between an address/start-of-frame flag and data. Most UARTs supporting this also support a wakeup event/interrupt when this bit is set (ignoring all data which has it clear). See the STM32 family (likely their 8-bitters, too), HC11, S08, MSP430, etc. – too honest for this site Jun 20 '17 at 11:59
  • @ThingyWotsit: See the link above. As I wrote: some 1970s/80s rubbish protocol. That's the problem when pure hardware engineers design communication protocols without really knowing about software design. It was acceptable back then, but such rubbish is still being used and newly developed. – too honest for this site Jun 20 '17 at 12:03
  • @ThingyWotsit: You're right. 8 bits are used, and the ninth parity bit (not optional in the MDB protocol) is used as a mode bit and set with some LCR register bits of the UART that can force the parity bit to 0 or 1 for all transferred/received data. However, there is no need to configure it for each single byte in my case. – Fryz Jun 20 '17 at 12:21
  • This question is off-topic because it's not within the scope of questions appropriate for this site, as defined in [What topics can I ask about here?](http://stackoverflow.com/help/on-topic) Please also see: [What types of questions should I avoid asking?](http://stackoverflow.com/help/dont-ask) You may be able to get help on [another Stack Exchange site](http://stackexchange.com/sites#name), for example [electronics.se]. Specifically, this is an electronics question, not a programming question. – Makyen Jul 25 '17 at 23:30
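The mode-bit trick discussed in the comments above can be sketched as a pure-Python simulation (no real register access; the helper names `mdb_frame` and `receive_space_parity` are hypothetical, and the MDB details are assumptions from the linked Wikipedia article). The transmitter forces the 9th bit to 1 on address bytes and 0 on data bytes; a receiver whose UART is configured for space parity (9th bit expected to be 0) then sees a "parity error" exactly on address bytes:

```python
def mdb_frame(byte, mode_bit):
    """Transmitter side: 8 data bits plus a forced 9th (mode) bit.

    In MDB the 9th bit marks an address byte (1) vs. a data byte (0);
    on a 16550-style UART this is done with the stick-parity bits in
    the LCR rather than computed from the data.
    """
    return (byte & 0xFF, mode_bit)

def receive_space_parity(frame):
    """Receiver configured for space parity (expects the 9th bit == 0).

    Returns (byte, parity_error); the "error" flag doubles as the
    mode-bit indicator, which is how the code in question uses it.
    """
    byte, ninth = frame
    return byte, (ninth != 0)

addr = mdb_frame(0x08, 1)   # address byte: mode bit set
data = mdb_frame(0x42, 0)   # data byte: mode bit clear
print(receive_space_parity(addr))  # (8, True)  -> flagged = address byte
print(receive_space_parity(data))  # (66, False) -> clean = data byte
```

This also illustrates why the approach is fragile: a genuine line error on a data byte is indistinguishable from an address byte at this level, which is what the commenters object to.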

1 Answer


You have understood the concept of parity quite clearly. It is a way to eliminate errors that can occur during the transmission of a bit sequence.

So both devices should know which parity is in use, so that the count of 1s means the same thing on the sending side and on the receiving side. If you define a different parity for each machine, the two will interpret the bits differently.
[Table image: "Parity Description" - parity bit values under odd and even parity]

With this table you can see how the parity bit changes between odd and even parity for the same number. Hope this helps :)
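A minimal sketch (Python, illustrative only; `parity_bit` is my own name) that reproduces the kind of table shown above: for any data byte, the even-parity and odd-parity bits are complements of each other:

```python
def parity_bit(byte, mode):
    """Parity bit for one byte: 'even' or 'odd' total count of 1s."""
    ones = bin(byte & 0xFF).count("1")
    return ones % 2 if mode == "even" else 1 - ones % 2

print("data      ones  even  odd")
for b in (0b00000000, 0b1010101, 0b1111111, 0b1000000):
    print(f"{b:08b}  {bin(b).count('1'):4d}"
          f"  {parity_bit(b, 'even'):4d}  {parity_bit(b, 'odd'):3d}")
```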

answered by Vishwajeet Vishu
  • I said it is a *way* to eliminate. I have not said that it removes the error. And the parity bit is surely used for error detection and for sending the signal back again – Vishwajeet Vishu Jun 20 '17 at 09:24
  • Maybe we have different dictionaries defining "eliminate". You should post your definition. Also, parity does not send anything back (which does not make any sense actually). You might have the correct idea, but your wording is very confusing. – too honest for this site Jun 20 '17 at 09:27
  • @Olaf you understand me wrong: if a parity error is detected, then the signal will be sent one more time. – Vishwajeet Vishu Jun 20 '17 at 09:30
  • No, it will not. 1) A UART as such does not send anything in response, except for handshake. 2) This depends on the higher-level protocol. There is no automatism, nor a requirement, nor do most UART-based protocols work that way. 3) As a sidenote: symbol parity is not really useful in UART communication. It is an anachronism. – too honest for this site Jun 20 '17 at 09:39
  • Yes, indeed it is not used for most UART communication, but when it is used, why would anyone want to enable it? Just for detecting and handling errors in the bit sequence. And if you detect a bit error, will you not eliminate it, even if some other implementation is needed to resend the bit sequence? – Vishwajeet Vishu Jun 20 '17 at 09:52
  • Your chart is misleading. The UART transmits the value low-order bit first. The parity bit is transmitted after the most-significant bit (i.e. the parity bit is more "significant" than the MSB). So the parity bit should be placed at the left side of the bit value, since the convention is (left to right) most-significant to least-significant. – sawdust Jun 20 '17 at 22:41
  • *"why parity bit is used."* -- Back in the day of slow baudrates (e.g. 110 baud) a noise spike had highest probability of causing just a single bit error rather than a double or more bit errors. So detection of single-bit errors in characters was deemed sufficient. – sawdust Jun 20 '17 at 22:49
  • @sawdust: Actually, it was the best they could get in those days. Parity originates from hardware-only serial communication with special devices. There was no CPU/MCU used for a long time; until the single-chip microprocessor popped up in the 70s, that was just as efficient. With the slow bitrate it was _acceptable_, but even at 9600 b/s it became problematic; at higher bitrates it has definitively been a no-go for more than two decades, even on 8-bit MCUs. It is less effective in terms of reliability **and** overhead compared to e.g. a simple CRC16, too. – too honest for this site Jun 24 '17 at 02:48
  • @Olaf -- No need for the history lesson. I've been in the industry long enough to have used a real TTY and paper tape. – sawdust Jun 24 '17 at 07:38