Probably the wrong place to ask, but I will try.
I have to design a circuit that adds/subtracts floating-point numbers.
I tried to do it using sign-magnitude numbers in the IEEE 754 standard.
They are quite large, so I decided to start with something smaller just to prove the concept.
I found a few algorithms on the net for performing addition and subtraction of positive numbers.
Most look like this:
http://meseec.ce.rit.edu/eecc250-winter99/250-1-27-2000.pdf
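For two positive operands the flow in notes like these seems to boil down to align / add / normalize. Here is a toy-sized software sketch of that flow in C; the 4-bit exponent, 9-bit mantissa with a hidden leading 1, and the struct/function names are my own choices, not taken from the slides:

    /* Toy positive-only floating-point adder: 4-bit exponent, 9-bit mantissa
       that already contains the hidden leading 1 (bit 8). Widths and names
       are my own illustration, not from the linked slides. */
    #include <stdio.h>

    typedef struct { unsigned exp, man; } fp_t;

    static fp_t fp_add(fp_t a, fp_t b)
    {
        /* 1. Align: make 'a' the operand with the larger exponent, then
              shift the other mantissa right by the exponent difference. */
        if (a.exp < b.exp) { fp_t t = a; a = b; b = t; }
        b.man >>= (a.exp - b.exp);

        /* 2. Add the aligned mantissas. */
        fp_t r = { a.exp, a.man + b.man };

        /* 3. Normalize: if the sum carried past bit 8, shift right once
              and bump the exponent. */
        if (r.man & 0x200) { r.man >>= 1; r.exp += 1; }
        return r;
    }

    int main(void)
    {
        fp_t a = { 5, 0x1A0 };                       /* 1.1010000b * 2^5 = 52 */
        fp_t b = { 3, 0x180 };                       /* 1.1000000b * 2^3 = 12 */
        fp_t r = fp_add(a, b);
        printf("exp=%u man=0x%X\n", r.exp, r.man);   /* exp=6 man=0x100 -> 64 */
        return 0;
    }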
They do not explain what happens with the sign bit.
Now I'm very confused. According to what I've found on the net, there is no difference between performing:
A - B and A - (-B)
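(I think what they mean is something like this 4-bit two's-complement example: with A = 0010 (2) and B = 0011 (3), the invert-and-add-1 path gives A - B = 0010 + 1100 + 1 = 1111 = -1; with B = 1101 (-3) the very same path gives 0010 + 0010 + 1 = 0101 = +5, so the subtractor never looks at B's sign. But I don't see how that carries over to sign-magnitude mantissas.)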
Could someone help me with a link where the algorithm is explained in detail?
Thanks for all answers.
I've found this algebraic explanation useful: http://howardhuang.us/teaching/cs231/08-Subtraction.pdf
Currently my circuit performs A + B (disregarding the sign bit) and A - B, just like kfmfe04 wrote: I'm XORing B's input and adding 1, so I get the result in 2C.
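In other words, something like this (an 8-bit sketch of that add/sub stage; the width and name are just for illustration):

    /* One control bit XORs every bit of B and is also fed in as the adder's
       carry-in, so subtract computes A + ~B + 1, i.e. A - B in 2C. */
    static unsigned addsub8(unsigned a, unsigned b, unsigned sub)  /* sub: 0 = add, 1 = subtract */
    {
        unsigned b_x = b ^ (sub ? 0xFFu : 0x00u);  /* XOR stage on B's input */
        return (a + b_x + sub) & 0xFFu;            /* carry-in = sub supplies the +1 */
    }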
The second PDF suggests including the sign bit in the add/sub operation. I will try this in the morning.
Having spent so many hours exercising my brain, I feel a bit tired and can't think straight. Now I just wonder if I should change my circuit so that:
The add/sub toggle still XORs B [A + (-B)], but before that stage I also XOR the mantissas with their sign bits to convert them into 2C.
This way I could cover the case of subtracting negative numbers, (-A) - (-B).
Sounds too complicated, though.
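In case it makes the idea clearer, here is a minimal sketch of that conversion on plain 8-bit integers (no exponent alignment); the function names, widths, and values are my own illustration, not a reference design:

    /* Convert each sign-magnitude operand to 2C, run the usual subtraction,
       then convert the result back to sign-magnitude. */
    #include <stdio.h>

    static int to_2c(unsigned sm)            /* bit 7 = sign, bits 6..0 = magnitude */
    {
        unsigned sign = (sm >> 7) & 1;
        unsigned mag  = sm & 0x7F;
        return sign ? -(int)mag : (int)mag;  /* negate (invert + 1) when the sign bit is set */
    }

    static unsigned to_sm(int v)             /* back to sign-magnitude */
    {
        return v < 0 ? (0x80u | (unsigned)(-v)) : (unsigned)v;
    }

    int main(void)
    {
        unsigned a = 0x85, b = 0x83;                 /* -5 and -3 in sign-magnitude */
        unsigned r = to_sm(to_2c(a) - to_2c(b));     /* (-5) - (-3) = -2 */
        printf("result = 0x%02X\n", r);              /* 0x82 = -2 in sign-magnitude */
        return 0;
    }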