
I have a dataset where every row has a % allocation split across three categories. I would like to add an "Unclassified" category, which simply accounts for the leftover % not allocated to the three provided categories.

Category A, B, and C are all datatype float.

SELECT
    [Category A],
    [Category B],
    [Category C],
    [Category A] + [Category B] + [Category C]       AS [CatTotal],
    1 - ([Category A] + [Category B] + [Category C]) AS [Unclassified]
FROM Dataset

Yields

<table>
  <tr>
    <th>Category A</th>
    <th>Category B</th>
    <th>Category C</th>
    <th>Category Total</th>
    <th>Unclassified (1-Category Total)</th>
  </tr>
  <tr>
    <td align = 'center'>0</td>
    <td align = 'center'>0.333333333333333</td>
    <td align = 'center'>0.666666666666667</td>
    <td align = 'center'>1</td>
    <td align = 'center'>2.22044604925031E-16</td>
  </tr>
</table>

What's strange is that if I recreate the exact values of Category A, B, and C in a temp table, also stored as float, then run the same code, I get 0 as expected. What is happening here?
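
For reference, here is a minimal sketch of that temp-table comparison (table and column names are hypothetical). The literals are the 15-digit values exactly as displayed in the result above; stored back into float columns they sum to exactly 1, which is why the same expression returns 0 there:

CREATE TABLE #Repro
(
    [Category A] float,
    [Category B] float,
    [Category C] float
);

-- re-enter the values as they are displayed in the result table
INSERT INTO #Repro VALUES (0, 0.333333333333333, 0.666666666666667);

SELECT
    [Category A] + [Category B] + [Category C]       AS [CatTotal],      -- 1
    1 - ([Category A] + [Category B] + [Category C]) AS [Unclassified]   -- 0 here, as described
FROM #Repro;

DROP TABLE #Repro;

The original dataset presumably produced its values by arithmetic (for example something like a division by 3), whose float results differ from the 15-digit display in the last binary bits, so the same expression comes out as roughly 2.22E-16 instead of 0.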

  • Maybe this helps: http://stackoverflow.com/questions/8414561/sql-server-float-data-type-calculation-vs-decimal – Juan Carlos Oropeza Jan 20 '17 at 20:36
  • I closed this as a duplicate... To be honest, there are quite a number of questions on this... The linked question points to a comprehensive article; read it for further insight... About your last sentence: what you see is just the *human readable representation* of a binary value. How that happens depends on the context; in many cases some implicit rounding is done to present a "prettier" number... – Shnugo Jan 20 '17 at 21:00
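
To illustrate what that comment describes, here is a small sketch (variable names are hypothetical; CONVERT style 3 requires SQL Server 2016+). It compares a value produced by arithmetic, assumed here to be a division by 3, with the hand-typed literal it is displayed as: the default 15-digit display hides the difference, while the lossless 17-digit conversion and the raw IEEE 754 bytes reveal it.

DECLARE @fromDivision float = 1.0 / 3.0,            -- assumption: the dataset value came from a division like this
        @fromLiteral  float = 0.333333333333333;    -- the value as displayed and re-typed into the temp table

SELECT
    CONVERT(varchar(30), @fromDivision, 3) AS DivisionFull17Digits,  -- lossless 17-digit representation
    CONVERT(varchar(30), @fromLiteral,  3) AS LiteralFull17Digits,   -- differs in the trailing digits
    CAST(@fromDivision AS binary(8))       AS DivisionBits,          -- raw IEEE 754 bytes
    CAST(@fromLiteral  AS binary(8))       AS LiteralBits;           -- differ in the low-order bits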

0 Answers