
I have a financial application that deals with "percentages" quite frequently. We are using "decimal" types to avoid rounding errors. My question is:

If I have a quantity representing 76%, is it better to store it as 0.76 or 76?

Furthermore, what are the advantages and/or disadvantages from a code maintainability point of view? Are there conventions for this?

jdg
  • I think it depends on the language and/or platform. Unless you state what those are, it's going to be difficult to answer your question. – Mark Seemann Mar 28 '14 at 20:08
  • This question is more suitable for http://programmers.stackexchange.com/ – Selcuk Mar 28 '14 at 20:08
  • @MarkSeemann Here is some context: we are programming a web-application in python/django. The web-application uses MySQL as a database. That said, I don't see why this really depends much on the language or platform... – jdg Mar 28 '14 at 20:15
  • @Selcuk that does seem like a better place to post this, sorry :( Is there a way to move it over? – jdg Mar 28 '14 at 20:17
  • @jdg Well, if you were programming in JavaScript, for example, IIRC, there's only a single numeric data type, so there would be no difference between `0.76` and `76`. – Mark Seemann Mar 28 '14 at 20:19
  • @MarkSeemann What do you mean there is no difference? 76 !== 0.76. I guess, as mentioned, I am not worried about rounding errors. It's more of a conceptual question: does it make more "sense" to store numbers as a fraction or *100? My guess is the former, as multiplying by 100 is really a display issue. – jdg Mar 28 '14 at 20:21
  • Sorry, I meant that there would be no difference from a memory usage perspective. `0.76` and `76` would take up the same number of bytes of memory... Obviously, I'm not implying that `0.76` === `76` :) – Mark Seemann Mar 28 '14 at 20:23
  • @MarkSeemann Oh yeah, of course. By the way, sorry if I came off as a jerk with that comment. But do you see the dilemma now? It's a bigger-picture question; as Selcuk mentioned, I should have put it on the programmers site. – jdg Mar 28 '14 at 20:25

1 Answer


If percentage is only a small part of the problem domain, I would probably stick with a primitive number.

Per cent literally means per hundred, and a percentage is a decimal number; e.g. 76 % is equal to 0.76. Thus, in the absence of units-of-measure support in the programming language itself, I would represent a percentage as a decimal fraction.

This also means that you don't need to perform special arithmetic in order to calculate with percentages, but you will need to multiply by 100 if you need to display the number in percent.
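A minimal sketch of this convention in Python (the OP's language), using the `decimal` module the question already relies on; the variable names are illustrative:

```python
from decimal import Decimal

# Store the fraction, not the "per hundred" value.
rate = Decimal("0.76")      # 76 % kept as a decimal fraction

amount = Decimal("1500.00")
fee = amount * rate         # ordinary arithmetic, no special handling

# Multiply by 100 only at the display boundary.
print(f"{rate * 100}%")     # → 76.00%
print(fee)                  # → 1140.0000
```

Note that the `* 100` happens only in the presentation code; everything that computes with the rate uses the fraction directly.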

If percentage is a very central part of the problem domain, you should consider discarding Primitive Obsession and instead introduce a proper Value Object.

Still, even if you introduce a Percent Value Object that contains conversions to and from primitive numbers, you're still stuck with potential programmer errors, because in itself the number 0.9 could mean either 90 % or 0.9 %, depending on how you choose to interpret it.
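One way to mitigate that ambiguity is to give the Value Object named constructors, so the unit is explicit at every call site. A hypothetical sketch (class and method names are mine, not from any library):

```python
from decimal import Decimal


class Percent:
    """Hypothetical value object wrapping a percentage.

    Named constructors force callers to say which unit they mean,
    so 0.9 can never silently be read as both 90 % and 0.9 %.
    """

    def __init__(self, fraction: Decimal):
        self._fraction = fraction  # always stored internally as a fraction

    @classmethod
    def from_fraction(cls, value) -> "Percent":
        # Percent.from_fraction("0.9") means 90 %
        return cls(Decimal(str(value)))

    @classmethod
    def from_per_hundred(cls, value) -> "Percent":
        # Percent.from_per_hundred("0.9") means 0.9 %
        return cls(Decimal(str(value)) / 100)

    def of(self, amount: Decimal) -> Decimal:
        return amount * self._fraction

    def __str__(self) -> str:
        return f"{self._fraction * 100}%"


discount = Percent.from_per_hundred(76)
print(discount.of(Decimal("200")))  # → 152.00
```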

In the end, my best advice is to cover your code base with appropriate unit tests, so that you lock the conversion code down.
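Such lock-down tests might look like the following sketch, assuming a hypothetical `to_display`/`from_display` conversion pair; adapt the names to whatever conversion code your code base actually has:

```python
import unittest
from decimal import Decimal


def to_display(fraction: Decimal) -> Decimal:
    """Convert a stored fraction to its per-hundred display value."""
    return fraction * 100


def from_display(per_hundred: Decimal) -> Decimal:
    """Convert a per-hundred display value back to a fraction."""
    return per_hundred / 100


class PercentConversionTests(unittest.TestCase):
    def test_display_value_is_per_hundred(self):
        self.assertEqual(Decimal("76.00"), to_display(Decimal("0.76")))

    def test_76_percent_round_trips(self):
        self.assertEqual(Decimal("0.76"),
                         from_display(to_display(Decimal("0.76"))))


if __name__ == "__main__":
    unittest.main()
```

Once these pass, any future change that flips the convention (storing 76 instead of 0.76, say) fails loudly instead of corrupting calculations silently.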

Mark Seemann