To start with, there is no such thing as "traditional floating point" unless you specify the exact tradition. Before IEEE 754, there were roughly 60 different floating-point implementations in hardware or in popular libraries, each with its own fixed format; you can see a nice survey, for example, here — and I'm not certain some implementations aren't lost. So, which one is "traditional"? ;) As @ChrisDodd pointed out, IEEE 754 can now be called "traditional" because it has been in wide use since the early 1980s and in nearly total use (except for forced legacy) since the 1990s. A whole generation of programmers and users has been born, grown up, studied at schools and universities, and started working without ever using anything but IEEE 754. (Disclaimer: I don't take into account the special formats used in machine learning, like the 16-bit bfloat16, which is 32-bit IEEE 754 with a cut-down mantissa width, or other non-general-computation applications.)
But I would consider your question in a broader context. The concrete example 0.1+0.2-0.3 is not special. We could just as well consider 0.3+1.1-1.4, 0.01+0.06-0.07, or millions of other cases; it took me less than a minute to find these. On the other hand, 0.01+0.02-0.03 is exactly zero using IEEE 754 double. Why? Because that is simply how the stars aligned: the individual roundings happened to cancel. Under another binary floating-point implementation it might behave the opposite way: 0.1+0.2-0.3==0 but 0.01+0.02-0.03!=0. Who knows?
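You can check these particular cases yourself. A quick sketch in Python, whose `float` is an IEEE 754 binary64 double on all mainstream platforms:

```python
# Each expression would be exactly zero in ideal decimal arithmetic.
# In IEEE 754 binary64, some are zero and some are not -- it depends
# only on how the individual roundings happen (or fail) to cancel.
cases = [
    ("0.1  + 0.2  - 0.3",  0.1 + 0.2 - 0.3),     # nonzero
    ("0.3  + 1.1  - 1.4",  0.3 + 1.1 - 1.4),     # nonzero
    ("0.01 + 0.06 - 0.07", 0.01 + 0.06 - 0.07),  # nonzero
    ("0.01 + 0.02 - 0.03", 0.01 + 0.02 - 0.03),  # exactly zero
]
for expr, value in cases:
    print(f"{expr} = {value!r}")
```

Which expressions come out as zero is not a pattern you can reason about casually; it falls out of the exact bit patterns and round-to-nearest-even at each step.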
The main thing is that, as other commenters already said, the issue is fundamentally unavoidable with binary floating point. Any such implementation will have cases where the limited mantissa length and rounding spoil equalities that would be exact in decimal. So, if your "traditional" means pre-computer-era calculation using paper, an abacus (of any type, including the wide versions), or an arithmometer — you are right, those "implementations" are free of this kind of error. If you mean anything else, please specify what.
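The root cause is easy to see directly: most decimal fractions have no finite binary expansion, so the stored value already differs from the written literal before any arithmetic happens. In Python, for instance, `Decimal(float)` shows the exact value of the nearest double:

```python
from decimal import Decimal

# 0.1 has an infinite binary expansion, so the nearest binary64
# double is only an approximation; Decimal(float) shows its exact value.
print(Decimal(0.1))   # slightly above 0.1
print(Decimal(0.3))   # slightly below 0.3
print((0.1).hex())    # the same approximation, in hexadecimal
```

Since 0.1 is stored slightly high and 0.3 slightly low, no amount of careful arithmetic afterwards can make `0.1 + 0.2 == 0.3` come out exact.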
NB: IEEE 754-2008 does define decimal formats, but practically nobody uses them. I haven't seen a hardware implementation outside IBM zSeries and pSeries. The problem is the domain: decimal fractions are typically needed in financial accounting and taxation. One could use decimal floating point there, but in practice that domain nearly everywhere requires fixed-point calculation, because floating point carries the risk of silent precision loss, which is unacceptable in money handling. IBM simply has its own peculiar customer base, which isn't true of nearly anyone else.
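If you want to experiment with decimal floating point without IBM hardware, Python ships a software implementation in its standard `decimal` module (it follows the General Decimal Arithmetic specification, with which the IEEE 754-2008 decimal formats are aligned):

```python
from decimal import Decimal

# Decimal literals like 0.1 are represented exactly,
# so the sum-and-subtract identity holds:
print(Decimal("0.1") + Decimal("0.2") - Decimal("0.3"))  # 0.0

# But it is still *floating* point: results are rounded to the
# context precision (28 significant digits by default), so silent
# precision loss remains possible -- which is why accounting
# usually insists on fixed point instead.
print(Decimal(1) / Decimal(3))
```

The first print illustrates why decimal formats are attractive; the second illustrates why they still don't satisfy the accounting domain on their own.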
So: start by determining your domain. If it is accounting/taxation, decimal fixed point is your best friend, and you'll never face problems like 0.1+0.2!=0.3. Otherwise, most probably, you'll be fine with IEEE 754 binary, and the same issue won't matter for you even if it happens every time.
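A minimal sketch of the fixed-point approach — keeping amounts as an integer number of cents, so addition and subtraction are exact integer arithmetic. The helper names here are mine, not from any standard library, and the parser handles only non-negative amounts:

```python
# Hypothetical helpers: represent money as integer cents.
def to_cents(s: str) -> int:
    """Parse a non-negative decimal string like '0.10' into integer cents."""
    units, _, frac = s.partition(".")
    return int(units) * 100 + int(frac.ljust(2, "0")[:2])

def cents_to_str(c: int) -> str:
    """Format integer cents back into a decimal string."""
    return f"{c // 100}.{c % 100:02d}"

# Exact by construction: 10 + 20 - 30 == 0 in plain integers.
total = to_cents("0.10") + to_cents("0.20") - to_cents("0.30")
print(cents_to_str(total))  # 0.00
```

Real accounting systems add rules for multiplication, division, and rounding of the results (e.g. banker's rounding of interest), but the core idea is the same: the representable values form an exact grid, and sums never drift off it.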