A CPU represents a double in 8 bytes, divided into 1 sign bit, 11 bits for the exponent ("the range") and 52 bits for the mantissa ("the precision").
You have limited range and precision.
The C constant DBL_DIG in <float.h>
tells you that such a double can only represent 15 decimal digits precisely, not more. But this number is entirely dependent on your C library and CPU.
330.1500249000119 contains 16 significant digits, so it will be rounded to 15: 330.150024900012. The value actually stored, 330.15002490001189, is only one digit off, which is good. Normally you should expect the trailing 1.189 vs 1.2.
For the exact mathematics behind it, read David Goldberg, “What Every Computer Scientist Should Know About Floating-Point Arithmetic,” ACM Computing Surveys 23, 1 (March 1991), 5-48. It is worth reading if you are interested in the details, but it does require a background in computer science.
http://www.validlab.com/goldberg/paper.pdf
You can avoid this by using wider floating-point types, like long double or __float128, or by using a better CPU, like a Sparc64 or s390, which implement a 128-bit long double (__float128, 33 digits) natively in hardware.
Yes, using an UltraSparc/Niagara or an IBM S390 is culture.
The usual answer is: use long double, dude. That gives you two more bytes on Intel (18 digits), several more on powerpc (31 digits), and 33 on sparc64/s390.