
So I am working on an expression evaluator as an internal component of a work-related project, but I am seeing some weird behavior in the output of floating-point math...

The evaluator takes in a string:

e.evaluate("99989.3 + 2346.4");
//should be 102335.7
//result is 102336

//this function is what returns the result as a string
template <class TYPE> std::string Str( const TYPE & t ) {
    //at this point t is equal to 102335.7
    std::ostringstream os;

    os << t;
    // at this point os.str() == "102336"
    return os.str();
}
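
Stripped down to a minimal standalone program (evaluator removed), the following sketch should show the same behavior:

#include <iostream>
#include <sstream>

int main() {
    std::ostringstream os;
    os << 102335.7;                // the value the evaluator computes
    std::cout << os.str() << "\n"; // prints 102336, not 102335.7
}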

It appears almost as if any floating-point number above e+004 in scientific notation is being rounded to the nearest whole number. Can anyone explain why this is happening and how I might overcome this issue?

asuppa
  • Might want to increase the `precision()`. – David G Aug 20 '14 at 15:13
  • @0x499602D2 I had tried to set precision to 5 decimals but it did not seem to help. Also, maybe you can clear this up: will that directly affect integer math, as this Str function is templated to handle all outputs? – asuppa Aug 20 '14 at 15:15

1 Answer


You can set the precision with std::setprecision, with a bit of help from std::fixed.
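
By default a stream prints floating-point values with only 6 significant digits, which is why 102335.7 (seven significant digits) comes out as 102336. Here is a minimal sketch of the Str helper from the question, assuming one digit after the decimal point is enough (adjust the precision to whatever the evaluator needs):

#include <iomanip>
#include <sstream>
#include <string>

template <class TYPE> std::string Str( const TYPE & t ) {
    std::ostringstream os;
    // std::fixed forces decimal notation; std::setprecision then means
    // "digits after the decimal point" rather than total significant digits
    os << std::fixed << std::setprecision(1) << t;
    return os.str();
}

// Str(102335.7) now returns "102335.7" instead of "102336"

Precision and std::fixed only affect floating-point output, so integer values passed through the same template are printed unchanged.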

Yochai Timmer