
Reading *Here be dragons: advances in problems you didn't even know you had*, I noticed that they compare the new algorithm with the one used in glibc's printf:

> Grisu3 is about 5 times faster than the algorithm used by printf in GNU libc

But at the same time I've failed to find any format specifier for printf which would automatically pick the best number of decimal places to print. All the ones I tried have strange defaults: 6 digits after the decimal point for %f, 2 after the point for %g, or 6 after the point for %e.

How do I actually make use of that algorithm implementation in glibc mentioned in the article? Is there really such an implementation in glibc, and is it discussed in any way by the Standard?

Adriano Repetti
Ruslan
  • Although plain C doesn't have this facility, C++ does have it since C++17: [`std::to_chars`](https://en.cppreference.com/w/cpp/utility/to_chars) function for floating-point types without format parameter generates the shortest representation (supported in GCC≥11, MSVC≥15.9). – Ruslan May 10 '21 at 13:48

2 Answers

5

This is the actual article. The blog post is referring to the results in section 7 (in other words, "they" are not comparing anything in the blog post; they are restating the information from the actual article, omitting crucial details):

[table of benchmark results from section 7 of the article]

Implementations of Dragon4 or Grisu3 can be found in implementations of modern programming languages that specify conversion to decimal in this "minimal number of decimal digits" fashion (I recommend you avoid calling it "perfect"). Java uses this type of conversion to decimal in some contexts, as does Ruby. C is not one of the languages that specify "minimal number of decimal digits" conversion to decimal, so there is no reason for a compiler or for a libc to provide an implementation of Dragon4 or Grisu3.

Pascal Cuoq
-1

There is no such thing as a "best number of decimal places", because floating-point numbers are not stored as decimal numbers, so you need to define what you mean by "best". If you want to print the numbers without any possible loss of information, C99 and later give you the format specifier %a (except for non-normalized floating-point numbers, where the behavior is unspecified).

The defaults from the C11 standard are 6 digits for %f and %e, and for %g it is:

> Let P equal the precision if nonzero, 6 if the precision is omitted, or 1 if the precision is zero. Then, if a conversion with style E would have an exponent of X:
> — if P > X ≥ −4, the conversion is with style f (or F) and precision P − (X + 1).
> — otherwise, the conversion is with style e (or E) and precision P − 1.

If you want to use that algorithm, implement your own function for it, or hope that glibc has implemented it in the past 5 years. Or rethink whether the performance of printing floating-point numbers is really a problem you have.

Art
    The reference to Dragon4 already defines what is meant by "best". Namely, 1) The original value can be recovered from the output by rounding 2) The output is the shortest possible 3) The output is correctly rounded. See [_How to Print Floating-Point Numbers Accurately_](https://lists.nongnu.org/archive/html/gcl-devel/2012-10/pdfkieTlklRzN.pdf) for more information. – Ruslan Jun 03 '15 at 11:11
  • The "best" criteria as exposed by @Ruslan are perfectly justified in some contexts, for example if these numbers are going to be used in read-eval-print loops. IMO it's a must for any language that wants to support such a REPL. – aka.nice Jun 03 '15 at 16:31