
Using Chebyshev polynomials, we can compute sin(2*Pi/n) exactly with the CGAL and CORE libraries, as in the following piece of code:

#include <CGAL/CORE_Expr.h>
#include <CGAL/Polynomial.h>
#include <CGAL/number_utils.h>
#include <utility>

// Assumed definitions (not shown in the original snippet): AA is an exact
// algebraic number type and Polynomial a univariate polynomial over it.
typedef CORE::Expr AA;
typedef CGAL::Polynomial<AA> Polynomial;

// return sin(theta) and cos(theta) for theta = 2pi/n
static std::pair<AA, AA> sin_cos(unsigned short n) {
    // We actually use -x instead of x since root_of will give the k-th
    // smallest root but we want the second largest one without counting.
    Polynomial x(CGAL::shift(Polynomial(-1), 1));
    Polynomial twox(2*x);
    Polynomial a(1), b(x);
    // Chebyshev recurrence T_i = 2x*T_{i-1} - T_{i-2}, built in the variable -x.
    for (unsigned short i = 2; i <= n; ++i) {
        Polynomial c = twox*b - a;
        a = b;
        b = c;
    }
    // Roots of T_n - 1 are cos(2*k*pi/n); in -x the second smallest root is -cos(2*pi/n).
    a = b - 1;
    AA cos = -CGAL::root_of(2, a.begin(), a.end());
    AA sin = CGAL::sqrt(AA(1) - cos*cos);
    return std::make_pair(sin, cos);
}
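
A minimal driver for a sanity check might look like this (with AA as CORE::Expr, per the typedefs above; for n = 8 both values should come out near sqrt(2)/2 ≈ 0.7071):

#include <iostream>

int main() {
    std::pair<AA, AA> sc = sin_cos(8);
    std::cout << "sin(2*pi/8) ~ " << CGAL::to_double(sc.first)  << "\n"
              << "cos(2*pi/8) ~ " << CGAL::to_double(sc.second) << std::endl;
    return 0;
}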

But if I want to compute sin(2*m*Pi/n) exactly, where m and n are integers, what is the formula for the polynomial that I should use? Thanks.

Weisheng
    I don't think floating point hardware can ever compute anything involving an irrational number exactly... – twalberg Apr 15 '14 at 17:59
  • Depends on the definition of "exactly", but you miss a point: CORE expressions are based on arbitrary precision interval arithmetic combined with an expression tree, which stores the "definition" of the number based on rational inputs. This allows arbitrary refinement of at least algebraic numbers, and it is indeed even possible to compare two such numbers (in particular, decide whether they are equal). – akobel Apr 15 '14 at 19:29
  • @perpeduumimmobile: So what is sought is not *computation*, but *simplification* in some algebraic sense? – Ben Voigt Apr 17 '14 at 06:06
  • @BenVoigt: The representation of an algebraic real x boils down to a defining polynomial f with f(x) = 0 (usually a minimal polynomial or at least a square-free polynomial) and an interval I=(a,b) such that x is the one and only root of f in I. In the expression tree, those can be stacked (e.g., f could have algebraic coefficients, not just rational ones), but this increases the complexity. Hence, we want to find an "easy" algebraic description of the polynomial and compute the isolating interval. If you want to save computations (like not going over resultants), it's best to find a simple f. – akobel Apr 17 '14 at 09:48

1 Answer


(Partial solution.)

This is essentially computing the real and imaginary parts of the roots of unity as algebraic numbers. Let's denote w(m) = exp(2*pi*I*m/n). Then w(m) itself is a complex root of En(x) = x^n - 1.

You need to find a defining polynomial of Re(w(m)). Resultants are a tool to find such a polynomial: 2*Re(w(m)) is a root of Res(En(x-y), En(y); y).
For an explanation of why this is the case: note that 2*Re(w(m)) = w(m) + conj(w(m)), and that the complex roots of En come in conjugate pairs; hence conj(w(m)) is also a root of En. Now, loosely speaking, the En(y) part "constrains" y to be any (complex) root of En, and combining this with the first argument allows x to take any complex value such that x-y is a root of En as well. Hence, a possible assignment is y = conj(w(m)) and x-y = w(m), hence x = w(m) + conj(w(m)) = 2*Re(w(m)).
CGAL can compute resultants of multivariate polynomials, so you can compute this resultant, and you simply have to pick the correct real root. (The largest one will obviously be 2*Re(w(0)) = 2, the smallest one is 2*Re(w(floor(n/2))).)
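
If you want to experiment, here is a rough, untested sketch of how the resultant could be set up. I'm assuming the Shift and Resultant functors of CGAL::Polynomial_traits_d behave as documented in the Polynomial package; extracting the correct real root afterwards (e.g. with an algebraic kernel, or the root_of approach from your question) is omitted.

#include <CGAL/Polynomial.h>
#include <CGAL/Polynomial_traits_d.h>
#include <CGAL/Polynomial_type_generator.h>
#include <iostream>

// Bivariate integer polynomials; variable 0 is x, variable 1 (outermost) is y.
typedef CGAL::Polynomial_type_generator<int, 2>::Type Poly_2;
typedef CGAL::Polynomial_traits_d<Poly_2>             PT_2;
typedef PT_2::Coefficient_type                        Poly_1; // univariate in x

// Res_y( En(x - y), En(y) ) with En(t) = t^n - 1; its real roots
// contain all values 2*Re(w(m)).
static Poly_1 two_cos_resultant(int n) {
    PT_2::Shift shift;                        // multiply by a power of a chosen variable
    Poly_2 x = shift(Poly_2(1), 1, 0);        // the variable x
    Poly_2 y = shift(Poly_2(1), 1, 1);        // the variable y
    Poly_2 one(1);

    Poly_2 F = one;                           // F = (x - y)^n - 1
    for (int i = 0; i < n; ++i) F *= (x - y);
    F -= one;

    Poly_2 G = shift(one, n, 1) - one;        // G = y^n - 1

    PT_2::Resultant resultant;                // eliminates the outermost variable, i.e. y
    return resultant(F, G);
}

int main() {
    // CGAL::IO::set_pretty_mode(std::cout); // optional, for nicer output
    std::cout << two_cos_resultant(5) << std::endl; // pick its real roots afterwards
    return 0;
}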

Unfortunately, the resultant has a high complexity (degree n^2), and resultant computation will not be the fastest operation you've ever seen. Also, you'll pay for dense polynomial arithmetic although your instances are very sparse and structured. YMMV; I have no clue about your use case or whether you need higher degrees.
However, I did a few tests in a computer algebra system, and I found that the resultant splits into factors of more reasonable size, and that all its real roots actually belong to a much simpler polynomial of degree floor(n/2)+1 only. (No proof, just an observation.)
I don't know of a direct formula to write down this factor, and I don't want to speculate about it. But maybe some people at mathoverflow or math.stackexchange can help?

EDIT: Here is a guess for at least a recursive formula.
I write s(n,x) for the significant factor of the resultant polynomial containing all real roots but 0. This means that s(n,x) has all values 2*Re(w(m)) for m != n/4, 3*n/4 as roots.

s(0,x) = 0
s(1,x) = x - 2
s(2,x) = x^2 - 4
s(3,x) = x^2 - x - 2
s(4,x) = x^2 - 4
s(5,x) = x^3 - x^2 - 3*x + 2
s(6,x) = x^4 - 5*x^2 + 4
s(7,x) = x^4 - x^3 - 4*x^2 + 3*x + 2
s(8,x) = x^4 - 6*x^2 + 8

s(n,x) = (x^2-2)*s(n-4,x) - s(n-8,x)

Waiting for a proof...
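
In the meantime, here is a small self-contained sketch of the guessed recurrence, just to make it easy to tabulate s(n, x). It uses plain coefficient vectors and ad-hoc helper names, nothing CGAL-specific.

#include <iostream>
#include <vector>

// Coefficient vector, index = degree; e.g. {-2, 1} represents x - 2.
typedef std::vector<long> Coeffs;

static Coeffs subtract(Coeffs a, const Coeffs& b) {
    if (a.size() < b.size()) a.resize(b.size(), 0);
    for (std::size_t i = 0; i < b.size(); ++i) a[i] -= b[i];
    return a;
}

// Multiply a polynomial by (x^2 - 2).
static Coeffs times_x2_minus_2(const Coeffs& a) {
    Coeffs r(a.size() + 2, 0);
    for (std::size_t i = 0; i < a.size(); ++i) {
        r[i + 2] += a[i];
        r[i]     -= 2 * a[i];
    }
    return r;
}

// s(n, x) via the guess s(n) = (x^2 - 2) * s(n - 4) - s(n - 8),
// with the explicit polynomials listed above as base cases.
static Coeffs s(unsigned n) {
    static const Coeffs base[8] = {
        {},                 // s(0) = 0
        {-2, 1},            // s(1) = x - 2
        {-4, 0, 1},         // s(2) = x^2 - 4
        {-2, -1, 1},        // s(3) = x^2 - x - 2
        {-4, 0, 1},         // s(4) = x^2 - 4
        {2, -3, -1, 1},     // s(5) = x^3 - x^2 - 3x + 2
        {4, 0, -5, 0, 1},   // s(6) = x^4 - 5x^2 + 4
        {2, 3, -4, -1, 1}   // s(7) = x^4 - x^3 - 4x^2 + 3x + 2
    };
    if (n < 8) return base[n];
    return subtract(times_x2_minus_2(s(n - 4)), s(n - 8));
}

int main() {
    // Coefficients of s(8, x), lowest degree first; expect "8 0 -6 0 1",
    // i.e. x^4 - 6*x^2 + 8 as in the list above.
    Coeffs p = s(8);
    for (std::size_t i = 0; i < p.size(); ++i)
        std::cout << p[i] << (i + 1 < p.size() ? ' ' : '\n');
    return 0;
}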

akobel
  • Possible hint: can one exploit the addition theorem for sine and cosine? – akobel Apr 15 '14 at 18:42
  • Also see http://en.wikipedia.org/wiki/Chebyshev_polynomials#Roots_and_extrema ... It looks like I reinvented Chebyshev polynomials without a proof... – akobel Apr 15 '14 at 19:14