
We are given a closed shape, represented as a matrix of 0's and 1's. For an example, see:

[image: a shape example]

We can think of this image as a coordinate system. For simplicity, let the midpoint of the image be the origin, that is, the x=0, y=0 point, and let the x and y coordinates range from -1 to 1.

Our aim is to find a polynomial p(x, y) of degree n such that the set of points satisfying the inequality p(x, y) <= 0 approximates the given shape.
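To make the setup concrete, here is a minimal sketch (the function name, grid size, and coefficient layout are my own assumptions, not part of the question) of rasterizing {p(x, y) <= 0} over [-1, 1] × [-1, 1] into a 0/1 matrix, so a candidate polynomial can be compared pixel-by-pixel against the input matrix:

```python
import numpy as np

def rasterize(coeffs, size=64):
    """Binary matrix of the region {p(x, y) <= 0} on [-1, 1] x [-1, 1].

    coeffs[i, j] is the coefficient of x**i * y**j (one possible convention).
    """
    xs = np.linspace(-1, 1, size)
    X, Y = np.meshgrid(xs, xs)
    P = np.polynomial.polynomial.polyval2d(X, Y, coeffs)
    return (P <= 0).astype(np.uint8)

# Example: x^2 + y^2 - 0.5 <= 0, a disk of radius sqrt(0.5).
c = np.zeros((3, 3))
c[0, 0] = -0.5
c[2, 0] = 1.0  # x^2 term
c[0, 2] = 1.0  # y^2 term
disk = rasterize(c)
```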

I have tried two approaches so far, but I am not satisfied with the results.

First, I trained a convolutional neural network. I generated 10000 or so random polynomials, created their corresponding shapes, and used them as training data.

Second, I chose a random polynomial and greedily optimized its coefficients to minimize the number of non-overlapping pixels between the given shape and the shape produced by the polynomial.
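A minimal sketch of that greedy search (all names, the pixel grid, and the one-coefficient-at-a-time perturbation scheme are my assumptions, since the question gives no code):

```python
import numpy as np

def mismatch(coeffs, target, xs):
    """Number of pixels where {p <= 0} disagrees with the target 0/1 mask."""
    X, Y = np.meshgrid(xs, xs)
    approx = np.polynomial.polynomial.polyval2d(X, Y, coeffs) <= 0
    return int(np.count_nonzero(approx != target))

def greedy_fit(target, n, iters=500, step=0.1, seed=0):
    """Perturb one random coefficient at a time; keep changes that help."""
    rng = np.random.default_rng(seed)
    xs = np.linspace(-1, 1, target.shape[0])
    coeffs = rng.standard_normal((n + 1, n + 1))
    best = mismatch(coeffs, target, xs)
    for _ in range(iters):
        i, j = rng.integers(0, n + 1, size=2)
        delta = step * rng.standard_normal()
        coeffs[i, j] += delta
        err = mismatch(coeffs, target, xs)
        if err <= best:
            best = err
        else:
            coeffs[i, j] -= delta  # revert the unhelpful change
    return coeffs, best
```

A search like this tends to stall in local minima, which would explain the unsatisfying results.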

I am looking for an algorithm to solve this task. Thanks for any suggestions.

  • This problem is not a programming problem but a math one and should be posted on [math.stackexchange](https://math.stackexchange.com/). If the polynomial order is bigger than 5, then there is no general analytic formula and the problem needs to be solved numerically. One way is to use optimization strategies (e.g. in Python you can use scipy for that). Training a neural network seems like a hammer to solve such a problem. – Jérôme Richard Aug 22 '21 at 17:16
  • 3b1b's video about [fourier series](https://www.youtube.com/watch?v=r6sGWTCMz2k) might be interesting here. Also Mathworlds's [Heart Curves](https://mathworld.wolfram.com/HeartCurve.html). – JohanC Aug 22 '21 at 17:55
  • @JohanC [Heart Curves](https://mathworld.wolfram.com/HeartCurve.html) contains interesting polynomials, but the problem is finding a polynomial for *any* shape. So basically, in the end I will write a program taking an image file that outputs the coefficients of the approximating polynomial. As for the wonderful 3b1b video, unfortunately it is mostly irrelevant for this problem, because we don't want trigonometric functions in our approximating equation, only polynomial terms like x^2*y^3 etc. – Metin Ersin Arıcan Aug 22 '21 at 18:05
  • By my estimate, a two-dimensional polynomial that can incorporate the above shape, assuming it is x-symmetric, would require between 11 and 35 coefficients. This is a lot of computation for either an NN or a Monte Carlo estimator to get right. I think that you might do better with something similar to a simulated annealing approach. – RBarryYoung Aug 22 '21 at 19:52

1 Answer


This is outside of my comfort zone, but if I had to solve this problem, I think I would try it like this:

An nth-degree polynomial in (x, y) (degree n in each variable) is a linear combination of (n+1)^2 monomial terms. Generally, then, we can set a required value at (n+1)^2 points and find coefficients that satisfy those requirements.

If you have a good approximation to your required shape, then its outline (the contour where p(x,y)=0) will not be entirely inside or outside. It will weave in and out, and so there will be points where the outline crosses the desired outline.

The tendency is that, where neighboring crossing points are close together, the maximum error between those points is smaller, and this leads to the following procedure:

  1. Find a point in the 'middle' of your shape. Call it (cx,cy), and then set p(cx,cy) = -1 (negative, so that the interior lies in the region p <= 0).
  2. Pick (n+1)^2 - 1 points on your desired boundary. For each such point (bx[i],by[i]), set p(bx[i],by[i]) = 0. Here we explicitly choose the points where the approximate outline crosses the desired outline.
  3. Determine the polynomial that satisfies the requirements we set.
  4. Evaluate the maximum error between crossing points in, say, a clockwise direction, and then move each point a little bit clockwise or counter-clockwise so that it moves toward the maximum error. Each point should be moved a distance proportional to the difference in errors on either side.
  5. Go back to step 3 and iterate until the maximum error stops improving.

The idea here is that we move the boundary crossing points closer together where the error is largest, in order to make that large error small.
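Steps 1–3 amount to solving a small linear system, with one row of monomial values per interpolation point. A minimal sketch (the function names, the least-squares solver, and the interior value of -1 are my choices, not spelled out in the answer):

```python
import numpy as np

def fit_polynomial(n, interior, boundary):
    """Find coeffs c[i, j] of p(x, y) = sum c[i, j] x^i y^j such that
    p(interior) = -1 and p(b) = 0 at every boundary point b (steps 1-3).

    The interior value -1 makes the inside satisfy p <= 0; only the sign
    convention differs from pinning it to a positive value.
    """
    pts = [interior] + list(boundary)
    # One row per interpolation point, one column per monomial x^i * y^j.
    A = np.array([[x**i * y**j for i in range(n + 1) for j in range(n + 1)]
                  for x, y in pts])
    rhs = np.array([-1.0] + [0.0] * len(boundary))
    coeffs, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return coeffs.reshape(n + 1, n + 1)

# Example: n = 1 gives (1+1)^2 = 4 coefficients, so pick 3 boundary points
# (here, arbitrary points on a circle of radius 0.5).
boundary = [(0.5 * np.cos(t), 0.5 * np.sin(t)) for t in (0.5, 2.0, 4.5)]
c = fit_polynomial(1, (0.0, 0.0), boundary)
```

Step 4 would then nudge the boundary points along the outline and re-solve this system each iteration.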

This is supposed to work kind of like the Remez exchange algorithm for approximating functions, which works very well, but I'm not sure that the changes I made for your specific use case will hold up. Worth a try, I think.

Matt Timmermans