
I have a system with a small number of particles (4-10) at fixed locations in space, and a single target location. I would like to assign weights to each particle so that the weighted average of the particle locations is as close as possible to the target. The weights need to be assigned consistently in cases where multiple solutions are possible. For example, if I have 3 particles at [1,0,0], [-1,0,0], and [0,0,0], and my target is [0,0,0], there are three possible solutions: weights of 0.333,0.333,0.333 or 0,0,1 or 0.5,0.5,0. The second option seems most intuitive, but it does not really matter which solution is chosen as long as it is chosen consistently. Also, I am mostly interested in cases where an exact solution is not possible, but the chosen weights minimize the error. What is the most efficient algorithm to compute these weights?

EDIT: to make this clearer I have created a visual of the 2D case. In this example, there are 5 fixed positions and 1 target position. Currently I am using a clunky naive approach: starting with the average of all 5 (weights = 0.2,0.2,0.2,0.2,0.2) and then iteratively adjusting these weights to see if the solution improves, gradually "walking" towards the target. This can take hundreds of steps. I need to process millions or even billions of target positions, so I am looking for a more direct analytical approach to the solution.

billTavis
  • Maybe I did not understand the problem completely, but it just seems to be a set of linear equations you have to solve. – Henry Jul 25 '20 at 17:09
  • @Henry yes that is the goal, to get a solution in terms of equations so that I can solve for the answer quickly and accurately. But I do not know how to state this in terms of solvable equations. Any search ideas would be much appreciated. I have searched for "least squares" but all I can find is linear regression which does not seem to be the same thing, because for that each term has its own error but in this case there is only one error value, for the overall average. – billTavis Jul 26 '20 at 22:24

1 Answer


Simple linear equations can be set up for the problem (showing the 3D case):

sum(xi * wi) = xt
sum(yi * wi) = yt
sum(zi * wi) = zt

You did not mention it, but it seems there is also a constraint that the weights sum up to 1. In this case just add a further equation:

sum(wi) = 1

If there are N points, we now have a system of 3 (or 4) equations with N unknowns (the wi). How to solve such a system is well known from linear algebra. There may be 0, 1, or infinitely many solutions.
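As a minimal sketch (assuming Python with NumPy, and using the 3-particle example from the question), the augmented system can be assembled and solved with `np.linalg.lstsq`, which returns the minimum-norm least-squares solution; that makes the chosen weights deterministic when infinitely many solutions exist:

```python
import numpy as np

# Particle positions from the question's example, one per row.
points = np.array([[ 1.0, 0.0, 0.0],
                   [-1.0, 0.0, 0.0],
                   [ 0.0, 0.0, 0.0]])

# Rows of A: x, y, z coordinates, plus a row of ones for sum(wi) = 1.
A = np.vstack([points.T, np.ones(len(points))])   # shape (4, N)

target = np.array([0.0, 0.0, 0.0])
b = np.append(target, 1.0)                        # right-hand side (xt, yt, zt, 1)

# lstsq returns the minimum-norm least-squares solution, so when the
# system is underdetermined the same weights are picked every time.
w, *_ = np.linalg.lstsq(A, b, rcond=None)
print(w)   # [0.333... 0.333... 0.333...]
```

For this example the minimum-norm rule picks the uniform weights 1/3 each, one of the three solutions listed in the question, and it will pick them consistently.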

In case there is no solution, you can instead solve for the normal projection of the target point onto the sub-space (which may be a plane, line, or point) spanned by the given points.

If you additionally want the weights to be greater than or equal to 0, it gets a bit more interesting. My feeling is that an exact solution will always be possible if the target point is in the convex hull of the given points.
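For the nonnegative case, one option (my own suggestion, not part of the answer above, and assuming SciPy is available) is nonnegative least squares via `scipy.optimize.nnls` applied to the same augmented system:

```python
import numpy as np
from scipy.optimize import nnls

points = np.array([[ 1.0, 0.0, 0.0],
                   [-1.0, 0.0, 0.0],
                   [ 0.0, 0.0, 0.0]])
A = np.vstack([points.T, np.ones(len(points))])   # coordinates plus sum-to-1 row
b = np.array([0.0, 0.0, 0.0, 1.0])

# nnls minimizes ||A w - b|| subject to w >= 0; note the sum-to-1 row is
# enforced only in the least-squares sense, not as a hard constraint.
w, residual = nnls(A, b)
```

When the target lies inside the convex hull of the particles, the residual is zero, so the weights are exact and sum to 1.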

Henry
  • thanks for the detailed answer. After doing some research it seems that I can put this in terms of Ax = b, where A is a 3xN matrix with the point positions, b is the target position, and x is the weights to be solved for. Because A is not square I can solve it with x = (A.T*A)^-1 * A.T*b correct? For N = 10 that would require hundreds of calculations for every target position, so although this would give me a direct analytical solution I'm not sure how efficient it would be in practice... maybe you're onto something with the convex hull, and a geometric approach would be better – billTavis Jul 27 '20 at 16:37
  • Instead of calculating inverses, just do Gaussian elimination to bring the matrix to echelon form. Also note that you only have to do that once when you solve the same system for different right-hand sides. The homogeneous solution is also the same and has to be calculated just once. So I don't think the calculations will take long. – Henry Jul 28 '20 at 06:15
  • thank you again. I'm not familiar with that method but looking on wikipedia it seems that the right hand side changes with every step, so I'm not sure what you mean by saying I only have to do it once. https://en.wikipedia.org/wiki/Gaussian_elimination Are you suggesting that I track the operations which occur on the right hand values? And then to solve with a new set of right hand values, I apply those stored operations and then use back substitution with the already modified matrix? How does the homogeneous solution come into play? – billTavis Jul 28 '20 at 21:30
  • Yes to the first part. The elimination steps need to be repeated for the new right-hand side. The homogeneous solution is useful in the case where there are infinitely many solutions: you get them all by adding one particular solution to all homogeneous solutions. – Henry Jul 29 '20 at 06:24
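The factor-once idea from this exchange can be sketched as follows (assuming NumPy; the random particle layout and batch size are made up for illustration). The pseudoinverse plays the role of the one-time elimination: it is computed once for the fixed particles, and every target afterwards costs only a small matrix product:

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.normal(size=(10, 3))                  # N = 10 fixed particles (made up)
A = np.vstack([points.T, np.ones(len(points))])    # (4, N) system matrix

# Factor once for the fixed particle layout.
A_pinv = np.linalg.pinv(A)                         # (N, 4)

targets = rng.normal(size=(100_000, 3))            # batch of target positions
B = np.hstack([targets, np.ones((len(targets), 1))])  # each row: (xt, yt, zt, 1)

# Solve the whole batch with a single matrix product.
W = B @ A_pinv.T                                   # one weight row per target
```

Each row `W[i]` is the minimum-norm least-squares weight vector for `targets[i]`, the same result `np.linalg.lstsq(A, ...)` would give per target, but amortized over the whole batch.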