
I am trying to minimize a highly non-linear function of three unknown parameters a, b, and c0. I'm attempting to replicate some of the governing equations of a casino roulette ball in Python 3.

Here is the link to the research paper: http://www.dewtronics.com/tutorials/roulette/documents/Roulette_Physik.pdf

I will be referencing equations (35) and (40) in the paper. Basically, I take stopwatch lap measurements of the roulette ball spinning on the wheel. Each successive lap time increases because momentum is lost to non-conservative friction forces. Then I take these time measurements and fit equation (35) to them using the Levenberg-Marquardt least-squares method of equation (40).

My question is twofold: (1) I'm using scipy.optimize.least_squares() with method='lm', and I'm not sure how to write the objective function! Right now I have the function written exactly as it appears in the paper:

import numpy as np
from scipy.optimize import least_squares

def fall_time(k, a, b, c0):
    F = (1 / (a * b)) * (c0 - np.arcsinh(c0) * np.exp(a * k * 2 * np.pi))
    return F

def parameter_estimation_function(x0, tk):
    a = x0[0]
    b = x0[1]
    c0 = x0[2]

    S = 0
    for i, t in enumerate(tk):
        k = i + 1
        S += (t - fall_time(k, a, b, c0))**2

    return [S, 1, 1]

sol = least_squares(parameter_estimation_function, [0.1, 0.8, -0.1],
                    args=(tk1,), method='lm', jac='2-point', max_nfev=2000)

print(sol)

Now, in the documentation examples, I never saw the objective function written the way I have it. In the documentation, the objective function always returns the residuals, not the square of the residuals. Additionally, the documentation examples never use the sum! So I'm wondering: are the sum and the square handled automatically under the hood of least_squares()?

(2) Perhaps my second question is a result of my failure to understand how to write the objective function. But anyhow, I'm having trouble getting the algorithm to converge on the minimum. I know this is because the Levenberg-Marquardt algorithm is "greedy" and stops near the nearest local minimum, but I figured that I would at least be able to converge on about the same result given different initial guesses. With slight alterations in the initial guess, I'm getting parameter results with different signs. Additionally, I've yet to find a combination of initial guesses that allows the algorithm to converge! It always times out before it finds the solution. I've even increased the number of function evaluations to 10,000 to see if it would converge. To no avail!

Perhaps somebody could shed some light on my mistakes here! I'm still relatively new to Python and the SciPy library!

Here is some sample data for tk that I've measured myself from the video here: https://www.youtube.com/watch?v=0Zj_9ypBnzg

tk = [0.52,1.28,2.04,3.17,4.53,6.22]
tk1 = [0.51,1.4,2.09,3,4.42,6.17]
tk2 = [0.63,1.35,2.19,3.02,4.57,6.29]
tk3 = [0.63,1.39,2.23,3.28,4.70,6.32]
tk4 = [0.57,1.4,2.1,3.06,4.53,6.17]

Thanks

ddm-j
  • Have you tried providing the Jacobian of the problem as a callable, i.e. a function, instead of using a numeric approximation to it? – Hilton Fernandes Jul 10 '23 at 01:10

1 Answer


1) Yes, as you suspected, the sum and the square of the residuals are handled automatically under the hood: the objective function you pass to least_squares() should just return the vector of residuals (one per data point), and the solver squares and sums them internally to form the cost it minimizes.
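
To make this concrete, here is a minimal sketch (it uses the reparameterized b_inv model from the code further down and the first data sample from the question; the function name `residuals` is just illustrative): the callable returns one residual per measurement, and least_squares() itself forms the scalar cost 0.5 * sum(residuals**2), exposed as sol.cost alongside the residual vector sol.fun.

import numpy as np
from scipy.optimize import least_squares

def residuals(x, tk):
    a, b_inv, c0 = x
    k = np.arange(1, len(tk) + 1)
    # one residual per measured lap time -- no squaring, no summing here
    return np.asarray(tk) - (b_inv / a) * (c0 - np.arcsinh(c0) * np.exp(a * k * 2 * np.pi))

tk = [0.52, 1.28, 2.04, 3.17, 4.53, 6.22]
sol = least_squares(residuals, [0.1, 1.25, -0.1], args=(tk,), method='lm')

# the squaring and summing happen inside least_squares:
print(np.isclose(sol.cost, 0.5 * np.sum(sol.fun ** 2)))  # True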

2) Hard to say, since I'm not deeply familiar with the problem (e.g., how many local minima exist, what constitutes a 'reasonable' result, etc.). I may investigate more later.

But for kicks I fiddled with some of the values to see what would happen. For example, you can just replace the 1/b constant with a standalone variable b_inv, and this seemed to stabilize the results quite a bit. Here's the code I used to check results. (Note that I rewrote the objective function for brevity. It simply leverages the element-wise operations of numpy arrays, without changing the overall result.)

import numpy as np
from scipy.optimize import least_squares

def fall_time(k, a, b_inv, c0):
    # equation (35), with the 1/b factor replaced by the standalone variable b_inv
    return (b_inv / a) * (c0 - np.arcsinh(c0) * np.exp(a * k * 2 * np.pi))

def parameter_estimation_function(x, tk):
    # returns one residual per lap time; least_squares squares and sums them itself
    return np.asarray(tk) - fall_time(k=np.arange(1, len(tk) + 1), a=x[0], b_inv=x[1], c0=x[2])

tk_samples = [
    [0.52, 1.28, 2.04, 3.17, 4.53, 6.22],
    [0.51, 1.4, 2.09, 3, 4.42, 6.17],
    [0.63, 1.35, 2.19, 3.02, 4.57, 6.29],
    [0.63, 1.39, 2.23, 3.28, 4.70, 6.32],
    [0.57, 1.4, 2.1, 3.06, 4.53, 6.17],
]

for i in range(len(tk_samples)):
    sol = least_squares(parameter_estimation_function, [0.1, 1.25, -0.1],
                        args=(tk_samples[i],), method='lm', jac='2-point', max_nfev=2000)
    print(sol.x)

with console output:

[ 0.03621789  0.64201913 -0.12072879]
[  3.59319972e-02   1.17129458e+01  -6.53358716e-03]
[  3.55516005e-02   1.48491493e+01  -5.31098257e-03]
[  3.18068316e-02   1.11828091e+01  -7.75329834e-03]
[  3.43920725e-02   1.25160378e+01  -6.36307506e-03]
CrepeGoat
  • Fwiw, `least_squares( ... method='trf', ftol=1e-4, verbose=2 )` prints a line at each iteration. (Unfortunately `method='lm'` has no `verbose`.) – denis Sep 14 '18 at 10:19
  • @CrepeGoat Apologies, but I'm finding it difficult to wrap my head around this. Can I clarify: `least_squares` will somehow minimize the SSR by trying different values after being provided with initial values, and will optimize the first parameter (in this case, a list of parameters) passed to `parameter_estimation_function`? If I have other parameters that I need to pass to `parameter_estimation_function` that do not require optimization but depend on the current value of `i`, how can I pass those parameters in to `parameter_estimation_function`? Thank you! – AiRiFiEd Feb 23 '19 at 01:05
  • @AiRiFiEd it seems like you’re understanding it correctly. (To be clear, you only need the ‘i’ if you’re optimizing several times with different parameters; if you only need to run one optimization, then you don’t need the loop.) To pass in a parameter that depends on ‘i’ (pardon the ticks, on my tablet), you do something similar to the ‘tk_samples[i]’ in the above example: add a parameter to your objective function ‘parameter_estimation_function’ (in this case, this was ‘tk’), and add the value to the ‘least_squares’ function call in the ‘args’ tuple. Does that help? – CrepeGoat Feb 23 '19 at 01:55
  • I feel like I merely repeated what you just said, I apologize. If that wasn’t helpful, can you rephrase what you’re hung up on? – CrepeGoat Feb 23 '19 at 01:59
  • @CrepeGoat Thanks for the prompt reply, and sorry for being unclear! I can't post my question as a separate post as it would be a duplicate... appreciate your help here. Assuming for each set of `tk_samples` there is another set of values, say `[1,2]` for `[0.52,1.28,2.04,3.17,4.53,6.22]`, and `[1,2]` are inputs to `fall_time` - `fall_time(k,a,b_inv,c0, new_param1, new_param2)` - but I only wish to optimize with respect to `k`, `a`, `b_inv`, and `c0`, how should I pass `[1,2]` into `least_squares`? – AiRiFiEd Feb 23 '19 at 03:46
  • @AiRiFiEd tbh I would probably just post it as a new question anyway, linking to this one for convenience. Your problem is 95% similar to this one, but your question is about the extra 5% difference. If it gets flagged as a duplicate, that's fine right? Just my opinion. – CrepeGoat Feb 23 '19 at 14:45
  • @AiRiFiEd anyway, so for each optimization problem you have a `tk_samples` list and two other parameters like `p1, p2`? I would replace the `tk_samples` list of lists with an `opt_args` list of tuples, where each tuple has the parameters needed for each individual optimization. E.g., `opt_args = [([0.52,1.28,2.04,3.17,4.53,6.22], 1, 2), ...]`, and then later on `sol = least_squares(..., args=opt_args[i], ...)`. (Note that the ellipses just signify that other stuff is there that I don't want to type out explicitly.) Let me know if that works for you. A short sketch of this pattern follows after this comment thread. – CrepeGoat Feb 23 '19 at 14:49
  • @CrepeGoat Apologies, I could only implement the code at work. I have some other bugs, but I believe your reply solves my original problem. Thanks a lot for your help! – AiRiFiEd Feb 26 '19 at 01:07
  • @AiRiFiEd glad to have helped :) – CrepeGoat Feb 26 '19 at 02:03
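
Here is a minimal sketch of the `opt_args` pattern described in the comments above, under the assumptions from that thread (the extra inputs `p1` and `p2` are hypothetical placeholders; they are received by the objective function but not used in the model here, whereas in a real problem they would feed into `fall_time`):

import numpy as np
from scipy.optimize import least_squares

def fall_time(k, a, b_inv, c0):
    return (b_inv / a) * (c0 - np.arcsinh(c0) * np.exp(a * k * 2 * np.pi))

def parameter_estimation_function(x, tk, p1, p2):
    # x holds the parameters being optimized; tk, p1 and p2 arrive via `args`
    # and stay fixed during the fit (p1 and p2 are unused placeholders here)
    return np.asarray(tk) - fall_time(np.arange(1, len(tk) + 1), x[0], x[1], x[2])

# one tuple of fixed arguments per optimization run: (tk, p1, p2)
opt_args = [
    ([0.52, 1.28, 2.04, 3.17, 4.53, 6.22], 1, 2),
    ([0.51, 1.4, 2.09, 3, 4.42, 6.17], 1, 2),
]

for i in range(len(opt_args)):
    sol = least_squares(parameter_estimation_function, [0.1, 1.25, -0.1],
                        args=opt_args[i], method='lm', jac='2-point', max_nfev=2000)
    print(sol.x)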