
I am trying to minimize a function of a vector of length 20, but I want to constrain the solution to be monotonic, i.e.

x[1] <= x[2] <= ... <= x[20]

I have tried to implement this in the following way, using the constraints argument of the routine:

cons = tuple([{'type':'ineq', 'fun': lambda x: x[i]- x[i-1]} for i in range(1, len(node_vals))])

res = sp.optimize.minimize(localisation, b, args=(d), constraints = cons) #optimize

However, the results I get are not monotonic, even when the initial guess b is; it seems that the optimizer is completely ignoring the constraints. What could be going wrong? I have also tried changing the constraint to x[i]**3 - x[i+1]**3 to make it "smoother", but that didn't help either. My objective function, localisation, is the integral of the solution to an eigenvalue problem whose parameters are defined beforehand:

def localisation(node_vals, domain): #calculate localisation for solutions with piecewise linear grading

        f = piecewise(node_vals, domain) #create piecewise linear function using given values at nodes
        #plt.plot(domain, f(domain))
        
        M = diff_matrix(f(domain)) #differentiation matrix created from piecewise linear function
        m = np.concatenate(([0], get_solutions(M)[1][:, 0], [0]))
        
        integral = num_int(domain, m)
        
        return integral
Lili FN

2 Answers


You didn't post a minimal reproducible example that we can run. However, did you try to specify which optimization algorithm SciPy should use? Something like this:

res = sp.optimize.minimize(localisation, b, args=(d), constraints = cons, method='SLSQP')
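
To illustrate, here is a minimal self-contained sketch of that call with SLSQP and monotonicity constraints. The objective and target below are toy stand-ins for localisation (which we can't run here), and the variable names are made up; note that the sketch binds the loop index with a default argument (i=i) so that each lambda keeps its own value of i.

import numpy as np
from scipy.optimize import minimize

n = 20
target = np.linspace(1.0, 0.0, n)  # made-up decreasing target, toy example only

def objective(x):
    # toy stand-in for localisation: pull x towards the decreasing target
    return np.sum((x - target) ** 2)

# one 'ineq' constraint per adjacent pair, meaning x[i] - x[i-1] >= 0;
# the i=i default argument gives each lambda its own copy of the loop index
cons = tuple({'type': 'ineq', 'fun': lambda x, i=i: x[i] - x[i - 1]}
             for i in range(1, n))

b = np.linspace(0.0, 1.0, n)  # monotonic initial guess
res = minimize(objective, b, constraints=cons, method='SLSQP')
print(res.success, np.all(np.diff(res.x) >= -1e-9))

With the decreasing target and increasing constraints the constrained optimum ends up nearly constant, but np.diff(res.x) stays non-negative, which is the property you are after.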
Infinity77

I'm having a very similar problem, but with additional upper and lower bounds on top of the monotonicity property. I'm tackling it like this (maybe it helps you):

I use the trust-region constrained algorithm (method='trust-constr') provided by SciPy, which gives a way of dealing with linear constraints in matrix form:

  lb <= A.dot(x) <= ub

where lb and ub are the lower and upper bound vectors of the constraint problem and A is the matrix representing the linear constraints.

  1. Every row of matrix A is a linear term which defines one constraint.

  2. If, for example, x[0] <= x[1], this can be transformed into x[0] - x[1] <= 0, which as a row of A looks like [1, -1, 0, ...], provided the upper bound vector has a 0 in the corresponding position (the reverse formulation is also possible; either way, fixing at least one of the two bounds, lower or upper, makes this easy).

  3. Setting up enough of these inequalities, and merging them into a single linear constraint, may give a matrix sufficient to solve the problem (see the sketch below).
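
For concreteness, a minimal sketch of points 1-3 could look like the following. The objective and target are again toy stand-ins (your real objective is assumed to be defined elsewhere); A has one row per adjacent pair, with 1 and -1 in the relevant columns and an upper bound of 0.

import numpy as np
from scipy.optimize import minimize, LinearConstraint

n = 20
target = np.sin(np.linspace(0.0, 3.0, n))  # made-up non-monotonic target

def objective(x):
    # toy stand-in objective: pull x towards the target
    return np.sum((x - target) ** 2)

# Row i encodes x[i] - x[i+1] <= 0, i.e. x[i] <= x[i+1]:
# [0, ..., 0, 1, -1, 0, ..., 0] with the 1 in column i and the -1 in column i+1.
A = np.zeros((n - 1, n))
idx = np.arange(n - 1)
A[idx, idx] = 1.0
A[idx, idx + 1] = -1.0

monotone = LinearConstraint(A, lb=-np.inf, ub=0.0)

x0 = np.linspace(0.0, 1.0, n)  # feasible (monotonic) starting point
res = minimize(objective, x0, method='trust-constr', constraints=[monotone])
print(np.all(np.diff(res.x) >= -1e-8))

All n-1 pairwise inequalities live in a single LinearConstraint, which is the "merging into a single constraint" mentioned in point 3.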

Hope this helps a bit; it did the job for my problem.