
Here is my attempt to implement SPSA optimization for the polynomial x^4 - x^2. I recognize my code only works for 1 dimension, but it seems to not be working at all. I also recognize that SPSA is typically used when you don't have the function you want to minimize; it comes from measurements that contain noise, like for example robot motion. Could it perhaps be that I am not using np.random.binomial in the correct way? I used the pseudocode from this website https://www.jhuapl.edu/SPSA/PDF-SPSA/Matlab-SPSA_Alg.pdf to try and implement it. Sorry for the blocky code, I am not used to Stack Overflow. Please feel free to make other recommendations on how I can improve it. Thanks for your time.

import numpy as np

def SPSA(alpha, gamma, lowa, A, c, iterations, theta):
    dimension = len(theta)
    ppar = [1, 0, -1, 0, 0]
    p = np.poly1d(ppar)
    # declare vector function quantities
    gradient = np.zeros(dimension)
    delta = np.zeros(dimension)
    delta = np.random.binomial(3, .4, dimension)
    if delta == 0:
        print('error delta')
    else:
        print('this is our delta')
        print(delta)
        # simple for loop implementation as variables iterate
        i = 0
        while i <= iterations:
            ak = lowa / np.power(i + 1 + A, alpha)
            ck = c / np.power(i + 1, gamma)
            thetaplus = theta + ck * delta
            thetaminus = theta - ck * delta
            yplus = p(thetaplus)
            yminus = p(thetaminus)
            gradient = yplus - yminus / (2 * ck * delta)
            theta = theta - ak * gradient
            print('gradient, theta and F(theta)')
            return gradient, theta, p(theta)
            i += 1
            if gradient == 0:
                print('gradient is zero')
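To illustrate my suspicion about np.random.binomial, here is what that call can actually produce (I believe SPSA wants ±1 perturbations, but I am not sure; the seed is only there so the draw is repeatable):

```python
import numpy as np

np.random.seed(0)  # seed only so the draw is repeatable
draws = np.random.binomial(3, .4, 10)
print(draws)  # every value is in {0, 1, 2, 3}; it can be 0 but never -1
```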

1 Answer


I recognize my code only works for 1 dimension, but it seems to not be working at all.

The reason for this must be the unconditional return gradient, theta, p(theta), which always exits the function in the first iteration. Perhaps you rather meant to write print(…) here and to place the return … at the end of the function.
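For what it's worth, here is one way the corrected loop could look. This is only a sketch following the standard Spall formulation from your linked pseudocode: a ±1 Bernoulli (Rademacher) perturbation instead of np.random.binomial(3, .4), a fresh delta on every iteration, parentheses around yplus - yminus in the gradient estimate, and the return moved after the loop. The gain exponents 0.602 and 0.101 in the usage below are Spall's commonly recommended values; the other numbers are just illustrative.

```python
import numpy as np

def spsa(alpha, gamma, a, A, c, iterations, theta):
    # Sketch of SPSA per Spall's pseudocode; minimizes x^4 - x^2 (summed
    # elementwise so the measurement y is a scalar, as SPSA expects).
    theta = np.asarray(theta, dtype=float)
    dimension = len(theta)
    p = np.poly1d([1, 0, -1, 0, 0])          # x^4 - x^2

    for k in range(iterations):
        ak = a / (k + 1 + A) ** alpha        # gain sequences
        ck = c / (k + 1) ** gamma
        # Rademacher +/-1 perturbation -- a new draw every iteration,
        # never 0, so the division below is always safe
        delta = 2 * np.random.binomial(1, 0.5, dimension) - 1
        yplus = p(theta + ck * delta).sum()  # scalar loss measurements
        yminus = p(theta - ck * delta).sum()
        # note the parentheses around the numerator
        gradient = (yplus - yminus) / (2.0 * ck * delta)
        theta = theta - ak * gradient
    return theta, p(theta).sum()             # return only after the loop
```

With gains like spsa(0.602, 0.101, 0.1, 10, 0.1, 1000, [1.0]) the iterate settles near ±1/√2 ≈ 0.707, where x^4 - x^2 attains its minimum of -0.25.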

Armali