I'm trying to use scipy.optimize.minimize to do simulated method of moments estimation.

Here is my code:

import numpy as np
from scipy.optimize import minimize
import matplotlib.pyplot as plt

n = 10000

c = np.tile(np.random.normal(size=(n,1)),(1,2))   # shock common to both periods
k = np.random.normal(size=(n,2))                  # idiosyncratic shock per period

def generate_moments(params,c,k):
    mu = params[0]
    sigma = params[1]
    util = mu + (sigma*c) + k
    choice = util > 0   # indicator: choose whenever latent utility is positive

    # (p(choice | prior not-choice), p(choice | prior choice))
    return (np.mean(choice[choice[:,0]==0,1]),np.mean(choice[choice[:,0]==1,1]))

#moments: p(choice|prior not-choice) = .07; p(choice|prior choice) = .35
def SMM_obj(params,W,c,k):
    sim_moments = generate_moments(params,c,k)
    diff = sim_moments - np.array([0.07,0.35])
    return np.matmul(np.matmul(diff,W),diff)   # quadratic form diff' W diff

W = np.identity(2)
estimates = minimize(SMM_obj,[0,1],args=(W,c,k),options={'disp':True})

But the output I get is:

Optimization terminated successfully.
         Current function value: 0.167197
         Iterations: 0
         Function evaluations: 4
         Gradient evaluations: 1

estimates
Out[108]: 
      fun: 0.16719656582010595
 hess_inv: array([[1, 0],
       [0, 1]])
      jac: array([0., 0.])
  message: 'Optimization terminated successfully.'
     nfev: 4
      nit: 0
     njev: 1
   status: 0
  success: True
        x: array([0., 1.])

Here minimize has stopped immediately and spit out my x0.

One thing I was worried about was that my objective function isn't smooth (as mentioned in prior threads like this one: Scipy Optimize is only returning x0, only completing one iteration).

However, I tested that in the following way:

# Profile the objective in mu, holding sigma = 1
mu_values = np.linspace(-3,3,500)
y_values = np.zeros_like(mu_values)
for i in range(len(mu_values)):
    mu = mu_values[i]
    params = [mu,1]
    y = SMM_obj(params,W,c,k)
    y_values[i] = y
plt.scatter(mu_values,y_values)

# Profile the objective in sigma, holding mu = 0
sigma_values = np.linspace(0,5,500)
y_values = np.zeros_like(sigma_values)
for i in range(len(sigma_values)):
    sigma = sigma_values[i]
    params = [0,sigma]
    y = SMM_obj(params,W,c,k)
    y_values[i] = y
plt.scatter(sigma_values,y_values)

Producing the following (pretty smooth-looking) figures, showing that (0,1) is clearly not the optimum:

[Figure: scatter of the objective against mu]

[Figure: scatter of the objective against sigma]

Any thoughts on how to diagnose the problem here?

1 Answer

I think the issue is that the Jacobian at the starting point is already 0. Because choice = util > 0 makes the simulated moments piecewise constant in the parameters, the default finite-difference step (sqrt of machine epsilon, about 1.49e-8) is too small to flip any simulated choice, so every forward difference evaluates to exactly 0 and BFGS declares convergence at x0. You can change eps to something a bit bigger (e.g. 1e-2):

for eps in [1e-4, 1e-3, 1e-2, 1e-1]:
    print(minimize(SMM_obj, [0, 1], args=(W, c, k), method="BFGS",
                   options={"eps": eps, "disp": True}))

This yields the following output:

Optimization terminated successfully.
         Current function value: 0.168418
         Iterations: 0
         Function evaluations: 4
         Gradient evaluations: 1
      fun: 0.16841790902419518
 hess_inv: array([[1, 0],
       [0, 1]])
      jac: array([0., 0.])
  message: 'Optimization terminated successfully.'
     nfev: 4
      nit: 0
     njev: 1
   status: 0
  success: True
        x: array([0., 1.])
Warning: Desired error not necessarily achieved due to precision loss.
         Current function value: 0.099087
         Iterations: 2
         Function evaluations: 306
         Gradient evaluations: 74
      fun: 0.09908670347481911
 hess_inv: array([[0.10147936, 0.29731345],
       [0.29731345, 0.92692552]])
      jac: array([ 0.03098342, -0.02244792])
  message: 'Desired error not necessarily achieved due to precision loss.'
     nfev: 306
      nit: 2
     njev: 74
   status: 2
  success: False
        x: array([-0.47451578,  1.1533294 ])
Warning: Desired error not necessarily achieved due to precision loss.
         Current function value: 0.000186
         Iterations: 4
         Function evaluations: 312
         Gradient evaluations: 75
      fun: 0.00018570814853951565
 hess_inv: array([[14.71382814, -5.44336315],
       [-5.44336315,  3.51059358]])
      jac: array([-0.00271557, -0.00365208])
  message: 'Desired error not necessarily achieved due to precision loss.'
     nfev: 312
      nit: 4
     njev: 75
   status: 2
  success: False
        x: array([-1.83809799,  1.02165842])
Warning: Desired error not necessarily achieved due to precision loss.
         Current function value: 0.006443
         Iterations: 1
         Function evaluations: 335
         Gradient evaluations: 81
      fun: 0.006442705723897641
 hess_inv: array([[5.08471669, 0.91869031],
       [0.91869031, 1.20508609]])
      jac: array([ 0.01807087, -0.01552747])
  message: 'Desired error not necessarily achieved due to precision loss.'
     nfev: 335
      nit: 1
     njev: 81
   status: 2
  success: False
        x: array([-1.25537353,  0.6933128 ])
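
To confirm that the zero Jacobian is the culprit, here is a minimal check (a sketch, reusing the SMM_obj, W, c, and k from the question): with the default step, perturbing either parameter by ~1.49e-8 almost surely flips none of the 10,000 simulated choices, so the objective does not change at all.

    # Reproduce the forward differences BFGS takes at x0 with its default step
    # of sqrt(machine epsilon); both differences print as 0.0, i.e. the
    # numerical gradient is exactly zero and BFGS stops at x0.
    eps = np.sqrt(np.finfo(float).eps)   # ~1.49e-8, scipy's default
    x0 = np.array([0.0, 1.0])
    f0 = SMM_obj(x0, W, c, k)
    for i in range(2):
        x = x0.copy()
        x[i] += eps
        print(SMM_obj(x, W, c, k) - f0)  # 0.0: the objective is locally flat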

  • Thanks for the tip, Jose. This did not really end up solving the problem; everything remained quite sensitive to initial conditions. I instead switched to Nelder-Mead and increased n by a factor of 100. Even with the n change, BFGS produced estimates with a much higher objective value. – ZCB Jun 16 '20 at 01:34
  • It is problematic that the solution ends up flapping around just by changing the epsilon. – Jose Jun 17 '20 at 10:58
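
For completeness, a minimal sketch of the Nelder-Mead switch ZCB describes above (again reusing the SMM_obj, W, c, and k from the question; the tolerance values are illustrative assumptions). Nelder-Mead uses only function values, never a gradient, so the locally flat objective cannot stall it at x0:

    # Derivative-free simplex search: no finite differences, so the
    # piecewise-constant objective cannot produce a spurious zero gradient.
    estimates = minimize(SMM_obj, [0, 1], args=(W, c, k),
                         method="Nelder-Mead",
                         options={"disp": True, "xatol": 1e-4, "fatol": 1e-6})
    print(estimates.x)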