
I have Python code that builds several multi-fidelity models (one for each of several variables) and uses Emukit's experimental design functions to update them iteratively. I am using simple uncertainty acquisition (ModelVariance) and the multi-fidelity-wrapped gradient optimizer, as shown in the examples here and here. I started by applying this technique to only one of my variables, and noticed that 1) all update points (x_new) seemed to be selected from the LF model, and 2) the variance dropped precipitously everywhere after adding only a single update point. I shrugged this off initially and applied the technique to all my variables, using a loop over a dictionary to handle each variable in turn. When I did that, I discovered that the mean predictions (new model points) seemed perfectly reasonable, but the variances reported by .predict() for ALL the models of ALL the variables were exactly the same, and were in fact identical to what the program had given me when doing just the single variable. Something seems to be going wrong in computing and updating the variances after adding a new training point and calling .set_data to update the model, and I am not sure what or where the problem is. Is there an Emukit bug? Am I using an incorrect setting? Is the problem with my dictionaries or for-loops? I am at a loss. Can anyone offer some insight?
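To make the symptom concrete, here is a minimal sketch (using the dictionaries defined in the code below) of the kind of check that shows the identical variances:

    # compare the predicted variances of two supposedly independent models
    _, var_a = MultiFidelity['VAR1']['model'].predict(Y_plot['hf'])
    _, var_b = MultiFidelity['VAR2']['model'].predict(Y_plot['hf'])
    print(np.allclose(var_a, var_b))   # prints True here, which should not happen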

Here is the code I currently have, somewhat redacted. I am sorry it's such a long read.


    # SKIPPING GENERAL IMPORTS

    def make_mf(x,y,kernel,fidels):
    # Generic multi-fidelity model builder.
    # Returns a multi-fidelity model built from the training points (x and y),
    #   the kernel, and the number of fidelities

        mf_lin_model=GPyLinearMultiFidelityModel(x, y,kernel, n_fidelities=fidels)

    # fix the noise to 0 for all fidelities, indicating that the training points are exact
        for i in range(fidels):
            noise_name = "Gaussian_noise" if i == 0 else "Gaussian_noise_" + str(i)
            getattr(mf_lin_model.mixed_noise, noise_name).fix(0)

    ## Wrap the model using the given 'GPyMultiOutputWrapper'

        mf_model = GPyMultiOutputWrapper(mf_lin_model, fidels, n_optimization_restarts=5, verbose_optimization=False)
        
    # Fit the model

        mf_model.optimize()
    # Return the final model to the calling procedure
        return mf_model


    np.random.seed(20)

    # list of y (result variables)
    yvars=["VAR1","VAR2","VAR3"]
    #list of x (input) variables
    xvars=["XVAR"]
    # list of fidelity levels.  levels should be in order of ascending fidelity (0=lowest)
    levels=["lf","hf"]
    # list of what we'll need to store for each variable and level
    #   these are the model itself, the predicted values for plotting,
    #   and the predicted values at the training points
    contents=['surrogate','y_plot','y_train']
    # lists of multi-fidelity variables
    #   these are the training coordinates, the model, predicted values for plotting,
    #   predicted variances, the maximum and mean variance, and predicted
    #   values at the training points
    multifivars=['y_plot','variance','varmax','varmean','pl_train']
    mainvars=['model','x_train','y_train']
    # set up a dictionary to store the models and related results for each y-variable
    #   and each fidelity
    MyModels={key:{lkey:{ckey:None for ckey in contents} for lkey in levels} for key in yvars}
    # Set up a dictionary for the multi-fidelity models
    MultiFidelity={key:{vkey: None for vkey in mainvars}for key in yvars}
    for key in MultiFidelity.keys():
        for level in levels:
            MultiFidelity[key][level]={mkey:None for mkey in multifivars}
    #set up a dictionary to easily access data 
    MyData={key:None for key in levels}
    # set up dictionaries to easily access training and plotting points
    x_train={key:None for key in levels}
    Y_plot={key:None for key in levels}
    T_plot={key:None for key in levels}
   
    # Number of initial points evaluated at each fidelity level 
    npoints=[5,2]
    MyPoints={levels[i]:npoints[i] for i in range(len(levels))}

    ## SKIPPED THE SECTION WHERE I READ IN THE RAW DATA

    # High sampling of models for plotting of functions
    x_plot = np.linspace(2, 16, 200)[:, None]


    # set up points for plotting and retrieving MF model
    X_plot = convert_x_list_to_array([x_plot, x_plot])
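    # (convert_x_list_to_array stacks the input arrays and appends a fidelity-index
    #   column, so rows 0..199 are the LF copy of x_plot and rows 200..399 the HF copy)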
    for i in range(len(levels)):
        Y_plot[levels[i]] = X_plot[i*len(x_plot):(i+1)*len(x_plot)]
    Y_plot_h=X_plot[len(x_plot):]

    # Sampling for training for multi-fidelity analysis

    x_train[levels[0]] = np.atleast_2d(np.random.rand(MyPoints[levels[0]])*14+2).T

    for i in range (1,len(levels)):
        x_train[levels[i]] = np.atleast_2d(np.random.permutation(x_train[levels[i-1]])[:MyPoints[levels[i]]])
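        # (this makes the HF training points a random subset of the LF training
        #   points, i.e. the design is nested)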
    #x_train_h = np.atleast_2d([3, 9.5, 11, 15]).T


    # set up points for plotting mf result at training points
    X_train=convert_x_list_to_array([x_train[levels[0]],x_train[levels[0]]])
    for i in range(len(levels)):
        T_plot[levels[i]] = X_train[i*len(x_train[levels[0]]):(i+1)*len(x_train[levels[0]])]
    #print(X_train)

    # combine the training points of all fidelity levels into a list of arrays
    xtemp=[]
    for level in levels:
        xtemp.append(x_train[level])



    kernels = [GPy.kern.RBF(1), GPy.kern.RBF(1)]
    lin_mf_kernel = emukit.multi_fidelity.kernels.LinearMultiFidelityKernel(kernels)
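    # NOTE: these kernel objects are created once and the same instances are passed
    #   to every variable's model in the loop below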


    for var in MyModels.keys():
        ytemp=[]
        for level in levels:        
    # use SciPy interpolate to build surrogate for given variable and fidelity level
            MyModels[var][level]['surrogate']=interpolate.interp1d(MyData[level]['Coll'],MyData[level][var])
    # find y-values for training MF points and append to a list of arrays
            MyModels[var][level]['y_train']=MyModels[var][level]['surrogate'](x_train[level])
            ytemp.append(MyModels[var][level]['y_train'])
            MyModels[var][level]['y_plot']=MyModels[var][level]['surrogate'](x_plot)        
    ## Convert lists of arrays to ndarrays augmented with fidelity indicators
        MultiFidelity[var]['x_train'],MultiFidelity[var]['y_train']=convert_xy_lists_to_arrays(xtemp,ytemp)
    # Build the multi-fidelity model
    ## Construct a linear multi-fidelity model
        MultiFidelity[var]['model']= make_mf(MultiFidelity[var]['x_train'], MultiFidelity[var]['y_train'], lin_mf_kernel,len(levels))
    # Get multifidelity model values and variances at plotting points
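    #   (.predict returns a (mean, variance) pair, each of shape (n_points, 1))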
        for level in levels:
            MultiFidelity[var][level]['y_plot'],MultiFidelity[var][level]['variance']=MultiFidelity[var]['model'].predict(Y_plot[level])
    # find maximum and average variance to measure the accuracy of the MF model
            MultiFidelity[var][level]['varmax']=np.amax(MultiFidelity[var][level]['variance'])
            MultiFidelity[var][level]['varmean']=np.mean(MultiFidelity[var][level]['variance'])
            MultiFidelity[var][level]['pl_train'], _ = MultiFidelity[var]['model'].predict(T_plot[level])

    for key in MyModels.keys():
        for level in levels:
            print(key,level,MultiFidelity[key][level]['varmax'],MultiFidelity[key][level]['varmean'])


    # set up the parameter space.  we are scanning in x between 2 and 16 to match the range of my input
    parameter_space = ParameterSpace([ContinuousParameter('x', 2, 16), InformationSourceParameter(len(levels))])

    # set up how we will look for the target of our search
    optimizer = MultiSourceAcquisitionOptimizer(GradientAcquisitionOptimizer(parameter_space), parameter_space)

    # Plot each variable vs X for BEFORE any new points are added
    for var in yvars:
        plot_vars(var,0)

    # Note: right now I am basing the acquisition function on the first variable ONLY.  I intend to
    #   build a more complex function later when I get these bugs worked out.
    acquisition=ModelVariance(MultiFidelity[yvars[0]]['model'])
      
    # perform optimization to find the target point
    x_new, val = optimizer.optimize(acquisition)
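    # x_new lives in the full parameter space, so x_new[0][0] is the x-value and the
    #   last column is the information-source (fidelity) index chosen by the optimizer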
    # x_new=np.atleast_2d(0)
    # x_new[0][0]=np.random.rand()*14+2
    print('first update point is',x_new)

    # I want to manually specify that I add one HF training point and 4 LF training points,
        # hence the way the following code is built.   This could be a source of problems?

    # construct our own version of the new data point because we will want it from the HF surrogate model 
    #   (hence the value 1 in the final column)
    new_point_x_hi = [[x_new[0][0],1.]]
    # also, since this is an HF point, we include it as a training point in the LF model
    new_point_x_lo = [[x_new[0][0],0.]]
    # we also append the new x-value to the training point x-arrays
    x_train[levels[0]]=np.append(x_train[levels[0]],[[x_new[0][0]]],axis=0)
    x_train[levels[1]]=np.append(x_train[levels[1]],[[x_new[0][0]]],axis=0)

    # next, prepare points to allow the plotting of the training points on each model
    X_train=convert_x_list_to_array([x_train[levels[0]],x_train[levels[0]]])
    for i in range(len(levels)):
        T_plot[levels[i]] = X_train[i*len(x_train[levels[0]]):(i+1)*len(x_train[levels[0]])]

    for var in yvars:
    # Now, for every variable in our list we add training points and update the models
        # find the corresponding y-values from the respective surrogates
        new_point_y_hi = np.atleast_2d(MyModels[var]['hf']['surrogate'](x_new[0][0]))
        new_point_y_lo = np.atleast_2d(MyModels[var]['lf']['surrogate'](x_new[0][0]))
        # Note that, as usual, we make these into 2D arrays to match Emukit's formatting
        
 
        # now append the new point to our model's training data arrays
        MultiFidelity[var]['x_train']=np.append(MultiFidelity[var]['x_train'],new_point_x_hi,axis=0)
        MultiFidelity[var]['y_train']=np.append(MultiFidelity[var]['y_train'],new_point_y_hi,axis=0)
        MultiFidelity[var]['x_train']=np.append(MultiFidelity[var]['x_train'],new_point_x_lo,axis=0)
        MultiFidelity[var]['y_train']=np.append(MultiFidelity[var]['y_train'],new_point_y_lo,axis=0)
        
        
        # now we use .set_data to update the model based on the extended training data
    #    MultiFidelity[var]['model']= make_mf(MultiFidelity[var]['x_train'], MultiFidelity[var]['y_train'], lin_mf_kernel,len(levels))
        MultiFidelity[var]['model'].set_data(MultiFidelity[var]['x_train'],MultiFidelity[var]['y_train'])
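        # (as I understand it, set_data only swaps in the new training data; the
        #   hyperparameters are not re-optimized unless .optimize() is called again)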
        # and finally, re-calculate the values and variances at our plotting points to create an updated plot
        # MultiFidelity[var]['lf']['y_plot'],MultiFidelity[var]['lf']['variance']=MultiFidelity[var]['model'].predict(Y_plot['lf'])
        # MultiFidelity[var]['hf']['y_plot'],MultiFidelity[var]['hf']['variance']=MultiFidelity[var]['model'].predict(Y_plot['hf'])
        # MultiFidelity[var]['hf']['pl_train'], _ = MultiFidelity[var]['model'].predict(T_plot['hf'])
        # not forgetting to update the maximum and average variances
        for level in levels:
    #  get new plotting points 
            MultiFidelity[var][level]['y_plot'],MultiFidelity[var][level]['variance']=MultiFidelity[var]['model'].predict(Y_plot[level])
            MultiFidelity[var][level]['pl_train'], _ = MultiFidelity[var]['model'].predict(T_plot[level])
    # find maximum and average variance to measure the accuracy of the MF model
            MultiFidelity[var][level]['varmax']=np.amax(MultiFidelity[var][level]['variance'])
            MultiFidelity[var][level]['varmean']=np.mean(MultiFidelity[var][level]['variance'])
    # report maximum and average variance
            print(var,level,'max = ',MultiFidelity[var][level]['varmax'],'mean = ', MultiFidelity[var][level]['varmean'])
        # Plot each variable vs Coll for rcas, helios, and the low- and high-fidelity models after the HF point is added
        plot_vars(var,1)


    # NOW DID THE SAME THING FOR A SEQUENCE OF 4 LF POINTS            


I have tried using different acquisition functions and got the same behavior. I have also tried rebuilding the model from scratch using model.optimize() and only got stranger behavior.
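For reference, the rebuild-from-scratch variant I tried is the commented-out line in the loop above, i.e. replacing the .set_data call with:

    # rebuild the model from the extended training data; make_mf re-runs optimize()
    MultiFidelity[var]['model'] = make_mf(MultiFidelity[var]['x_train'],
                                          MultiFidelity[var]['y_train'],
                                          lin_mf_kernel, len(levels))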
