
I am using Autograd to compute the gradient of a float-valued function. The function takes an array of arrays as its argument, returns a float, and is quite complicated. A minimal example that reproduces the error is the function in the following code:

import autograd.numpy as np 
from autograd import grad


def mod(param):
    '''
    param: an array of the form e.g. [0.1, [0.1, 0.2]], where the first
    element is a float and the second is an array of floats.
    '''
    return param[0] + np.sum(np.array(param[1]))

I read the Autograd documentation and it seems I am doing things correctly, since I am explicitly casting `param[1]` to an array. When running the following:

dmod = grad(mod)

x = np.array([0.1,np.array([0.1,0.1])])

dmod(x)

I get the error message:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-289-5fc18f1d6a09> in <module>
      3 x = np.array([0.1,np.array([0.1,0.1])])
      4 
----> 5 dmod(x)

~\Anaconda3\lib\site-packages\autograd\wrap_util.py in nary_f(*args, **kwargs)
     18             else:
     19                 x = tuple(args[i] for i in argnum)
---> 20             return unary_operator(unary_f, x, *nary_op_args, **nary_op_kwargs)
     21         return nary_f
     22     return nary_operator

~\Anaconda3\lib\site-packages\autograd\differential_operators.py in grad(fun, x)
     27         raise TypeError("Grad only applies to real scalar-output functions. "
     28                         "Try jacobian, elementwise_grad or holomorphic_grad.")
---> 29     return vjp(vspace(ans).ones())
     30 
     31 @unary_to_nary

~\Anaconda3\lib\site-packages\autograd\core.py in vjp(g)
     12         def vjp(g): return vspace(x).zeros()
     13     else:
---> 14         def vjp(g): return backward_pass(g, end_node)
     15     return vjp, end_value
     16 

~\Anaconda3\lib\site-packages\autograd\core.py in backward_pass(g, end_node)
     21         ingrads = node.vjp(outgrad[0])
     22         for parent, ingrad in zip(node.parents, ingrads):
---> 23             outgrads[parent] = add_outgrads(outgrads.get(parent), ingrad)
     24     return outgrad[0]
     25 

~\Anaconda3\lib\site-packages\autograd\core.py in add_outgrads(prev_g_flagged, g)
    174     else:
    175         if sparse:
--> 176             return sparse_add(vspace(g), None, g), True
    177         else:
    178             return g, False

~\Anaconda3\lib\site-packages\autograd\tracer.py in f_wrapped(*args, **kwargs)
     46             return new_box(ans, trace, node)
     47         else:
---> 48             return f_raw(*args, **kwargs)
     49     f_wrapped.fun = f_raw
     50     f_wrapped._is_autograd_primitive = True

~\Anaconda3\lib\site-packages\autograd\core.py in sparse_add(vs, x_prev, x_new)
    184 def sparse_add(vs, x_prev, x_new):
    185     x_prev = x_prev if x_prev is not None else vs.zeros()
--> 186     return x_new.mut_add(x_prev)
    187 
    188 class VSpace(object):

~\Anaconda3\lib\site-packages\autograd\numpy\numpy_vjps.py in mut_add(A)
    696         idx = onp.array(idx, dtype='int64')
    697     def mut_add(A):
--> 698         onp.add.at(A, idx, x)
    699         return A
    700     return SparseObject(vs, mut_add)

ValueError: array is not broadcastable to correct shape




I am using an IPython notebook and my version of Autograd is 1.3.

Any help is much appreciated!

1 Answer

I think the problem is that the input which produced the error is a numpy array. Because its elements have different shapes, `np.array([0.1, np.array([0.1, 0.1])])` is a ragged array with `dtype=object`, which Autograd cannot differentiate through. If the plain nested list `x = [0.1, [0.1, 0.1]]` is passed to `dmod` instead, the output looks correct.
