I'm using sum(self.eval_h(self.x) * self.w.reshape(1, self.n)) / sum(self.w) to compute a weighted average (a dot product with the weights, normalized by their sum),

where eval_h is defined as

from numpy import apply_along_axis

def eval_h(self, x):
    """
    Evaluate h at a single point (1-D x) or row by row (2-D x).
    """
    if "h_params" in self.kwargs:
        if x.ndim == 1:
            return self.h(x, **self.kwargs["h_params"])
        else:
            # apply h to each row of the (n, d) array
            return apply_along_axis(lambda x: self.h(x, **self.kwargs["h_params"]), 1, x)
    else:
        if x.ndim == 1:
            return self.h(x)
        else:
            return apply_along_axis(lambda x: self.h(x), 1, x)

and x is populated (with shape (n, d)) by

from numpy import zeros

def pop_x(self):
    """
    populate x with zeros, shape (n, d)
    """
    self.x = zeros((self.n, self.d))
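
For concreteness, here is a small standalone sketch of the shapes involved (n, d, the weights, and h are all made up here; in particular I'm assuming h maps a length-d point to a scalar, which the code above doesn't actually pin down):

import numpy as np

n, d = 4, 3
w = np.ones(n)                        # weights, shape (n,)
x2 = np.zeros((n, d))                 # the 2-D case, as produced by pop_x
x1 = np.zeros(d)                      # a single point, the 1-D case

h = lambda p: p.sum()                 # stand-in for self.h: length-d vector -> scalar

h2 = np.apply_along_axis(h, 1, x2)    # the 2-D branch of eval_h: shape (n,)
h1 = h(x1)                            # the 1-D branch: a plain scalar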

The result turns out to be different when x has one dimension than when it has more than one.

When x has more than one dimension, self.w.reshape(1, self.n) gives the correct result; when x has only one dimension, self.w.reshape(self.n, 1) does. To be honest, I don't think I really understand broadcasting, so every time I use it, it only works by luck.
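
For reference, broadcasting aligns shapes from the right and pads the shorter shape with 1s on the left, so the two reshapes behave very differently against a 1-D array of length n. A quick demo (the values are made up):

import numpy as np

n = 4
h_vals = np.arange(n, dtype=float)        # stand-in for eval_h(self.x), shape (n,)
w = np.ones(n)

# (n,) is padded to (1, n), so (1, n) * (1, n) -> (1, n): elementwise, as intended
print((h_vals * w.reshape(1, n)).shape)   # (1, 4)

# (1, n) against (n, 1) broadcasts both to (n, n): an outer product h_i * w_j
print((h_vals * w.reshape(n, 1)).shape)   # (4, 4)

Depending on what h actually returns in the 1-D branch, a different reshape may be the one whose shape lines up, which would explain why the two cases need different reshapes to come out right.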

  • This is a very good explanation of broadcasting: http://scipy.github.io/old-wiki/pages/EricsBroadcastingDoc – Joe Jul 26 '18 at 12:05
  • It would be easier to force `x` to have consistent dimensions at the start than trying to accommodate 1-D vs 2-D at several points. – hpaulj Jul 26 '18 at 14:28
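
A minimal sketch of what hpaulj's comment suggests, using np.atleast_2d to normalize x to shape (m, d) up front so only the 2-D branch of eval_h is ever needed (as_points is a hypothetical helper name, not from the code above):

import numpy as np

def as_points(x, d):
    """
    Force x to shape (m, d): a single length-d point becomes one row.
    """
    x = np.atleast_2d(np.asarray(x, dtype=float))
    assert x.shape[1] == d
    return x

print(as_points(np.zeros(3), 3).shape)        # (1, 3)
print(as_points(np.zeros((4, 3)), 3).shape)   # (4, 3)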
