I'm using

sum(self.eval_h(self.x) * self.w.reshape(1, self.n)) / sum(self.w)

to compute a weighted average (essentially a dot product of h values and weights, normalized by sum(self.w)), where eval_h is defined as
def eval_h(self, x):
    """
    Evaluate h at x; apply_along_axis is numpy.apply_along_axis.
    """
    if "h_params" in self.kwargs:
        if x.ndim == 1:
            # 1-D input: call h on it directly
            return self.h(x, **self.kwargs["h_params"])
        else:
            # 2-D input: apply h to each row
            return apply_along_axis(lambda x: self.h(x, **self.kwargs["h_params"]), 1, x)
    else:
        if x.ndim == 1:
            return self.h(x)
        else:
            return apply_along_axis(lambda x: self.h(x), 1, x)
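To make the shapes concrete, here is a minimal standalone sketch of the two branches (h below is a hypothetical stand-in for my real self.h, which I'm assuming maps one point to a scalar):

import numpy as np

def h(v):
    # hypothetical stand-in for self.h: maps one d-vector to a scalar
    return v.sum()

x2 = np.arange(6.0).reshape(3, 2)           # 2-D x: n=3 points, d=2
print(np.apply_along_axis(h, 1, x2).shape)  # (3,) -- one value per row

x1 = np.arange(2.0)                         # 1-D x: a single vector
print(np.shape(h(x1)))                      # () -- h is called once, directly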
and x gets its shape from
def pop_x(self):
    """
    populate x with zeros
    """
    self.x = zeros((self.n, self.d))
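Note that zeros here is numpy.zeros, so pop_x always produces a 2-D array even when d is 1; presumably the 1-D case below comes from x being reassigned somewhere else. A quick check:

import numpy as np

x = np.zeros((3, 1))  # what pop_x builds with n=3, d=1
print(x.ndim)         # 2 -- still two-dimensional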
The result turns out to be different when x has one dimension than when it has more than one dimension.
When x has more than one dimension, self.w.reshape(1, self.n) gives the correct result; when x has only one dimension, self.w.reshape(self.n, 1) does. Honestly, I don't think I really understand broadcasting, so every time I use it I'm only getting it right by luck.
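Here is a minimal reproduction of the difference I mean, assuming eval_h returns a 1-D array of shape (n,) (which is what apply_along_axis gives when h returns a scalar):

import numpy as np

n = 3
hvals = np.array([1.0, 2.0, 3.0])  # stand-in for eval_h(self.x), shape (n,)
w = np.array([0.2, 0.3, 0.5])      # weights, shape (n,)

# (n,) against (1, n): the 1-D array is padded to (1, n), so this is an
# elementwise product of shape (1, n) and sum() gives the weighted sum.
a = hvals * w.reshape(1, n)
print(a.shape, a.sum())  # (1, 3) 2.3

# (n,) against (n, 1): shapes (1, n) and (n, 1) broadcast to (n, n) -- an
# outer product -- so sum() gives sum(hvals) * sum(w) = 6.0 instead.
b = hvals * w.reshape(n, 1)
print(b.shape, b.sum())  # (3, 3) 6.0

As far as I can tell, a 1-D array is always treated as a row (1, n) when broadcast against a 2-D array, but I'd like to understand the actual rule rather than keep guessing.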