I am using TensorFlow's Adam optimizer to minimize a stochastic objective that has (almost) nothing to do with neural networks; it comes from probabilistic inference.
Adam works pretty well at finding good optima of the cost function, but my variables are bounded and Adam has no built-in way to enforce bounds, since it is an unconstrained optimization method. In my case I need all the variables to stay positive.
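For illustration, here is a minimal sketch of the kind of setup I mean (the toy objective and all names are placeholders, not my actual model):

```python
import tensorflow as tf

# Toy stand-in for a stochastic objective: minimize E[(x * z - 1)^2]
# over a scalar x that, in my real problem, must remain positive.
x = tf.Variable(2.0)
opt = tf.keras.optimizers.Adam(learning_rate=0.05)

for step in range(1000):
    z = tf.random.normal([])            # fresh stochastic sample each step
    with tf.GradientTape() as tape:
        loss = (x * z - 1.0) ** 2
    grads = tape.gradient(loss, [x])
    opt.apply_gradients(zip(grads, [x]))
    # Nothing here prevents x from being pushed negative -- that's the problem.
```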
How can bounds be added to stochastic-gradient-based methods like this in general? Are there existing implementations for what I assume is a fairly common problem?