I'm using the Chainer DCGAN example found at https://github.com/chainer/chainer/blob/master/examples/dcgan/train_dcgan.py . It works fine for 32x32 images, but for other resolutions the README.md says to modify the network architecture in net.py.
As I understand it from reading the documentation, the output image size is determined by parameters passed to the `Generator` constructor, namely `bottom_width` and `ch`. Here is the constructor signature used for 32x32 output:

```python
class Generator(chainer.Chain):
    def __init__(self, n_hidden, bottom_width=4, ch=512, wscale=0.02):
```
I'm confused about how `bottom_width=4` and `ch=512` translate to 32x32 output, and how to modify them for other resolutions. Any help would be greatly appreciated.
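For what it's worth, here is my current guess from tracing net.py (please correct me if I'm misreading it): the generator reshapes the output of a linear layer into a `(ch, bottom_width, bottom_width)` feature map, then applies three stride-2 deconvolutions that each double the spatial size, so the output resolution would be `bottom_width * 2**3`. The helper name `output_resolution` below is just mine for illustration:

```python
def output_resolution(bottom_width, n_upsampling_layers=3):
    """My guess at the output size: start from a bottom_width x bottom_width
    feature map and double it once per stride-2 deconvolution layer.
    (Assumes each Deconvolution2D uses ksize=4, stride=2, pad=1, which is
    what I believe the example's net.py does.)"""
    return bottom_width * (2 ** n_upsampling_layers)

print(output_resolution(4))  # 4 -> 8 -> 16 -> 32, matching the 32x32 default
print(output_resolution(8))  # 64, so bottom_width=8 would mean 64x64 output?
```

If that reading is right, then for, say, 64x64 images I would either set `bottom_width=8` or add a fourth deconvolution layer, but I'm not sure which approach the README intends.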