
I am trying to write a vanilla autoencoder for compressing 13 images. However, I am getting the following error:

ValueError: train argument is not supported anymore. Use chainer.using_config

The shape of each image is (21, 28, 3).

filelist = 'ex1.png', 'ex2.png',...11 other images
x = np.array([np.array(Image.open(fname)) for fname in filelist])
xs = x.astype('float32')/255.
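Since `l1` expects a 1764-dimensional input (21 × 28 × 3 = 1764), each image has to be flattened before it reaches the Linear layers. A sketch of the loading and flattening step, using random arrays in place of the actual PNGs:

```python
import numpy as np

# 13 dummy RGB images of shape (21, 28, 3), stand-ins for the loaded PNGs
x = np.random.randint(0, 256, size=(13, 21, 28, 3)).astype('float32')
xs = x / 255.0                   # scale pixel values to [0, 1]
xs = xs.reshape(len(xs), -1)     # flatten each image: (13, 1764)
assert xs.shape == (13, 21 * 28 * 3)
```

(Chainer's `L.Linear` also flattens trailing axes on its own, but keeping the dataset flat makes the shapes easier to reason about.)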

class Autoencoder(Chain):
  def __init__(self, activation=F.relu):
    super().__init__()
    with self.init_scope():
      # encoder part
      self.l1 = L.Linear(1764,800)
      self.l2 = L.Linear(800,300)
      # decoder part
      self.l3 = L.Linear(300,800)
      self.l4 = L.Linear(800,1764)
      self.activation = activation

  def forward(self,x):
      h = self.encode(x)
      x_recon = self.decode(h)
      return x_recon

  def __call__(self,x):
      x_recon = self.forward(x)
      loss = F.mean_squared_error(h, x)
      return loss

  def encode(self, x, train=True):
      h = F.dropout(self.activation(self.l1(x)), train=train)
      return self.activation(self.l2(x))

  def decode(self, h, train=True):
      h = self.activation(self.l3(h))
      return self.l4(x)

n_epoch = 5
batch_size = 2
model = Autoencoder()

optimizer = optimizers.SGD(lr=0.05).setup(model)
train_iter = iterators.SerialIterator(xs,batch_size)
valid_iter = iterators.SerialIterator(xs,batch_size)

updater = training.StandardUpdater(train_iter,optimizer)
trainer = training.Trainer(updater,(n_epoch,"epoch"),out="result")

from chainer.training import extensions
trainer.extend(extensions.Evaluator(valid_iter, model, device=gpu_id))

trainer.run()

Is the issue because of the number of nodes in the model or otherwise?

TulakHord

1 Answer


You need to write the "decoder" part.

When you take the mean_squared_error loss, h and x must have the same shape. An autoencoder encodes the original x into a small (100-dim) code h, but after that we need to reconstruct x' from this h with the decoder part. The loss can then be calculated on this reconstructed x'.
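To make the shape requirement concrete, here is a small NumPy illustration (my own sketch, not part of the original answer): the loss cannot be taken between x and the code h, only between x and the reconstruction.

```python
import numpy as np

x = np.random.rand(2, 1764).astype(np.float32)        # flattened inputs
h = np.random.rand(2, 100).astype(np.float32)         # 100-dim code: wrong shape for MSE vs x
x_recon = np.random.rand(2, 1764).astype(np.float32)  # decoder output

# (x - h) would not broadcast: (2, 1764) vs (2, 100).
# The reconstruction matches the input shape, so MSE is well-defined:
mse = float(np.mean((x_recon - x) ** 2))
assert x_recon.shape == x.shape
```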

For example, as follows (sorry, I have not tested that it runs):

  • For Chainer v2 and later

The train argument is handled by the global config, so you do not need the train argument in the dropout function.
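For example (a minimal sketch of the config mechanism, not from the class below): in training mode `chainer.config.train` is `True` by default, and for evaluation you switch the global flag instead of passing `train=False`.

```python
import numpy as np
import chainer
import chainer.functions as F

x = np.ones((2, 4), dtype=np.float32)

# chainer.config.train is True by default, so dropout is active
# with no extra arguments:
y_train = F.dropout(x, ratio=0.5)

# For evaluation, switch the global config instead of passing train=False:
with chainer.using_config('train', False):
    y_test = F.dropout(x, ratio=0.5)  # acts as the identity in test mode
```

(The `Evaluator` extension in the question's training loop sets this config for you during validation.)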

from chainer import Chain
import chainer.functions as F
import chainer.links as L

class Autoencoder(Chain):
  def __init__(self, activation=F.relu):
    super().__init__()
    with self.init_scope():
      # encoder part
      self.l1 = L.Linear(1308608,500)
      self.l2 = L.Linear(500,100)
      # decoder part
      self.l3 = L.Linear(100,500)
      self.l4 = L.Linear(500,1308608)
    self.activation = activation

  def forward(self,x):
      h = self.encode(x)
      x_recon = self.decode(h)
      return x_recon

  def __call__(self,x):
      x_recon = self.forward(x)
      # loss between reconstruction and input, which have the same shape
      loss = F.mean_squared_error(x_recon, x)
      return loss

  def encode(self, x):
      h = F.dropout(self.activation(self.l1(x)))
      return self.activation(self.l2(h))

  def decode(self, h):
      h = self.activation(self.l3(h))
      return self.l4(h)
  • For Chainer v1
class Autoencoder(Chain):
  def __init__(self, activation=F.relu):
    super().__init__()
    with self.init_scope():
      # encoder part
      self.l1 = L.Linear(1308608,500)
      self.l2 = L.Linear(500,100)
      # decoder part
      self.l3 = L.Linear(100,500)
      self.l4 = L.Linear(500,1308608)
    self.activation = activation

  def forward(self,x):
      h = self.encode(x)
      x_recon = self.decode(h)
      return x_recon

  def __call__(self,x):
      x_recon = self.forward(x)
      # loss between reconstruction and input, which have the same shape
      loss = F.mean_squared_error(x_recon, x)
      return loss

  def encode(self, x, train=True):
      h = F.dropout(self.activation(self.l1(x)), train=train)
      return self.activation(self.l2(h))

  def decode(self, h, train=True):
      h = self.activation(self.l3(h))
      return self.l4(h)
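As a sanity check on the corrected encode/decode path, the shapes can be traced with plain NumPy standing in for the Chainer layers (my own sketch, using the question's 1764-dimensional inputs and 300-dimensional code rather than the sizes above; biases omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

# Weight matrices matching the question's layer sizes l1..l4
W1 = rng.standard_normal((1764, 800)).astype(np.float32)
W2 = rng.standard_normal((800, 300)).astype(np.float32)
W3 = rng.standard_normal((300, 800)).astype(np.float32)
W4 = rng.standard_normal((800, 1764)).astype(np.float32)

x = rng.standard_normal((2, 1764)).astype(np.float32)  # batch of 2

h = np.maximum(x @ W1, 0) @ W2          # encode: (2, 300)
x_recon = np.maximum(h @ W3, 0) @ W4    # decode: (2, 1764)

# Reconstruction has the same shape as the input, so MSE is well-defined.
assert x_recon.shape == x.shape
```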

You can also refer to the official Variational Auto Encoder example for the next step.

corochann
  • thanks for pointing out the issue in the model section..and how to define activation = relu here? – TulakHord Apr 25 '19 at 07:51
  • Thanks again..however the original problem still persists...now I tried with a much smaller sized image..28*21 = 588 and shape (21,28,3). The error: – TulakHord Apr 25 '19 at 09:15
  • 1
    Can you edit your question to add your next error on the bottom of your question? (please do not delete original question, just add next error). – corochann Apr 25 '19 at 09:17
  • https://stackoverflow.com/questions/55844644/stacked-autoencoder is this next error? – corochann Apr 25 '19 at 09:17
  • I have updated the code and the problem statement...the link you sent is a different problem..although I would like your help in that one too if possible. – TulakHord Apr 25 '19 at 09:39
  • 1
    I updated my answer again. It seems you are using newer Chainer version, in that case you do not need `train` argument. – corochann Apr 26 '19 at 00:51