I am new to MXNet (I am using it in Python 3).
Their tutorial series encourages you to define your own gluon
blocks.
So let's say this is your block (a common convolution structure):
import mxnet as mx

class CNN1D(mx.gluon.Block):
    def __init__(self, **kwargs):
        super(CNN1D, self).__init__(**kwargs)
        with self.name_scope():
            self.cnn = mx.gluon.nn.Conv1D(10, 1)
            self.bn = mx.gluon.nn.BatchNorm()
            self.ramp = mx.gluon.nn.Activation(activation='relu')

    def forward(self, x):
        x = mx.nd.relu(self.cnn(x))
        x = mx.nd.relu(self.bn(x))
        x = mx.nd.relu(self.ramp(x))
        return x
This mirrors the structure of their example.
What is the difference between mx.nd.relu
and mx.gluon.nn.Activation?
Should it be
x = self.ramp(x)
instead of
x = mx.nd.relu(self.ramp(x))
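(One thing I did notice: ReLU is idempotent, i.e. relu(relu(x)) == relu(x), so the extra mx.nd.relu around self.ramp(x) should at least be redundant rather than change the output. A quick sketch of that property, using numpy as a stand-in for mx.nd:)

```python
import numpy as np

def relu(x):
    # elementwise max(x, 0), the same function ReLU computes
    return np.maximum(x, 0)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])

# applying relu twice gives the same result as applying it once
print(np.array_equal(relu(relu(x)), relu(x)))  # True
print(relu(x).tolist())  # [0.0, 0.0, 0.0, 1.5, 3.0]
```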