I have a dataset composed of two images per observation. The images have shape (1, 128, 118), they are greyscale, and there are 11 classes to classify. What's the best way to approach this with a CNN? How can I optimally choose, for example, the number of layers, whether to pad or not, the stride shape, and how many pooling layers to use? Is max pooling better, or average pooling?
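For concreteness, the two pooling variants I'm weighing look like this in CNTK (the filter and stride shapes here are just placeholders):

# Both exist in cntk.layers; only the reduction differs:
# MaxPooling keeps the strongest activation in each window,
# AveragePooling averages over the window.
max_pool = C.layers.MaxPooling(filter_shape=(2, 2), strides=(2, 2))
avg_pool = C.layers.AveragePooling(filter_shape=(2, 2), strides=(2, 2))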
This is the current configuration of my model:
import cntk as C

def create_model(features):
    # Defaults applied to every layer below: Glorot init, ReLU, padding on
    with C.layers.default_options(init=C.glorot_uniform(),
                                  activation=C.ops.relu, pad=True):
        h = features
        h = C.layers.Convolution2D(filter_shape=(5, 5), num_filters=8,
                                   strides=(2, 2), pad=True,
                                   name='first_conv')(h)
        h = C.layers.AveragePooling(filter_shape=(5, 5), strides=(2, 2))(h)
        h = C.layers.Convolution2D(filter_shape=(5, 5), num_filters=16,
                                   pad=True)(h)
        h = C.layers.AveragePooling(filter_shape=(5, 5), strides=(2, 2))(h)
        h = C.layers.Convolution2D(filter_shape=(5, 5), num_filters=32,
                                   pad=True)(h)
        h = C.layers.AveragePooling(filter_shape=(5, 5), strides=(2, 2))(h)
        h = C.layers.Dense(96)(h)
        h = C.layers.Dropout(dropout_rate=0.5)(h)
        # No activation on the last layer: the softmax is applied by the loss
        r = C.layers.Dense(num_output_classes, activation=None,
                           name='classify')(h)
    return r
z = create_model(x)
# Print the output shapes / parameters of different components
print("Output Shape of the first convolution layer:", z.first_conv.shape)
print("Bias value of the last dense layer:", z.classify.b.value)
I've been experimenting with the configuration: changing parameter values, adding and removing layers. But my CNN doesn't seem to be learning from the data; in the best case it converges to a certain point, then hits a wall and the error stops decreasing.
I have found that the learning_rate and num_minibatches_to_train parameters are important. I currently have learning_rate = 0.2 and num_minibatches_to_train = 128, and I'm using sgd as the learner. Here's a sample of my latest output:
Minibatch: 0, Loss: 2.4097, Error: 95.31%
Minibatch: 100, Loss: 2.3449, Error: 95.31%
Minibatch: 200, Loss: 2.3751, Error: 90.62%
Minibatch: 300, Loss: 2.2813, Error: 78.12%
Minibatch: 400, Loss: 2.3478, Error: 84.38%
Minibatch: 500, Loss: 2.3086, Error: 87.50%
Minibatch: 600, Loss: 2.2518, Error: 84.38%
Minibatch: 700, Loss: 2.2797, Error: 82.81%
Minibatch: 800, Loss: 2.3234, Error: 84.38%
Minibatch: 900, Loss: 2.2542, Error: 81.25%
Minibatch: 1000, Loss: 2.2579, Error: 85.94%
Minibatch: 1100, Loss: 2.3469, Error: 85.94%
Minibatch: 1200, Loss: 2.3334, Error: 84.38%
Minibatch: 1300, Loss: 2.3143, Error: 85.94%
Minibatch: 1400, Loss: 2.2934, Error: 92.19%
Minibatch: 1500, Loss: 2.3875, Error: 85.94%
Minibatch: 1600, Loss: 2.2926, Error: 90.62%
Minibatch: 1700, Loss: 2.3220, Error: 87.50%
Minibatch: 1800, Loss: 2.2693, Error: 87.50%
Minibatch: 1900, Loss: 2.2864, Error: 84.38%
Minibatch: 2000, Loss: 2.2678, Error: 79.69%
Minibatch: 2100, Loss: 2.3221, Error: 92.19%
Minibatch: 2200, Loss: 2.2033, Error: 87.50%
Minibatch: 2300, Loss: 2.2493, Error: 87.50%
Minibatch: 2400, Loss: 2.4446, Error: 87.50%
Minibatch: 2500, Loss: 2.2676, Error: 85.94%
Minibatch: 2600, Loss: 2.3562, Error: 85.94%
Minibatch: 2700, Loss: 2.3290, Error: 82.81%
Minibatch: 2800, Loss: 2.3767, Error: 87.50%
Minibatch: 2900, Loss: 2.2684, Error: 76.56%
Minibatch: 3000, Loss: 2.3365, Error: 90.62%
Minibatch: 3100, Loss: 2.3369, Error: 90.62%
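For completeness, this is roughly how I wire up the criterion and the sgd learner (a sketch of the standard CNTK 2.x Trainer setup; the label variable y and its name are mine):

import cntk as C

y = C.input_variable(num_output_classes)           # one-hot labels
loss = C.cross_entropy_with_softmax(z, y)          # training criterion
error = C.classification_error(z, y)               # the metric printed above

lr_schedule = C.learning_rate_schedule(0.2, C.UnitType.minibatch)
learner = C.sgd(z.parameters, lr_schedule)
trainer = C.Trainer(z, (loss, error), [learner])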
Any suggestions for improving my results? I'm open to any hints or avenues of exploration.
Thanks in advance!