I'm working on a custom multi-digit number recognition problem. I only have 1- and 2-digit numbers. I am using a VGG16 model with two heads, one per digit, to avoid having 100 classes.
The model looks like this:
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Flatten, Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

input_shape = (256, 96, 3)
base_model = VGG16(weights='imagenet', include_top=False, input_shape=input_shape)
xo = base_model.output
x = base_model.input
flat = Flatten(name='flat')(xo)
h1 = Dense(1024, activation='relu', name='first_hidden_layer')(flat)
d1 = Dropout(0.5, name='first_hidden_dropout')(h1)
h2 = Dense(1024, activation='relu', name='second_hidden_layer')(d1)
d2 = Dropout(0.5, name='second_hidden_dropout')(h2)
o_digit1 = Dense(11, activation='softmax', name='digit1_classification')(d2)
o_digit2 = Dense(11, activation='softmax', name='digit2_classification')(d2)
model = Model(inputs=x, outputs=[o_digit1, o_digit2])
opt = Adam(learning_rate=0.0001)
model.compile(optimizer=opt,
              loss='categorical_crossentropy',
              metrics={'digit1_classification': 'accuracy',
                       'digit2_classification': 'accuracy'},
              loss_weights={'digit1_classification': 0.5,
                            'digit2_classification': 0.5})
I would like to build a custom metric to pass to model.compile that calculates the accuracy of the full number. Normally, when you build your own metric function, you pass y_true and y_pred to it.
For example:

def my_metric1(y_true, y_pred):
    return calculations(y_true, y_pred)
I could use my_metric1 to calculate whatever I want on each head individually, but what I want is a metric that calculates the accuracy of the full number.
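To make "per-head" concrete, this is what a my_metric1-style metric would compute for one digit head, sketched here in plain NumPy rather than Keras backend ops (the function name and shapes are my own illustration):

```python
import numpy as np

def digit_accuracy(y_true, y_pred):
    # y_true: one-hot labels, shape (batch, 11)
    # y_pred: softmax outputs, shape (batch, 11)
    # Fraction of samples where the predicted class matches the label.
    return np.mean(np.argmax(y_true, axis=1) == np.argmax(y_pred, axis=1))
```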
Something of this sort:

def my_metric2(y_pred1, y_true1, y_pred2, y_true2):
    return calculations2(y_pred1, y_true1, y_pred2, y_true2)
Here y_pred1, y_true1, y_pred2, y_true2
are predictions and true values for each digit individually.
How can I achieve that?
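For reference, this is the calculation I want the metric to perform, sketched offline in NumPy over both heads' predictions (a sample counts as correct only if both digits are correct; the function name and shapes are my own illustration):

```python
import numpy as np

def full_number_accuracy(y_true1, y_pred1, y_true2, y_pred2):
    # One-hot labels and softmax outputs for each head, shape (batch, 11).
    d1_ok = np.argmax(y_true1, axis=1) == np.argmax(y_pred1, axis=1)
    d2_ok = np.argmax(y_true2, axis=1) == np.argmax(y_pred2, axis=1)
    # A sample is correct only when both digit heads are correct.
    return np.mean(d1_ok & d2_ok)
```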