
How do you choose the best pre-trained model (for transfer learning) for a particular image-processing problem? Does the choice depend on the type of problem (medical, biological, etc.), or on the nature of the images themselves, for example objects that differ only in shape but not in color, or only in color but not in shape, and other situations like that?

Soroush Mirzaei

1 Answer


I find that the EfficientNet models work very well in almost all applications. I have used them on hundreds of classification datasets with excellent results. It is also important to use a callback to adjust the learning rate. The code below is a function that creates one of four EfficientNet models (B0, B3, B5, or B7):

import tensorflow as tf
from tensorflow.keras.layers import BatchNormalization, Dense, Dropout
from tensorflow.keras.models import Model
from tensorflow.keras import regularizers
from tensorflow.keras.optimizers import Adamax

def make_model(img_size, lr, class_count, mod_num=3):
    # img_size: (height, width) tuple, lr: initial learning rate,
    # class_count: number of classes, mod_num: EfficientNet variant (0, 3, 5, anything else -> B7)
    img_shape = (img_size[0], img_size[1], 3)
    if mod_num == 0:
        base_model = tf.keras.applications.efficientnet.EfficientNetB0(
            include_top=False, weights='imagenet', input_shape=img_shape, pooling='max')
        msg = 'Creating EfficientNet B0 model'
    elif mod_num == 3:
        base_model = tf.keras.applications.efficientnet.EfficientNetB3(
            include_top=False, weights='imagenet', input_shape=img_shape, pooling='max')
        msg = 'Creating EfficientNet B3 model'
    elif mod_num == 5:
        base_model = tf.keras.applications.efficientnet.EfficientNetB5(
            include_top=False, weights='imagenet', input_shape=img_shape, pooling='max')
        msg = 'Creating EfficientNet B5 model'
    else:
        base_model = tf.keras.applications.efficientnet.EfficientNetB7(
            include_top=False, weights='imagenet', input_shape=img_shape, pooling='max')
        msg = 'Creating EfficientNet B7 model'
    print(msg)
    base_model.trainable = True  # fine-tune the entire backbone
    x = base_model.output
    x = BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(x)
    x = Dense(256, kernel_regularizer=regularizers.l2(0.016),
              activity_regularizer=regularizers.l1(0.006),
              bias_regularizer=regularizers.l1(0.006), activation='relu')(x)
    x = Dropout(rate=0.4, seed=123)(x)
    output = Dense(class_count, activation='softmax')(x)
    model = Model(inputs=base_model.input, outputs=output)
    model.compile(Adamax(learning_rate=lr), loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

Typical use is:

lr = 0.001
img_size = (256, 256)
class_count = 10  # set this to the number of classes in your dataset
model = make_model(img_size, lr, class_count)  # uses the B3 model by default

I recommend using the callbacks below:

rlronp=tf.keras.callbacks.ReduceLROnPlateau(
                                           monitor='val_loss',
                                           factor=0.4,
                                           patience=2,
                                           verbose=1,
                                           mode='auto')

estop=tf.keras.callbacks.EarlyStopping(
                                     monitor="val_loss",
                                     patience=4,
                                     verbose=1,
                                     mode="auto",
                                     restore_best_weights=True)
callbacks=[rlronp, estop]

In model.fit, set callbacks=callbacks.
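As a minimal sketch, assuming your training and validation data are already prepared (the names train_ds and val_ds below are placeholders for whatever tf.data datasets or generators you use, and the epoch count is arbitrary), the fit call might look like this:

history = model.fit(
    train_ds,                # placeholder: your training dataset or generator
    validation_data=val_ds,  # placeholder: your validation dataset or generator
    epochs=40,               # arbitrary upper bound; early stopping will halt sooner
    callbacks=callbacks,     # rlronp and estop defined above
    verbose=1)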

Gerry P