
I am trying to solve a problem for a deep learning class, and the block of code I have to modify looks like this:

def alpaca_model(image_shape=IMG_SIZE, data_augmentation=data_augmenter()):
    """ Define a tf.keras model for binary classification out of the MobileNetV2 model
    Arguments:
        image_shape -- Image width and height
        data_augmentation -- data augmentation function
    Returns:
        tf.keras.model
    """
    
    
    input_shape = image_shape + (3,)
    
    # START CODE HERE


    base_model=tf.keras.applications.MobileNetV2(input_shape=input_shape, include_top=False, weights="imagenet")

    # Freeze the base model by making it non trainable
    base_model.trainable = None 

    # create the input layer (Same as the imageNetv2 input size)
    inputs = tf.keras.Input(shape=None) 
    
    # apply data augmentation to the inputs
    x = None
    
    # data preprocessing using the same weights the model was trained on
    x = preprocess_input(None) 
    
    # set training to False to avoid keeping track of statistics in the batch norm layer
    x = base_model(None, training=None) 
    
    # Add the new Binary classification layers
    # use global avg pooling to summarize the info in each channel
    x = None()(x) 
    #include dropout with probability of 0.2 to avoid overfitting
    x = None(None)(x)
        
    # create a prediction layer with one neuron (as a classifier only needs one)
    prediction_layer = None
    
    # END CODE HERE
    
    outputs = prediction_layer(x) 
    model = tf.keras.Model(inputs, outputs)
    
    return model

IMG_SIZE = (160, 160)
def data_augmentation():
    data = tl.keras.Sequential()
    data.add(RandomFlip("horizontal")
    data.add(RandomRotation(0.2)
    return data

I tried three times starting from that template, following the directions, with a lot of trial and error. I don't know what I am missing. I have gotten it to the point where it trains a model and I can get the summary of it, but the summary is not correct.

Please help, I am going crazy trying to figure this out. I know it is super simple, but it's the simple problems that trip me up.

  • Edit the docstring at the top. It makes the whole code render as a string. – Anurag Dhadse May 28 '21 at 12:25
  • If you follow some tutorial then show the link in the question (not in a comment). If you get errors then show the full error messages in the question (not in a comment). And if you have any other information then put it in the question - we can't run your code and we can't read your mind, so you have to put all the details in the question. – furas May 28 '21 at 15:27
  • If you got a wrong summary then you have to show the data in the question, the summary you get, and the expected summary. We can't help you if we can't test it and see the problem. – furas May 28 '21 at 15:29
  • Why do you use `None()`? It makes no sense. `None` is a special value in Python and you can't call it as a function. It should always give you an error message. – furas May 28 '21 at 15:29

3 Answers


You might have to use the code below to run your algorithm:

input_shape = image_shape + (3,)

### START CODE HERE

base_model = tf.keras.applications.MobileNetV2(input_shape=input_shape,
                                               include_top=False, # <== Important!!!!
                                               weights='imagenet') # From imageNet

# Freeze the base model by making it non trainable
base_model.trainable = False 

# create the input layer (Same as the imageNetv2 input size)
inputs = tf.keras.Input(shape=input_shape) 

# apply data augmentation to the inputs
x = data_augmentation(inputs)

# data preprocessing using the same weights the model was trained on
x = preprocess_input(x) 

# set training to False to avoid keeping track of statistics in the batch norm layer
x = base_model(x, training=False) 

# Add the new Binary classification layers
# use global avg pooling to summarize the info in each channel
x = tf.keras.layers.GlobalAveragePooling2D()(x)
#include dropout with probability of 0.2 to avoid overfitting
x = tf.keras.layers.Dropout(0.2)(x)
    
# create a prediction layer with one neuron (as a classifier only needs one)
prediction_layer = tf.keras.layers.Dense(1, activation='linear')(x)

### END CODE HERE

outputs = prediction_layer
model = tf.keras.Model(inputs, outputs)
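Note that because the final `Dense(1)` layer uses a linear activation, the model outputs raw logits rather than probabilities, so a binary cross-entropy loss must be created with `from_logits=True` when you compile. A minimal sketch of that compile step (the optimizer and learning rate here are my assumptions, not part of the assignment; a tiny stand-in model is used so the snippet is self-contained):

```python
import tensorflow as tf

# Stand-in for the alpaca model built above: any model whose last layer is
# Dense(1) with a linear activation outputs logits, not probabilities.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1, activation='linear'),
])

# from_logits=True tells the loss to apply the sigmoid itself.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
              metrics=['accuracy'])
```

If you instead use `activation='sigmoid'` on the prediction layer, the loss should be created with `from_logits=False` (the default).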
0

I had the same issue, but my mistake was putting `(x)` on the dense layer before the end. Here is the code that worked for me:

def alpaca_model(image_shape=IMG_SIZE, data_augmentation=data_augmenter()):
    ''' Define a tf.keras model for binary classification out of the MobileNetV2 model
    Arguments:
        image_shape -- Image width and height
        data_augmentation -- data augmentation function
    Returns:
        tf.keras.model
    '''

    input_shape = image_shape + (3,)

    ### START CODE HERE

    base_model = tf.keras.applications.MobileNetV2(input_shape=input_shape,
                                                   include_top=False, # <== Important!!!!
                                                   weights='imagenet') # From imageNet

    # Freeze the base model by making it non-trainable
    base_model.trainable = False

    # create the input layer (same as the MobileNetV2 input size)
    inputs = tf.keras.Input(shape=input_shape)

    # apply data augmentation to the inputs
    x = data_augmentation(inputs)

    # data preprocessing using the same weights the model was trained on
    x = preprocess_input(x)

    # set training to False to avoid keeping track of statistics in the batch norm layer
    x = base_model(x, training=False)

    # Add the new binary classification layers
    # use global avg pooling to summarize the info in each channel
    x = tfl.GlobalAveragePooling2D()(x)
    # include dropout with probability of 0.2 to avoid overfitting
    x = tfl.Dropout(0.2)(x)

    # create a prediction layer with one neuron (as a classifier only needs one)
    prediction_layer = tfl.Dense(1, activation='linear')

    ### END CODE HERE

    outputs = prediction_layer(x)
    model = tf.keras.Model(inputs, outputs)

    return model
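As a sanity check on `model.summary()`: with the base frozen, only the new `Dense` head contributes trainable parameters. MobileNetV2's final feature map has 1,280 channels, so after global average pooling the head has 1,280 weights plus 1 bias, i.e. 1,281 trainable parameters. A stripped-down sketch of that check (augmentation and preprocessing are omitted since they add no parameters, and `weights=None` is my substitution so it runs without downloading the ImageNet weights; the counts are the same either way):

```python
import numpy as np
import tensorflow as tf

IMG_SIZE = (160, 160)
input_shape = IMG_SIZE + (3,)

# Random init instead of weights='imagenet' so no download is needed.
base_model = tf.keras.applications.MobileNetV2(input_shape=input_shape,
                                               include_top=False, weights=None)
base_model.trainable = False

inputs = tf.keras.Input(shape=input_shape)
x = base_model(inputs, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
x = tf.keras.layers.Dropout(0.2)(x)
outputs = tf.keras.layers.Dense(1, activation='linear')(x)
model = tf.keras.Model(inputs, outputs)

# Only the Dense head is trainable: 1280 kernel weights + 1 bias = 1281
trainable = sum(int(np.prod(w.shape)) for w in model.trainable_weights)
print(trainable)
```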
Fabian Pino
0

Under `def data_augmentation`, your brackets are not well closed.
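For reference, a version of that helper with the parentheses closed and `tl` corrected to `tf` (named `data_augmenter` here to match the template's default argument; in recent TensorFlow versions, 2.6+, the preprocessing layers live under `tf.keras.layers`):

```python
import tensorflow as tf

def data_augmenter():
    # Random horizontal flips plus rotations of up to ±20% of a full circle
    data = tf.keras.Sequential([
        tf.keras.layers.RandomFlip("horizontal"),
        tf.keras.layers.RandomRotation(0.2),
    ])
    return data
```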

  • Yes, but I think that's a simple typo in the question, not in their real code. Note how they don't talk about an error message but just about not being able to get results. – Eric Aya Jul 26 '21 at 07:41