
This is my model:

engine1 = tf.keras.applications.Xception(
    # Exclude the classification head of the pre-trained Xception model
    include_top=False,
    # Use ImageNet weights
    weights='imagenet',
    # Define the input shape as 256x256x3
    input_shape=(256, 256, 3),
)
    
x1 = tf.keras.layers.GlobalAveragePooling2D(name='avg_pool')(engine1.output)
x1 = tf.keras.layers.Dropout(0.75)(x1)
x1 = tf.keras.layers.BatchNormalization(
    axis=-1,
    momentum=0.99,
    epsilon=0.01,
    center=True,
    scale=True,
    beta_initializer="zeros",
    gamma_initializer="ones",
    moving_mean_initializer="zeros",
    moving_variance_initializer="ones",
)(x1)
out1 = tf.keras.layers.Dense(3, activation='softmax', name='dense_output')(x1)


# Build the Keras model
model1 = tf.keras.models.Model(inputs=engine1.input, outputs=out1)

# Compile the model
model1.compile(
    # Adam optimizer with learning rate 3e-4
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-4),
    # optimizer=SGD(learning_rate=0.001, decay=1e-6, momentum=0.99, nesterov=True),
    # Categorical cross-entropy loss (labels are one-hot encoded)
    # loss=tf.keras.losses.SparseCategoricalCrossentropy(),
    loss='categorical_crossentropy',
    # Track accuracy
    metrics=['accuracy'],
)
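Since the end goal is raw logits (for distillation), one option for a future training run is to give the model a linear head and let the loss apply softmax internally via `from_logits=True`. A minimal sketch, assuming the same `x1` tensor as above (the names `out1_logits` and `model_logits` are introduced here for illustration):

# Hypothetical variant: linear head, softmax handled inside the loss
out1_logits = tf.keras.layers.Dense(3, activation=None, name='dense_logits')(x1)
model_logits = tf.keras.models.Model(inputs=engine1.input, outputs=out1_logits)
model_logits.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-4),
    # from_logits=True makes the loss apply softmax itself,
    # so the model can output raw logits directly
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'],
)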

I want logits, so I wrote this:

logits = model1(X_test)
probs = tf.nn.softmax(logits)

I am getting this error:

ResourceExhaustedError: OOM when allocating tensor with shape[1288,64,125,125] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc [Op:Conv2D]

How can I fix this and get the logits? I want to apply knowledge distillation after getting the logits. My test set has 3 classes and 60 samples, so the logit matrix should be a 60 × 3 matrix.
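The direct call `model1(X_test)` pushes the entire test set through the network in a single forward pass, so the GPU has to hold the convolutional activations for every sample at once. A minimal sketch of a workaround (the batch size is an assumption; shrink it further if the OOM persists):

# predict splits the input into batches (32 by default), so only
# one batch of activations lives on the GPU at a time
probs = model1.predict(X_test, batch_size=8)

Note that `model1` ends in a softmax layer here, so this returns probabilities rather than logits; getting logits is addressed in the edit below.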

Edit

To get the logits (1288 × 3), I changed the output layer of my model:

out1 = tf.keras.layers.Dense(3, activation='linear', name='dense_output')(x1)

Now I am getting logits:

y_pred_logits = model1.predict(X_test)

[screenshot of the logits returned by `model1.predict`]
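Two caveats on this approach. First, `predict` batches its input (32 samples by default), which is likely why the OOM from the direct call went away. Second, swapping the activation defines a new output layer, so its logits are only meaningful if the model is (re)trained with the linear head. An alternative that reuses the already-trained softmax model is sketched here (`feature_model`, `feats`, `kernel`, and `bias` are names introduced for illustration):

# Sub-model that outputs the input tensor of the trained dense layer
feature_model = tf.keras.Model(model1.input, model1.get_layer('dense_output').input)
feats = feature_model.predict(X_test, batch_size=8)

# Apply the trained dense weights without the softmax to recover raw logits
kernel, bias = model1.get_layer('dense_output').get_weights()
y_pred_logits = feats @ kernel + bias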

I want to apply softmax to this. My softmax function looks like this:

import numpy as np

def softmax(x):
    """Compute softmax values for each set of scores in x."""
    e_x = np.exp(x)
    # Note: axis=1 without keepdims drops the reduced axis,
    # so the divisor has shape (N,) rather than (N, 1)
    return e_x / e_x.sum(axis=1)

But when I do this:

y_pred_logits_activated = softmax(y_pred_logits)

I get an error (a NumPy broadcasting failure: `e_x` has shape `(N, 3)` while `e_x.sum(axis=1)` has shape `(N,)`, so the division cannot broadcast).

How can I fix this, and is this method correct? Further, I want to apply the softmax to these logits:

[screenshot of a logits matrix]
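A standard numerically stable formulation keeps the reduced axis so the division broadcasts; a sketch (the temperature `T` is an addition here, relevant only if the logits are later softened for distillation):

import numpy as np

def softmax(x, T=1.0):
    """Row-wise softmax with optional temperature T (T=1 is the plain softmax)."""
    z = x / T
    # Subtracting the row max does not change the result but avoids overflow in exp
    z = z - z.max(axis=1, keepdims=True)
    e_z = np.exp(z)
    # keepdims=True keeps shape (N, 1), so the division broadcasts over the classes
    return e_z / e_z.sum(axis=1, keepdims=True)

y_pred_probs = softmax(y_pred_logits)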

  • Cross-posted: https://stackoverflow.com/q/74252515/781723, https://datascience.stackexchange.com/q/115726/8560. Please [do not post the same question on multiple sites](https://meta.stackexchange.com/q/64068). – D.W. May 26 '23 at 06:01
