I am using Keras to distinguish background from signal images.
[Example images: a background example, a second background example, a signal example]
The model used here is
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(472, 696)),  # flatten each 472x696 image into a vector
    tf.keras.layers.Dense(128, activation='relu'),    # hidden layer: choose how many neurons to use
    tf.keras.layers.Dense(2, activation='sigmoid')])  # output layer: one unit per class
model.compile(optimizer='adam', loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy'])
I train with

model.fit(train_images, train_labels, epochs=6)  # train on the data
The training often gets stuck in what looks like a local minimum, with accuracy of roughly 0.5:
Epoch 1/6 - loss: 45.3655 - accuracy: 0.4833
Epoch 2/6 - loss: 0.6934 - accuracy: 0.4855
Epoch 3/6 - loss: 0.6932 - accuracy: 0.4959
Epoch 4/6 - loss: 0.6931 - accuracy: 0.5145
Epoch 5/6 - loss: 0.6930 - accuracy: 0.5145
Epoch 6/6 - loss: 0.6929 - accuracy: 0.5145
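For what it's worth, the plateau at a loss of about 0.693 looks like exactly the cross-entropy you would get if the network effectively assigned probability 0.5 to the true class for every example, since -ln(0.5) = ln(2) ≈ 0.6931. A quick sanity check in Python:

import math

# Cross-entropy when the model assigns probability 0.5 to the true class
# for every example; this matches the plateau in the training log above.
chance_loss = -math.log(0.5)
print(chance_loss)  # ~0.6931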
Am I using the correct model?
I have tried changing the activation from "relu" to "LeakyReLU" and switching the optimizer to "sgd", but neither gave any significant improvement.
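For reference, the LeakyReLU / SGD variant I tried looked roughly like this (a sketch with the same layer sizes as above; the LeakyReLU slope was left at its default):

# Sketch of the variant I tried: LeakyReLU instead of 'relu' and SGD instead of Adam.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(472, 696)),
    tf.keras.layers.Dense(128),
    tf.keras.layers.LeakyReLU(),                     # leaky activation applied as its own layer
    tf.keras.layers.Dense(2, activation='sigmoid')])
model.compile(optimizer='sgd', loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy'])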