
The code is as follows:

from tensorflow.keras.applications.mobilenet import preprocess_input

**Import of Datasets**

from tensorflow.keras.datasets import cifar10

Load the dataset

(train_images, train_labels), (test_images, test_labels) = cifar10.load_data()

Normalize pixel values to be between 0 and 1

train_images, test_images = train_images / 255.0, test_images / 255.0

Define class names and visualize sample images

class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
import matplotlib.pyplot as plt

plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i])
    plt.xlabel(class_names[train_labels[i][0]])
plt.show()  # call once, after the loop, so all 25 subplots appear in one figure

Creating the Convolutional Base

from tensorflow.keras import Sequential, layers

m1 = Sequential()
m1.add(layers.Conv2D(128, (3, 3), activation='relu', input_shape=(256, 256, 3)))
m1.add(layers.MaxPooling2D((2, 2)))
m1.add(layers.Conv2D(64, (3, 3), activation='relu'))
m1.add(layers.MaxPooling2D((2, 2)))
m1.add(layers.Conv2D(32, (3, 3), activation='relu'))

m1.summary()
**Add Dense Layer On Top**
m1.add(layers.Flatten())
m1.add(layers.Dense(64, activation='relu'))
m1.add(layers.Dense(10))
m1.summary()

Compile and Train the CNN Architecture

The error occurs after this step:

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

m1 = Sequential()
m1.add(Conv2D(128, (3, 3), activation='relu', input_shape=(256, 256, 3)))
m1.add(MaxPooling2D(pool_size=(2, 2)))

m1.add(Conv2D(64, (3, 3), activation='relu'))
m1.add(MaxPooling2D(pool_size=(2, 2)))

m1.add(Conv2D(64, (3, 3), activation='relu'))
m1.add(MaxPooling2D(pool_size=(2, 2)))

m1.add(Flatten())
m1.add(Dense(32, activation='relu'))
m1.add(Dense(16, activation='relu'))
m1.add(Dense(10, activation='softmax'))

m1.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc'])  # integer labels, hence sparse_categorical_crossentropy

h1 = m1.fit(train_images, train_labels, validation_data=(test_images, test_labels), epochs=10)
  • And now you want to ask why shape (None, 32, 32, 3) is not the same as shape (None, 256, 256, 3)? – mkrieger1 Dec 25 '22 at 15:26
  • Actually I am quite new to this and wanted to know the issue – arjun Dec 25 '22 at 15:40
  • I am facing this error: Input 0 of layer "sequential_17" is incompatible with the layer: expected shape=(None, 256, 256, 3), found shape=(None, 32, 32, 3) – arjun Dec 25 '22 at 15:42

1 Answer


In your Sequential model, you are passing input_shape=(256,256,3) to the first layer, as shown below.

m1.add(layers.Conv2D(128, (3, 3), activation='relu', input_shape=(256,256,3)))

This instructs the model to accept images that are 256 pixels in both height and width, with 3 channels (RGB).

However, you are feeding the model the CIFAR-10 dataset, whose images have shape (32,32,3). Hence the error: the size of the images provided does not match the size the model expects.
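You can confirm this yourself by printing the dataset's shapes (a quick check, assuming TensorFlow is installed and the dataset can be downloaded):

```python
from tensorflow.keras.datasets import cifar10

# CIFAR-10 images are 32x32 RGB, not 256x256
(train_images, train_labels), (test_images, test_labels) = cifar10.load_data()
print(train_images.shape)  # (50000, 32, 32, 3)
print(test_images.shape)   # (10000, 32, 32, 3)
```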

Change the first layer of your model accordingly, i.e.

m1.add(layers.Conv2D(128, (3, 3), activation='relu', input_shape=(32,32,3)))

This should resolve the error.
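For completeness, here is a minimal sketch of the corrected model using your layer structure. To keep the check fast, it runs on random arrays with CIFAR-10's shape rather than the real dataset, and for one epoch only; substitute your normalized CIFAR-10 arrays and more epochs in practice:

```python
import numpy as np
from tensorflow.keras import Sequential, layers

# Dummy data with CIFAR-10's shape, just to confirm the shapes line up
x_train = np.random.rand(8, 32, 32, 3).astype('float32')
y_train = np.random.randint(0, 10, size=(8, 1))

m1 = Sequential()
m1.add(layers.Conv2D(128, (3, 3), activation='relu', input_shape=(32, 32, 3)))  # 32, not 256
m1.add(layers.MaxPooling2D((2, 2)))
m1.add(layers.Conv2D(64, (3, 3), activation='relu'))
m1.add(layers.MaxPooling2D((2, 2)))
m1.add(layers.Flatten())
m1.add(layers.Dense(32, activation='relu'))
m1.add(layers.Dense(10, activation='softmax'))
m1.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc'])

m1.fit(x_train, y_train, epochs=1, verbose=0)  # fits without a shape error
```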

You will also notice a None in the reported shape, i.e. (None, 256, 256, 3). It represents the batch size: models process images in batches, taking more than one image at a time for training and prediction. Since the batch size is not fixed in advance, it is shown as None, and it is left to the developer to choose whatever batch size fits in CPU or GPU memory.
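A small sketch to illustrate the None batch dimension (the tiny model and random inputs here are only for demonstration): the same model happily accepts batches of different sizes.

```python
import numpy as np
from tensorflow.keras import Sequential, layers

model = Sequential([
    layers.Conv2D(8, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])

# The leading None means "any batch size": both calls below work
one_image = np.random.rand(1, 32, 32, 3).astype('float32')
five_images = np.random.rand(5, 32, 32, 3).astype('float32')
print(model.predict(one_image, verbose=0).shape)    # (1, 10)
print(model.predict(five_images, verbose=0).shape)  # (5, 10)
```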
