
I'm trying to classify mushrooms using this dataset. I'm using tf.keras.utils.image_dataset_from_directory to import the dataset.

How I handle the dataset

import pathlib
import tensorflow as tf

DATADIR = "/content/drive/MyDrive/Mushrooms/dataset"
data_dir = pathlib.Path(DATADIR)

batch_size = 32
img_height = 200
img_width = 200

# 80/20 train/validation split; the shared seed keeps the two subsets disjoint
training_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=555,
    image_size=(img_height, img_width),
    batch_size=batch_size
)

validation_ds = tf.keras.utils.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=555,
    image_size=(img_height, img_width),
    batch_size=batch_size
)

class_names = training_ds.class_names
number_of_classes = len(class_names)  # used later for the output layer
print(class_names)

# Grab a single batch to confirm the shapes: (32, 200, 200, 3) and (32,)
image_shape = None
for image_batch, labels_batch in training_ds:
  image_shape = image_batch.shape
  print(image_batch.shape)
  print(labels_batch.shape)
  break

print(image_shape)

AUTOTUNE = tf.data.AUTOTUNE
# Reassign so the cached/prefetched pipelines are the ones actually passed to fit()
training_ds = training_ds.cache().prefetch(buffer_size=AUTOTUNE)
validation_ds = validation_ds.cache().prefetch(buffer_size=AUTOTUNE)



How I train the model

from tensorflow.keras.layers import Rescaling, Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = tf.keras.Sequential([
  Rescaling(1./255),                       # normalise pixel values to [0, 1]
  Conv2D(256, 3, activation='relu'),
  MaxPooling2D(pool_size=(2, 2)),
  Conv2D(128, 3, activation='relu'),
  MaxPooling2D(pool_size=(2, 2)),
  Conv2D(64, 3, activation='relu'),
  MaxPooling2D(pool_size=(2, 2)),
  Flatten(),
  Dense(128, activation='relu'),
  Dropout(0.5),
  Dense(number_of_classes, activation='softmax')
])

model.build(input_shape=(None, img_height, img_width, 3))  # None allows any batch size

model.compile(
  optimizer='adam',
  loss=tf.keras.losses.SparseCategoricalCrossentropy(),
  metrics=['sparse_categorical_accuracy']
)

model.summary()

model.fit(training_ds, validation_data=validation_ds, epochs=3)


I am currently getting the error

InvalidArgumentError: Graph execution error:

2 root error(s) found.
  (0) INVALID_ARGUMENT:  jpeg::Uncompress failed. Invalid JPEG data or crop window.
     [[{{node decode_image/DecodeImage}}]]
     [[IteratorGetNext]]
     [[IteratorGetNext/_4]]
  (1) INVALID_ARGUMENT:  jpeg::Uncompress failed. Invalid JPEG data or crop window.
     [[{{node decode_image/DecodeImage}}]]
     [[IteratorGetNext]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_3271]


I have tried re-downloading the dataset, since I thought it was corrupt. I now suspect it is some sort of resizing issue. I have previously got this dataset working by converting the images to NumPy arrays first, but I wanted to try working with the images directly.
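
Before falling back to the NumPy conversion, one thing I could do is scan the folder for files TensorFlow cannot decode. Below is a minimal sketch of that check (the extension filter is an assumption about what my folders contain); as far as I can tell, tf.io.decode_image is the same decoder that image_dataset_from_directory uses internally:

import pathlib
import tensorflow as tf

DATADIR = "/content/drive/MyDrive/Mushrooms/dataset"

# Try to decode every image file eagerly; collect any path that raises
# the same InvalidArgumentError that shows up during training.
bad_files = []
for path in pathlib.Path(DATADIR).rglob("*"):
    if path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
        continue
    try:
        tf.io.decode_image(tf.io.read_file(str(path)))
    except tf.errors.InvalidArgumentError:
        bad_files.append(path)
        print("Undecodable image:", path)

print(len(bad_files), "bad file(s) found")

If this turns up a handful of bad files, deleting them (or skipping bad records with training_ds.apply(tf.data.experimental.ignore_errors()), which hides the problem rather than fixing it) should presumably get past the error.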
