I am trying to transfer-learn a pretrained MobileNet model on an AWS c5.large instance.
I first train (burn-in) only the last dense layer for a few epochs (I tried anywhere between 5 and 20; it does not seem to matter much).
After the burn-in period I want to train the full model. However, training stops after a couple of epochs without any error.
Earlier I tried without the burn-in period and that worked "fine-ish": it would typically crash the server after ~50 epochs (which is why I added the clipnorm, which did help a bit).
Any ideas on how to debug this are welcome.
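One idea I had for debugging is to log memory usage during training, to see whether the crash lines up with memory pressure. Below is a minimal sketch of such a callback (assuming psutil is installed; MemoryLogger is just a placeholder name, it is not part of my current code):

import psutil
from keras.callbacks import Callback

class MemoryLogger(Callback):
    # print process and system memory usage after every epoch
    def on_epoch_end(self, epoch, logs=None):
        rss_gb = psutil.Process().memory_info().rss / 1024 ** 3
        sys_pct = psutil.virtual_memory().percent
        print(f"epoch {epoch}: process RSS {rss_gb:.2f} GB, system memory {sys_pct:.0f}% used")

An instance of this would go into the callbacks list passed to fit_generator below.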
Console Output:
Total params: 3,239,114
Trainable params: 3,217,226
Non-trainable params: 21,888
_________________________________________________________________
Epoch 6/25
1/46 [..............................] - ETA: 9:22 - loss: 0.2123
2/46 [>.............................] - ETA: 7:46 - loss: 0.2028ubuntu@ip-XXX:~$ ls
Training Code:
# imports for this snippet; project-specific helpers (_mobilenet, pretrained,
# _DataGenerator, TensorBoardBatch, load_json, options, classes,
# last_activation, loss) come from elsewhere in the project
import datetime
import os

from keras.callbacks import ModelCheckpoint
from keras.layers import Dense, Dropout
from keras.models import Model
from keras.optimizers import Adam
from sklearn.model_selection import train_test_split

base_model = _mobilenet.MobileNet(
    input_shape=(224, 224, 3), include_top=False, pooling="avg"
)
if not options.mobile_net_weights:
    pretrained_weights = os.path.join(
        os.path.dirname(pretrained.__file__), "weights_mobilenet_aesthetic_0.07.hdf5"
    )
    base_model.load_weights(pretrained_weights, by_name=True)
# add dropout and dense layer
x = Dropout(0.6)(base_model.output)
x = Dense(units=classes, activation=last_activation)(x)
pretrained_model = Model(base_model.inputs, x)
# start training only dense layers
for layer in base_model.layers:
    layer.trainable = False
pretrained_model.compile(loss=loss, optimizer=Adam(lr=0.001, decay=0, clipnorm=1.0))
pretrained_model.summary()
# add path equal to image_id
labels = [dict(item, **{"path": item["image_id"]}) for item in load_json(labels_path)]
training, validation = train_test_split(labels, test_size=0.05, shuffle=True)
train_data_gen = _DataGenerator(
    training,
    batch_size=options.batch_size,
    base_dir=options.image_path,
    n_classes=classes,
    basenet_preprocess=_mobilenet.preprocess_input,
)
validation_data_gen = _DataGenerator(
    validation,
    batch_size=options.batch_size,
    base_dir=options.image_path,
    n_classes=classes,
    basenet_preprocess=_mobilenet.preprocess_input,
    training=False,
)
train_job_dir = f"train_jobs/{datetime.datetime.now().isoformat()}"
train_job_dir = os.path.join(options.results_path, train_job_dir)
tensorboard = TensorBoardBatch(log_dir=os.path.join(train_job_dir, "logs"))
model_save_name = "weights_{epoch:02d}_{val_loss:.3f}.hdf5"
model_file_path = os.path.join(train_job_dir, "weights", model_save_name)
if not os.path.exists(os.path.join(train_job_dir, "weights")):
    os.makedirs(os.path.join(train_job_dir, "weights"))
model_checkpointer = ModelCheckpoint(
    filepath=model_file_path,
    monitor="val_loss",
    verbose=1,
    save_best_only=True,
    save_weights_only=True,
)
pretrained_model.fit_generator(
    train_data_gen,
    # integer number of steps: train on roughly 1/10 of the data each epoch
    steps_per_epoch=len(training) // options.batch_size // 10,
    epochs=5,
    verbose=1,
    callbacks=[tensorboard, model_checkpointer],
    validation_data=validation_data_gen,
    validation_steps=len(validation) // options.batch_size,
)
# start training all layers
for layer in base_model.layers:
    layer.trainable = True
pretrained_model.compile(
    loss=loss, optimizer=Adam(lr=0.0001, decay=0.000023, clipnorm=1.0)
)
pretrained_model.summary()
pretrained_model.fit_generator(
    train_data_gen,
    steps_per_epoch=len(training) // options.batch_size // 10,
    epochs=25,
    initial_epoch=5,
    verbose=1,
    callbacks=[tensorboard, model_checkpointer],
    validation_data=validation_data_gen,
    validation_steps=len(validation) // options.batch_size,
)
Update and follow-up
The original problem seems to have been caused by too little available memory on the machine. I do have a follow-up question, though, somewhat unrelated to the original issue yet still related: when trying to use GPU acceleration I have been banging my head against the wall and can't seem to get it working.
Is there any good (logically structured and easy to follow) information out there on:
- Using Docker on a local machine to build a GPU-enabled image
- Installing all the relevant NVIDIA drivers on the GPU instance (what an insane version chaos)
- Running the Docker container (nvidia-docker2, nvidia-docker, or --runtime=nvidia?)
- What CUDA actually is and why I need it
- Why some sources I found suggest running CUDA inside Docker
When it seemed like I had some of it working (i.e. drivers set up, for some version) and had managed to build a GPU-enabled (i.e. tensorflow-gpu) Docker image, I got this error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:430: container init caused \"process_linux.go:413: running prestart hook 1 caused \\"error running hook: exit status 1, stdout: , stderr: exec command: [/usr/bin/nvidia-container-cli --load-kmods configure --ldconfig=@/sbin/ldconfig.real --device=all --compute --utility --require=cuda>=10.0 brand=tesla,driver>=384,driver<385 brand=tesla,driver>=410,driver<411 --pid=2113 /var/lib/docker/overlay2/4bf49d2555c40278b3249f73bf3d33484181f51b374b77b69a474fc39e37441b/merged]\\nnvidia-container-cli: requirement error: unsatisfied condition: driver >= 410\\n\\"\"": unknown.
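Once the container actually starts, my plan is to verify from inside it that TensorFlow sees the GPU, with something along these lines (a sketch, assuming a TF 1.x tensorflow-gpu image):

import tensorflow as tf
from tensorflow.python.client import device_lib

# should print True and list a GPU device if the runtime/driver setup works
print(tf.test.is_gpu_available())
print([d.name for d in device_lib.list_local_devices()])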