
I'm using AutoKeras's `StructuredDataClassifier` to train a model, and after a few trials the search fails with the following error.

Trial 3 Complete [00h 00m 23s]
val_accuracy: 0.9289383292198181

Best val_accuracy So Far: 0.9289383292198181  
Total elapsed time: 00h 01m 02s

Search: Running Trial #4

Value             |Best Value So Far |Hyperparameter  
True              |True              |structured_data_block_1/normalize  
False             |False             |structured_data_block_1/dense_block_1/use_batchnorm  
2                 |2                 |structured_data_block_1/dense_block_1/num_layers  
32                |32                |structured_data_block_1/dense_block_1/units_0  
0                 |0                 |structured_data_block_1/dense_block_1/dropout  
32                |32                |structured_data_block_1/dense_block_1/units_1  
0                 |0                 |classification_head_1/dropout  
adam              |adam              |optimizer  
0.01              |0.001             |learning_rate

Epoch 1/1000
148/148 [==============================] - 2s 9ms/step - loss: 0.1917 - accuracy: 0.9576 - val_loss: 0.5483 - val_accuracy: 0.9289
Epoch 2/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1572 - accuracy: 0.9628 - val_loss: 0.3410 - val_accuracy: 0.9289
Epoch 3/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1434 - accuracy: 0.9628 - val_loss: 0.3330 - val_accuracy: 0.9289
Epoch 4/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1414 - accuracy: 0.9628 - val_loss: 0.3014 - val_accuracy: 0.9289
Epoch 5/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1395 - accuracy: 0.9628 - val_loss: 0.3012 - val_accuracy: 0.9289
Epoch 6/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1334 - accuracy: 0.9628 - val_loss: 0.4439 - val_accuracy: 0.9289
Epoch 7/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1370 - accuracy: 0.9628 - val_loss: 0.2964 - val_accuracy: 0.9289
Epoch 8/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1309 - accuracy: 0.9628 - val_loss: 0.2949 - val_accuracy: 0.9289
Epoch 9/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1282 - accuracy: 0.9628 - val_loss: 0.2927 - val_accuracy: 0.9289
Epoch 10/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1301 - accuracy: 0.9628 - val_loss: 0.2937 - val_accuracy: 0.9289
Epoch 11/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1278 - accuracy: 0.9628 - val_loss: 0.3152 - val_accuracy: 0.9289
Epoch 12/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1270 - accuracy: 0.9628 - val_loss: 0.3062 - val_accuracy: 0.9289
Epoch 13/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1286 - accuracy: 0.9628 - val_loss: 0.3198 - val_accuracy: 0.9289
Epoch 14/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1268 - accuracy: 0.9628 - val_loss: 0.3318 - val_accuracy: 0.9289
Epoch 15/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1244 - accuracy: 0.9628 - val_loss: 0.3038 - val_accuracy: 0.9289
Epoch 16/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1239 - accuracy: 0.9628 - val_loss: 0.3050 - val_accuracy: 0.9289
Epoch 17/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1222 - accuracy: 0.9628 - val_loss: 0.3180 - val_accuracy: 0.9289
Epoch 18/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1239 - accuracy: 0.9628 - val_loss: 0.3298 - val_accuracy: 0.9289
Epoch 19/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1220 - accuracy: 0.9628 - val_loss: 0.2916 - val_accuracy: 0.9289
Epoch 20/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1203 - accuracy: 0.9630 - val_loss: 0.3548 - val_accuracy: 0.9289
Epoch 21/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1243 - accuracy: 0.9628 - val_loss: 0.3047 - val_accuracy: 0.9289
Epoch 22/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1208 - accuracy: 0.9633 - val_loss: 0.4035 - val_accuracy: 0.9289
Epoch 23/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1242 - accuracy: 0.9628 - val_loss: 0.3383 - val_accuracy: 0.9289
Epoch 24/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1181 - accuracy: 0.9635 - val_loss: 0.3576 - val_accuracy: 0.9289
Epoch 25/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1171 - accuracy: 0.9641 - val_loss: 0.3221 - val_accuracy: 0.9289
Epoch 26/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1149 - accuracy: 0.9635 - val_loss: 0.3314 - val_accuracy: 0.9289
Epoch 27/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1136 - accuracy: 0.9635 - val_loss: 0.3554 - val_accuracy: 0.9289
Epoch 28/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1196 - accuracy: 0.9633 - val_loss: 0.3311 - val_accuracy: 0.9289
Epoch 29/1000
148/148 [==============================] - 1s 8ms/step - loss: 0.1176 - accuracy: 0.9635 - val_loss: 0.3684 - val_accuracy: 0.9289
Trial 4 Complete [00h 00m 36s]
val_accuracy: 0.9289383292198181

Best val_accuracy So Far: 0.9289383292198181
Total elapsed time: 00h 01m 37s

Search: Running Trial #5

Value             |Best Value So Far |Hyperparameter
True              |True              |structured_data_block_1/normalize 
False             |False             |structured_data_block_1/dense_block_1/use_batchnorm
2                 |2                 |structured_data_block_1/dense_block_1/num_layers
32                |32                |structured_data_block_1/dense_block_1/units_0
0                 |0                 |structured_data_block_1/dense_block_1/dropout
32                |32                |structured_data_block_1/dense_block_1/units_1
0                 |0                 |classification_head_1/dropout
adam_weight_decay |adam              |optimizer
0.001             |0.001             |learning_rate

Epoch 1/1000
2022-12-11 16:22:23.607384: W tensorflow/core/framework/op_kernel.cc:1807] OP_REQUIRES failed at cast_op.cc:121 : UNIMPLEMENTED: Cast string to float is not supported
2022-12-11 16:22:23.607506: W tensorflow/core/framework/op_kernel.cc:1807] OP_REQUIRES failed at cast_op.cc:121 : UNIMPLEMENTED: Cast string to float is not supported
Traceback (most recent call last):
  File "/home/anand/automl/automl.py", line 30, in <module>
    clf.fit(x=X_train, y=y_train, use_multiprocessing=True, workers=8, verbose=True)
  File "/home/anand/automl/.venv/lib/python3.10/site-packages/autokeras/tasks/structured_data.py", line 326, in fit
    history = super().fit(
  File "/home/anand/automl/.venv/lib/python3.10/site-packages/autokeras/tasks/structured_data.py", line 139, in fit
    history = super().fit(
  File "/home/anand/automl/.venv/lib/python3.10/site-packages/autokeras/auto_model.py", line 292, in fit
    history = self.tuner.search(
  File "/home/anand/automl/.venv/lib/python3.10/site-packages/autokeras/engine/tuner.py", line 193, in search
    super().search(
  File "/home/anand/automl/.venv/lib/python3.10/site-packages/keras_tuner/engine/base_tuner.py", line 183, in search
    results = self.run_trial(trial, *fit_args, **fit_kwargs)
  File "/home/anand/automl/.venv/lib/python3.10/site-packages/keras_tuner/engine/tuner.py", line 295, in run_trial
    obj_value = self._build_and_fit_model(trial, *args, **copied_kwargs)
  File "/home/anand/automl/.venv/lib/python3.10/site-packages/autokeras/engine/tuner.py", line 101, in _build_and_fit_model
    _, history = utils.fit_with_adaptive_batch_size(
  File "/home/anand/automl/.venv/lib/python3.10/site-packages/autokeras/utils/utils.py", line 88, in fit_with_adaptive_batch_size
    history = run_with_adaptive_batch_size(
  File "/home/anand/automl/.venv/lib/python3.10/site-packages/autokeras/utils/utils.py", line 101, in run_with_adaptive_batch_size
    history = func(x=x, validation_data=validation_data, **fit_kwargs)
  File "/home/anand/automl/.venv/lib/python3.10/site-packages/autokeras/utils/utils.py", line 89, in <lambda>
    batch_size, lambda **kwargs: model.fit(**kwargs), **fit_kwargs
  File "/home/anand/automl/.venv/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 70, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/home/anand/automl/.venv/lib/python3.10/site-packages/tensorflow/python/eager/execute.py", line 52, in quick_execute
    tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
tensorflow.python.framework.errors_impl.UnimplementedError: Graph execution error:

Detected at node 'Cast_1' defined at (most recent call last):
    File "/home/anand/automl/automl.py", line 30, in <module>
      clf.fit(x=X_train, y=y_train, use_multiprocessing=True, workers=8, verbose=True)
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/autokeras/tasks/structured_data.py", line 326, in fit
      history = super().fit(
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/autokeras/tasks/structured_data.py", line 139, in fit
      history = super().fit(
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/autokeras/auto_model.py", line 292, in fit
      history = self.tuner.search(
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/autokeras/engine/tuner.py", line 193, in search
      super().search(
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/keras_tuner/engine/base_tuner.py", line 183, in search
      results = self.run_trial(trial, *fit_args, **fit_kwargs)
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/keras_tuner/engine/tuner.py", line 295, in run_trial
      obj_value = self._build_and_fit_model(trial, *args, **copied_kwargs)
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/autokeras/engine/tuner.py", line 101, in _build_and_fit_model
      _, history = utils.fit_with_adaptive_batch_size(
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/autokeras/utils/utils.py", line 88, in fit_with_adaptive_batch_size
      history = run_with_adaptive_batch_size(
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/autokeras/utils/utils.py", line 101, in run_with_adaptive_batch_size
      history = func(x=x, validation_data=validation_data, **fit_kwargs)
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/autokeras/utils/utils.py", line 89, in <lambda>
      batch_size, lambda **kwargs: model.fit(**kwargs), **fit_kwargs
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/keras/utils/traceback_utils.py", line 65, in error_handler
      return fn(*args, **kwargs)
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/keras/engine/training.py", line 1650, in fit
      tmp_logs = self.train_function(iterator)
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/keras/engine/training.py", line 1249, in train_function
      return step_function(self, iterator)
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/keras/engine/training.py", line 1233, in step_function
      outputs = model.distribute_strategy.run(run_step, args=(data,))
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/keras/engine/training.py", line 1222, in run_step
      outputs = model.train_step(data)
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/keras/engine/training.py", line 1027, in train_step
      self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 527, in minimize
      self.apply_gradients(grads_and_vars)
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/autokeras/keras_layers.py", line 360, in apply_gradients
      return super(AdamWeightDecay, self).apply_gradients(
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1140, in apply_gradients
      return super().apply_gradients(grads_and_vars, name=name)
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 632, in apply_gradients
      self._apply_weight_decay(trainable_variables)
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1159, in _apply_weight_decay
      tf.__internal__.distribute.interim.maybe_merge_call(
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1155, in distributed_apply_weight_decay
      distribution.extended.update(
    File "/home/anand/automl/.venv/lib/python3.10/site-packages/keras/optimizers/optimizer_experimental/optimizer.py", line 1151, in weight_decay_fn
      wd = tf.cast(self.weight_decay, variable.dtype)
Node: 'Cast_1'
2 root error(s) found.
  (0) UNIMPLEMENTED:  Cast string to float is not supported
         [[{{node Cast_1}}]]
  (1) CANCELLED:  Function was cancelled before it was started
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_70943]

This is my Python code:

import tensorflow as tf
import pandas as pd
import numpy as np
import autokeras as ak
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.preprocessing import LabelEncoder

data = pd.read_csv("p_feature_df.csv")
y = data.pop('is_p')
y = y.astype(np.int32)
data.pop('idx')
groups = data.pop('owner')
data = data.astype(np.float32)
X = data.to_numpy()

lb = LabelEncoder()
y = lb.fit_transform(y)

logo = LeaveOneGroupOut()
logo.get_n_splits(X,y,groups)


results = []
models = []
for train_index, test_index in logo.split(X, y, groups):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
    clf = ak.StructuredDataClassifier(overwrite=True)
    clf.fit(x=X_train, y=y_train, use_multiprocessing=True, workers=8, verbose=True)
    loss, acc = clf.evaluate(x=X_test, y=y_test, verbose=True)
    results.append((loss, acc))
    models.append(clf)
    print((loss, acc))

The code fails whenever the tuner selects the `adam_weight_decay` optimizer.
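The last frame in the traceback is Keras's weight-decay step, `wd = tf.cast(self.weight_decay, variable.dtype)`, which suggests that the optimizer's `weight_decay` attribute is still a string when this trial rebuilds AutoKeras's `AdamWeightDecay`. A minimal sketch of that failure mode in plain Python (no TensorFlow needed; `cast_like_tf` is a hypothetical stand-in for `tf.cast`, which, unlike Python's `float()`, refuses to convert strings to floats):

```python
def cast_like_tf(value, dtype=float):
    """Hypothetical stand-in for tf.cast: rejects string -> float casts."""
    if isinstance(value, str):
        raise TypeError("Cast string to float is not supported")
    return dtype(value)

# A numeric weight decay casts fine...
print(cast_like_tf(0.001))   # 0.001

# ...but a weight decay that survives as the string "0.001" fails,
# mirroring "UNIMPLEMENTED: Cast string to float is not supported".
try:
    cast_like_tf("0.001")
except TypeError as err:
    print(err)               # Cast string to float is not supported
```

So the crash is independent of the training data (which is already cast to `float32` above); it happens inside the optimizer itself when the `adam_weight_decay` hyperparameter combination is tried.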


2 Answers


I have the same issue. I think it is related to a file that AutoKeras downloads in Colab but not on a local PC. The file is:

https://storage.googleapis.com/tensorflow/keras-applications/resnet/resnet50v2_weights_tf_dim_ordering_tf_kernels_notop.h5

I had the same problem. I solved it by creating an `AutoModel` (the base class) directly:

    input_node = ak.StructuredDataInput()
    output_node = ak.DenseBlock(use_batchnorm=True)(input_node)
    output_node = ak.DenseBlock(dropout=0.1)(output_node)
    output_node = ak.DenseBlock(use_batchnorm=True)(output_node)
    output_node = ak.ClassificationHead()(output_node)

    clf = ak.AutoModel(
        inputs=input_node, outputs=output_node, overwrite=True
    )

It seems to be a bug in `StructuredDataClassifier`.