I would like to use an alternating update rule with Keras, i.e. per batch I would like to perform a regular gradient-based step and then a custom step.
I thought about implementing it by subclassing either an optimizer or a callback (and using the on-batch calls). However, neither works, because both lack access to the batch data and batch labels (and I need both).
Any idea how to implement a custom alternating update with Keras?
If required, I don't mind calling TensorFlow-specific methods directly, as long as I can keep the project wrapped in the Keras framework (with model.fit, model.predict, ...).
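
To make the intent concrete, here is a rough sketch of the per-batch behaviour I'm after (assuming tf.keras on TF 2.x and overriding Model.train_step, which does see both the batch data and the labels; custom_step is just a hypothetical placeholder for my second update rule, not something I have working):

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras


class AlternatingModel(keras.Model):
    """Per batch: even steps do a regular gradient step, odd steps a custom step."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Counter used to alternate between the two update rules.
        self.step_counter = tf.Variable(0, trainable=False, dtype=tf.int64)

    def custom_step(self, x, y):
        # Hypothetical placeholder for the second (non-gradient) update rule;
        # x and y are the current batch's data and labels.
        pass

    def train_step(self, data):
        x, y = data  # both the batch data and the batch labels are available here

        if self.step_counter % 2 == 0:
            # Regular gradient-based step (mirrors the default train_step).
            with tf.GradientTape() as tape:
                y_pred = self(x, training=True)
                loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
            grads = tape.gradient(loss, self.trainable_variables)
            self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        else:
            # Custom step, with full access to the batch.
            y_pred = self(x, training=True)
            self.custom_step(x, y)

        self.step_counter.assign_add(1)
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}


# Toy usage, just to show it still goes through model.compile / model.fit.
inputs = keras.Input(shape=(4,))
outputs = keras.layers.Dense(1)(inputs)
model = AlternatingModel(inputs, outputs)
# run_eagerly=True keeps the Python `if` above simple; a graph-mode version
# would need tf.cond (or similar) for the branching instead.
model.compile(optimizer="adam", loss="mse", run_eagerly=True)
model.fit(np.random.rand(64, 4), np.random.rand(64, 1), batch_size=8, epochs=1)
```

Is overriding train_step the right direction here, or is there a cleaner hook for this kind of alternation?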