Is there a way to have a lambda regularizer value on the constraints in the ThresholdOptimizer? For instance, if we want to create accuracy vs. SPD curves, I want to be able to enforce different thresholds on the SPD/accuracy constraints that would indicate their importance (maybe initially accuracy is more important, then gradually SPD gains importance).
1 Answer
Fairlearn maintainer here! [I can't comment on StackOverflow, so sadly these clarifying questions need to be in an "answer", but I'll update it once I understand your concern.]
What do you mean by SPD?
Can you describe a use case where it's clear what you mean by "initially accuracy is more important, then gradually SPD gains importance"? ThresholdOptimizer currently only supports the case where the constraints are satisfied 100%. One could think of ways to extend this to allow some tolerance in constraint violation in order to improve accuracy (or another performance measure).
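For concreteness, here is a minimal sketch of how ThresholdOptimizer is typically used with the demographic parity constraint (the counterpart of statistical parity). The synthetic data and the logistic-regression estimator are placeholders for illustration, not anything from this thread:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from fairlearn.postprocessing import ThresholdOptimizer

# Tiny synthetic dataset purely for illustration; the sensitive feature is random.
X, y = make_classification(n_samples=500, random_state=0)
sensitive = np.random.default_rng(0).choice(["a", "b"], size=500)

# ThresholdOptimizer picks group-specific thresholds so that the chosen
# constraint ("demographic_parity" here, i.e. statistical parity) is satisfied
# exactly; there is currently no knob to trade constraint violation for accuracy.
postprocessor = ThresholdOptimizer(
    estimator=LogisticRegression(max_iter=1000),
    constraints="demographic_parity",
)
postprocessor.fit(X, y, sensitive_features=sensitive)
y_pred = postprocessor.predict(X, sensitive_features=sensitive)
```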
You might have come across the built-in chart fairlearn provides for ThresholdOptimizer: https://fairlearn.org/v0.6.1/api_reference/fairlearn.postprocessing.html#fairlearn.postprocessing.plot_threshold_optimizer
The chart depends on your constraint, of course, but it may prove helpful in explaining how ThresholdOptimizer arrived at the threshold(s).
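As a sketch, continuing from the snippet above, that chart can be produced with the plot_threshold_optimizer function documented at the link (matplotlib needs to be installed):

```python
from fairlearn.postprocessing import plot_threshold_optimizer

# Visualizes the per-group curves ThresholdOptimizer searched over and the
# solution it picked for the fitted postprocessor.
plot_threshold_optimizer(postprocessor)
```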
If you have a concrete feature request feel free to open an issue directly in the repository as well! Thanks!

- Thank you for the answer. By SPD I meant statistical parity difference (although it can be any other fairness measure as well). And exactly, I think you understood my question: I meant extending the work to allow violating the constraint so that sometimes accuracy is more important and sometimes fairness (something like the Pareto curves). For now I have also used the interpolation_dict and _tradeoff_curve to reason about some of these thresholdings and trade-offs, as you suggested; however, I am not sure if this is the best way to go about solving this problem. – anon Apr 29 '21 at 21:44
- Thanks for confirming! I think it's an excellent question. If you want to pursue this further, I'd suggest opening an issue on the [Fairlearn repository on GitHub](https://github.com/fairlearn/fairlearn), since it likely involves making changes to the existing code. I would be happy to help think through this sort of task. If you're looking for a quicker way to get this, I would suggest using something like `fairlearn.reductions.GridSearch`. – Roman Lutz May 06 '21 at 22:35
- It outputs a whole bunch of models, and the best of them lie on the Pareto curve showing the best trade-offs between the performance and fairness metrics of your choice. The plot in this section is an example that was generated with `GridSearch`. For a full example, I'd suggest looking at the usage of `GridSearch` in this notebook: https://fairlearn.org/v0.6.1/auto_examples/plot_grid_search_census.html – Roman Lutz May 06 '21 at 22:39
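For anyone landing here later, a rough sketch of what such a `GridSearch` sweep might look like; the synthetic data, the logistic-regression estimator, and the grid size are illustrative choices, not part of the thread:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.reductions import GridSearch, DemographicParity
from fairlearn.metrics import demographic_parity_difference

# Synthetic data purely for illustration.
X, y = make_classification(n_samples=500, random_state=0)
sensitive = np.random.default_rng(0).choice(["a", "b"], size=500)

# GridSearch trains one model per Lagrange-multiplier setting; sweeping the
# grid is what traces out the accuracy-vs-disparity trade-off.
sweep = GridSearch(
    LogisticRegression(max_iter=1000),
    constraints=DemographicParity(),
    grid_size=20,
)
sweep.fit(X, y, sensitive_features=sensitive)

# Evaluate every trained model; the non-dominated points form the Pareto front.
for predictor in sweep.predictors_:
    y_pred = predictor.predict(X)
    acc = accuracy_score(y, y_pred)
    spd = demographic_parity_difference(y, y_pred, sensitive_features=sensitive)
    print(f"accuracy={acc:.3f}  statistical parity difference={spd:.3f}")
```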