I'd like to use fairlearn to encode a monotonicity constraint on a binned continuous feature, e.g. income. That is, for input x, model h, and income groups {G_1, ..., G_k}, I'd like to enforce:
E[h(x) | x \in G_i] <= E[h(x) | x \in G_{i+1}] for i = 1, ..., k-1.
This constraint fits into the form required in the Fair Reductions paper, where we have our vector mu, which is
mu_j = E[h(x) | x \in G_j] for all j,
and our (k-1) x k matrix M, which has M_{i,i} = 1, M_{i,i+1} = -1, and zeros elsewhere, so that the constraint set is exactly M mu <= 0.
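To make the shape concrete, here is a quick pure-Python check (the group means mu below are made-up numbers, just for illustration) that this M is (k-1) x k and that each entry of M mu is mu_i - mu_{i+1}, so monotonicity holds exactly when M mu <= 0:

```python
def monotonicity_matrix(k):
    # (k-1) x k matrix with M[i][i] = 1, M[i][i+1] = -1, zeros elsewhere
    M = [[0.0] * k for _ in range(k - 1)]
    for i in range(k - 1):
        M[i][i] = 1.0
        M[i][i + 1] = -1.0
    return M


def matvec(M, mu):
    return [sum(m * v for m, v in zip(row, mu)) for row in M]


# Made-up group means mu_j = E[h(x) | x in G_j] for k = 4 income bins
mu = [0.10, 0.25, 0.40, 0.60]
M = monotonicity_matrix(len(mu))

# Each entry is mu_i - mu_{i+1}; mu is monotone non-decreasing iff all <= 0
print(matvec(M, mu))
```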
I was about to try writing a Moment subclass for this binned monotonicity constraint, but the documentation for other parts of the code has me worried that fairlearn is implemented for parity constraints only.
For example, the docstring of the _Lagrangian class, in fairlearn/fairlearn/reductions/_exponentiated_gradient/_lagrangian.py, says:
"constraints : fairlearn.reductions.Moment Object describing the parity constraints. This provides the reweighting and relabelling."
Does this mean that even if I find a way to write a Moment subclass for binned monotonicity, I may still run into problems with the rest of the code being geared towards parity constraints only?
Additionally, is there any hope (or an existing implementation?) of a general version of fairlearn where you input mu and M directly, as opposed to picking from a set of pre-determined constraints?
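For concreteness, here is roughly the kind of Moment subclass I was about to try. This is a sketch only: the real class would subclass fairlearn.reductions.Moment (stubbed out below so the snippet is self-contained), and the method names (load_data, gamma) follow my reading of the Moment interface, so the real signatures may well differ:

```python
class Moment:
    """Stub standing in for fairlearn.reductions.Moment."""
    pass


class BinnedMonotonicity(Moment):
    def __init__(self, bin_edges):
        # bin_edges define the income groups G_1..G_k (k = len(bin_edges) - 1)
        self.bin_edges = bin_edges

    def load_data(self, X, y, income):
        self.X = X
        self.y = y
        self.income = income

    def _group_means(self, predictions):
        # mu_j = E[h(x) | x in G_j], estimated by the mean prediction per bin
        k = len(self.bin_edges) - 1
        mu = []
        for j in range(k):
            lo, hi = self.bin_edges[j], self.bin_edges[j + 1]
            in_bin = [p for p, inc in zip(predictions, self.income)
                      if lo <= inc < hi]
            mu.append(sum(in_bin) / len(in_bin) if in_bin else 0.0)
        return mu

    def gamma(self, predictor):
        # Constraint violations (M mu)_i = mu_i - mu_{i+1}; the binned
        # monotonicity constraint holds when every entry is <= 0
        mu = self._group_means(predictor(self.X))
        return [mu[i] - mu[i + 1] for i in range(len(mu) - 1)]
```

The missing pieces (signed_weights, project_lambda, and whatever index structure the solver expects) are exactly where I suspect any parity-only assumptions in the rest of the code would bite.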