Is this fair?
The same model parameters can result in behavior that feels totally benign when situated in one context and deeply unjust in another.
Proposal: The most performant model is the most fair model.
Apply calibration to correct for correlated errors.
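The slide names calibration as the mitigation step but does not show code. As a stand-in, here is a minimal Python sketch (the talk's own examples use R/tidymodels; the data and all names below are invented for illustration) of post-hoc calibration: fit a model, then learn a monotone map from its raw probabilities to better-calibrated ones on held-out data.

```python
# Hypothetical sketch: post-hoc (isotonic) calibration of a binary
# classifier's predicted probabilities. Simulated data, illustrative names.
from sklearn.datasets import make_classification
from sklearn.isotonic import IsotonicRegression
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=1)
X_train, X_cal, y_train, y_cal = train_test_split(X, y, random_state=1)

model = LogisticRegression().fit(X_train, y_train)

# Fit the calibrator on held-out data, not the training set, so it
# corrects the model's systematic over- or under-confidence.
raw = model.predict_proba(X_cal)[:, 1]
calibrator = IsotonicRegression(out_of_bounds="clip").fit(raw, y_cal)
calibrated = calibrator.predict(raw)
```

Isotonic regression is one of several calibrators (Platt scaling and beta calibration are alternatives); the choice is an assumption here, not the talk's prescription.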
Perhaps this is better?
Let’s plot the errors:
…let’s not forget those y-axis units.
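The plotted errors come from the talk's own slides; as a stand-in, a minimal Python sketch (simulated data, invented names) of the same move: compute error rates per group and plot them with the y-axis units labeled and the scale anchored at zero, so small gaps are not visually exaggerated.

```python
# Hypothetical sketch: per-group error rates, plotted with honest axes.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for scripted use
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
groups = np.array(["a", "b"] * 500)
truth = rng.integers(0, 2, size=1000)
# Simulate a model whose errors are correlated with group membership:
# group "b" predictions are flipped ~15% of the time.
pred = np.where((groups == "b") & (rng.random(1000) < 0.15), 1 - truth, truth)

error_by_group = {
    g: float(np.mean(pred[groups == g] != truth[groups == g]))
    for g in ("a", "b")
}

fig, ax = plt.subplots()
ax.bar(error_by_group.keys(), error_by_group.values())
ax.set_ylabel("Error rate")  # the y-axis units the slide warns about
ax.set_ylim(0, 1)            # anchored at zero: no exaggerated gaps
fig.savefig("errors.png")
```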
Definitions of fairness “are not mathematically or morally compatible in general” (Mitchell et al. 2021).
How will these predictions even be used, though?
Metrics evaluate the model, but the model is only one part of a larger system.
Choose tools that support thinking about the hard parts.
github.com/simonpcouch/cascadia-24