Abstract
The use of automated data-driven tools for decision-making has gained popularity in recent years. At the same time, the number of reported cases of algorithmic bias and discrimination has increased as well, which in turn has led to an extensive study of algorithmic fairness. Numerous notions of fairness have been proposed, designed to capture different scenarios. These measures typically refer to a “protected group” in the data, defined using the values of some sensitive attributes. Confirming whether a fairness definition holds for a given group is a simple task, but detecting groups that are treated unfairly by the algorithm may be computationally prohibitive, as the number of possible groups is combinatorial. We present a method for detecting such groups efficiently for various fairness definitions. Our solution is implemented in DENOUNCER, an interactive system that allows users to explore different fairness measures of a (trained) classifier over a given test dataset. We propose to demonstrate the usefulness of DENOUNCER using real-life data and to illustrate the effectiveness of our method.
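To make the problem concrete, the sketch below (not DENOUNCER's actual algorithm) shows how one common group-fairness definition, equal opportunity (true-positive-rate parity), can be checked for a single candidate protected group, and why naively enumerating all candidate groups is combinatorial. The column names, attribute names, and the 0.1 threshold are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch, assuming a test set with ground-truth labels and classifier
# predictions; this is NOT the paper's detection method, only an illustration.
from itertools import combinations, product
import pandas as pd

def tpr(df, label_col="label", pred_col="pred"):
    """True positive rate: fraction of actual positives predicted as positive."""
    positives = df[df[label_col] == 1]
    return (positives[pred_col] == 1).mean() if len(positives) else float("nan")

def equal_opportunity_gap(df, group_mask, label_col="label", pred_col="pred"):
    """Absolute TPR difference between the candidate group and everyone else."""
    return abs(tpr(df[group_mask], label_col, pred_col) -
               tpr(df[~group_mask], label_col, pred_col))

# Checking one given group is simple, e.g. (hypothetical columns):
# df = pd.read_csv("test_data.csv")
# group = (df["gender"] == "F") & (df["age_group"] == "under_30")
# unfair = equal_opportunity_gap(df, group) > 0.1   # 0.1 is an arbitrary threshold

# But the number of candidate groups (conjunctions of sensitive-attribute values)
# grows combinatorially, so exhaustively checking every group quickly becomes
# prohibitive:
def candidate_groups(df, sensitive_attrs):
    """Yield every conjunction of values over subsets of the sensitive attributes."""
    for k in range(1, len(sensitive_attrs) + 1):
        for attrs in combinations(sensitive_attrs, k):
            for values in product(*(df[a].unique() for a in attrs)):
                yield dict(zip(attrs, values))
```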
| Original language | English |
| --- | --- |
| Pages (from-to) | 2719-2722 |
| Number of pages | 4 |
| Journal | Proceedings of the VLDB Endowment |
| Volume | 14 |
| Issue number | 12 |
| DOIs | |
| State | Published - 1 Jan 2021 |
| Externally published | Yes |
| Event | 47th International Conference on Very Large Data Bases, VLDB 2021 - Virtual, Online; Duration: 16 Aug 2021 → 20 Aug 2021 |
ASJC Scopus subject areas
- Computer Science (miscellaneous)
- General Computer Science