Denouncer: Detection of unfairness in classifiers

Jinyang Li, Yuval Moskovitch, H. V. Jagadish

Research output: Contribution to journal › Conference article › peer-review


Abstract

The use of automated data-driven tools for decision-making has gained popularity in recent years. At the same time, reported cases of algorithmic bias and discrimination have increased as well, which in turn has led to extensive study of algorithmic fairness. Numerous notions of fairness have been proposed, designed to capture different scenarios. These measures typically refer to a “protected group” in the data, defined using the values of some sensitive attributes. Confirming whether a fairness definition holds for a given group is a simple task, but detecting groups that are treated unfairly by the algorithm may be computationally prohibitive, since the number of possible groups is combinatorial. We present a method for detecting such groups efficiently for various fairness definitions. Our solution is implemented in a system called DENOUNCER, an interactive system that allows users to explore different fairness measures of a (trained) classifier for given test data. We propose to demonstrate the usefulness of DENOUNCER using real-life data and illustrate the effectiveness of our method.
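The abstract contrasts the easy task of checking a fairness definition for one given group with the combinatorial search over all attribute-value combinations that DENOUNCER addresses. The sketch below is not the paper's method; it is a minimal single-group check for one common measure (demographic parity), with hypothetical column names and data, to make the "confirming for a given group is simple" part concrete.

```python
# Minimal sketch (not the DENOUNCER algorithm): check one fairness measure,
# demographic parity, for a single protected group defined by values of
# sensitive attributes. Column names and data below are illustrative only.
import pandas as pd


def demographic_parity_gap(test_df, group_filter, pred_col="prediction"):
    """Positive-prediction rate of the group minus that of everyone else.

    test_df      : DataFrame with test records and the classifier's output.
    group_filter : dict {sensitive_attribute: value} defining the group.
    pred_col     : column holding the classifier's binary prediction (0/1).
    """
    mask = pd.Series(True, index=test_df.index)
    for attr, value in group_filter.items():
        mask &= test_df[attr] == value
    in_group_rate = test_df.loc[mask, pred_col].mean()
    out_group_rate = test_df.loc[~mask, pred_col].mean()
    return in_group_rate - out_group_rate


if __name__ == "__main__":
    # One candidate group out of the combinatorial space of
    # attribute-value combinations over the sensitive attributes.
    df = pd.DataFrame({
        "gender": ["F", "F", "M", "M", "F", "M"],
        "race":   ["A", "B", "A", "B", "A", "B"],
        "prediction": [0, 0, 1, 1, 0, 1],
    })
    gap = demographic_parity_gap(df, {"gender": "F"})
    print(f"Demographic parity gap for gender=F: {gap:.2f}")
```

Checking one group like this is cheap; the difficulty the paper targets is that the number of such candidate groups grows combinatorially with the sensitive attributes and their values, which a naive loop over all combinations does not handle efficiently.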

Original language: English
Pages (from-to): 2719-2722
Number of pages: 4
Journal: Proceedings of the VLDB Endowment
Volume: 14
Issue number: 12
DOIs
State: Published - 1 Jan 2021
Externally published: Yes
Event: 47th International Conference on Very Large Data Bases, VLDB 2021 - Virtual, Online
Duration: 16 Aug 2021 - 20 Aug 2021

ASJC Scopus subject areas

  • Computer Science (miscellaneous)
  • General Computer Science
