Deepchecks: A Library for Testing and Validating Machine Learning Models and Data

Shir Chorev, Philip Tannor, Dan Ben Israel, Noam Bressler, Itay Gabbay, Nir Hutnik, Jonatan Liberman, Matan Perlmutter, Yurii Romanyshyn, Lior Rokach

Research output: Contribution to journal › Article › peer-review

6 Scopus citations

Abstract

This paper presents Deepchecks, a Python library for comprehensively validating machine learning models and data. Our goal is to provide an easy-to-use library comprising many checks related to various issues, such as model predictive performance, data integrity, data distribution mismatches, and more. The package is distributed under the GNU Affero General Public License (AGPL) and relies on core libraries from the scientific Python ecosystem: scikit-learn, PyTorch, NumPy, pandas, and SciPy. Source code, documentation, examples, and an extensive user guide can be found at https://github.com/deepchecks/deepchecks and https://docs.deepchecks.com/.
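
As a brief illustration of the workflow the abstract describes, the sketch below runs the library's built-in full suite on a train/test split. It is a minimal sketch, assuming the deepchecks tabular API (deepchecks.tabular.Dataset, deepchecks.tabular.suites.full_suite) as documented at docs.deepchecks.com; the iris data and random-forest model are illustrative stand-ins, not part of the paper.

    # Minimal usage sketch; the dataset and model here are illustrative choices.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    from deepchecks.tabular import Dataset
    from deepchecks.tabular.suites import full_suite

    # Load a toy dataset as a pandas DataFrame with a 'target' label column.
    df = load_iris(as_frame=True).frame
    train_df, test_df = train_test_split(df, test_size=0.3, random_state=0)

    # Wrap the raw DataFrames with metadata (the label column) so that
    # checks know which column is the target.
    train_ds = Dataset(train_df, label='target')
    test_ds = Dataset(test_df, label='target')

    model = RandomForestClassifier(random_state=0).fit(
        train_ds.data[train_ds.features], train_ds.data[train_ds.label_name]
    )

    # Run the built-in suite of checks covering predictive performance,
    # data integrity, and train/test distribution mismatches.
    result = full_suite().run(train_dataset=train_ds, test_dataset=test_ds, model=model)
    result.save_as_html('suite_report.html')

Running the suite produces a single report aggregating the outcomes of the individual checks, which is the "easy-to-use, many checks" design the abstract emphasizes.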

Original language: English
Article number: 285
Pages (from-to): 12990-12995
Journal: Journal of Machine Learning Research
Volume: 23
Issue number: 1
State: Published - 1 Jan 2022

Keywords

  • Bias
  • Concept Drift
  • Data Leakage
  • Explainable AI (XAI)
  • MLOps
  • Python
  • Supervised Learning
  • Testing Machine Learning

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Statistics and Probability
  • Artificial Intelligence
