Detect Bias & Ensure Fairness in Domino

This session uses the IBM AI Fairness 360 library to detect bias in data and models, and to produce a "fair" model in Domino.

About this course

Using open-source libraries to detect bias and ensure fairness with Domino

By the end of this customer tech hour, you will be able to:

  • Recognize many ways of measuring “bias” and “fairness” in data science.
  • Identify bias/fairness metrics that make sense for your context.
  • Detect bias in data and in models with the IBM AI Fairness 360 library.
  • Produce a “fair” model using the IBM AI Fairness 360 library.
  • Illustrate how Domino’s governance features make Responsible AI, detecting bias, and ensuring fairness easier.
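As a taste of the bias-detection objectives above, the sketch below computes two of the group-fairness metrics that AI Fairness 360 exposes (statistical parity difference and disparate impact, e.g. via its `BinaryLabelDatasetMetric` class). This is a minimal stdlib-only illustration, not the library's API; the tiny dataset and group labels are made up for the example.

```python
# Hypothetical toy dataset: (group, outcome) pairs, where outcome 1 is
# the favorable decision (e.g. loan approved). Entirely illustrative.
records = [
    ("privileged", 1), ("privileged", 1), ("privileged", 1), ("privileged", 0),
    ("unprivileged", 1), ("unprivileged", 0), ("unprivileged", 0), ("unprivileged", 0),
]

def favorable_rate(group: str) -> float:
    """Fraction of favorable outcomes received by the given group."""
    outcomes = [y for g, y in records if g == group]
    return sum(outcomes) / len(outcomes)

p_priv = favorable_rate("privileged")      # 3/4 = 0.75
p_unpriv = favorable_rate("unprivileged")  # 1/4 = 0.25

# Statistical parity difference: 0.0 means parity; negative values mean
# the unprivileged group receives fewer favorable outcomes.
spd = p_unpriv - p_priv

# Disparate impact: ratio of favorable rates; the common "80% rule"
# flags values below 0.8 as potentially discriminatory.
di = p_unpriv / p_priv

print(f"statistical parity difference: {spd:.2f}")  # -0.50
print(f"disparate impact: {di:.2f}")                # 0.33
```

In AIF360 the same quantities are computed for you once the data is wrapped in one of its dataset classes; the session walks through that workflow, plus mitigation algorithms that transform the data or model until these metrics fall within an acceptable range.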
