Source code/webpage/demos for the What-If Tool
Updated Aug 20, 2025 - HTML
Sample project using IBM's AI Fairness 360, an open-source toolkit for detecting, examining, and mitigating discrimination and bias in machine learning (ML) models throughout the AI application lifecycle.
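As a minimal sketch of the kind of group-fairness metric such toolkits report, the statistical parity difference compares favorable-outcome rates between an unprivileged and a privileged group. The function and toy data below are illustrative, not taken from the AI Fairness 360 API.

```python
def statistical_parity_difference(outcomes, groups, privileged):
    """P(favorable | unprivileged) - P(favorable | privileged).

    A value of 0 indicates parity; negative values mean the
    unprivileged group receives favorable outcomes less often.
    """
    priv = [y for y, g in zip(outcomes, groups) if g == privileged]
    unpriv = [y for y, g in zip(outcomes, groups) if g != privileged]
    return sum(unpriv) / len(unpriv) - sum(priv) / len(priv)

# Toy predictions: 1 = favorable outcome; group "A" is privileged.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(statistical_parity_difference(outcomes, groups, privileged="A"))  # -0.5
```

Here group A's favorable rate is 0.75 and group B's is 0.25, so the difference of -0.5 flags a disparity a mitigation algorithm would try to reduce.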
📊 Analyze fairness in Machine Learning models using the Pima Diabetes dataset, featuring metrics, visualizations, and comprehensive reports for informed decision-making.
Fairness analysis in ML using the ABLNI metric with the Pima Diabetes dataset: a complete SDK with visualizations and reports
Tools to assess fairness and mitigate unfairness in sociolinguistic auto-coding
Deep-learning approach for generating fair and accurate input representations for crime-rate estimation with continuous protected attributes and continuous targets.
Fairness and bias detection library for Elixir AI/ML systems