IBM has added new functionality to its AI Fairness 360 toolkit (AIF360) to make it accessible to a wider range of developers. Released in 2018, AIF360 is an extensible, open source toolkit that can help you examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle.
Featuring over 70 fairness metrics, the toolkit also includes 11 bias mitigation algorithms developed by the research community. According to IBM, it is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education.
IBM has now extended AIF360 in two new directions: compatibility with scikit-learn and an R package.
The AI Fairness 360 R package includes a set of metrics for testing data sets and models for bias, as well as algorithms to mitigate bias in both.
With the AI Fairness 360 R package, R users will now be able to test for bias in their training data with a range of different metrics.
Additionally, the latest release of AIF360, version 0.3.0, comes with the new aif360.sklearn module, which contains all of the scikit-learn-compatible AIF360 functionality completed so far.
Though the module is still a work in progress, IBM's aim is to make AIF360 functionality interchangeable with scikit-learn functionality. “Algorithms can be swapped with debiasing algorithms and metrics can be swapped with fairness metrics,” according to a blog post.
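The interchangeability idea can be sketched as follows. This is a hypothetical illustration of the design pattern, not the actual aif360.sklearn classes: a fairness metric follows scikit-learn's `(y_true, y_pred)` metric signature with an extra protected-attribute argument, and a debiasing step follows the familiar fit/transform estimator convention so it could slot into a pipeline.

```python
# Hypothetical sketch of scikit-learn-style conventions for fairness
# tooling -- the names and signatures here are illustrative, not the
# real aif360.sklearn API.
from collections import Counter

def accuracy_difference(y_true, y_pred, prot_attr):
    """Metric with a scikit-learn-like signature: accuracy gap between
    the privileged (1) and unprivileged (0) groups."""
    def acc(group):
        pairs = [(t, p) for t, p, g in zip(y_true, y_pred, prot_attr) if g == group]
        return sum(t == p for t, p in pairs) / len(pairs)
    return acc(1) - acc(0)

class Reweigher:
    """Simplified reweighing idea with a fit/transform interface, so it
    could be swapped in where a scikit-learn transformer is expected."""

    def fit(self, y, prot_attr):
        # Weight each (group, label) pair inversely to its frequency.
        counts = Counter(zip(prot_attr, y))
        n, k = len(y), len(counts)
        self.weights_ = {pair: n / (k * c) for pair, c in counts.items()}
        return self

    def transform(self, y, prot_attr):
        # Return one sample weight per example.
        return [self.weights_[(g, label)] for g, label in zip(prot_attr, y)]
```

The point of the convention is that anything expecting a scikit-learn metric or transformer can accept these fairness-aware drop-ins with minimal glue code.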
Further, it is now possible to use AIF360 to detect bias issues at training time from Watson Studio, and to use Watson OpenScale to monitor bias metrics at runtime.