Ethics In Tech

By Brett Wilkins

IBM is providing insight into how artificial intelligence makes decisions, part of an effort to demystify AI and tackle the problem of bias in machine learning.

IoT For All reports that IBM’s new open-source AI Fairness 360 toolkit can both detect and mitigate bias in AI models, and can help explain how algorithms arrive at their decisions. The computing giant hopes this will allow researchers and enterprise AI architects alike to better understand how artificial intelligence reaches the decisions it does.
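To make that concrete, here is a minimal sketch of a detect-then-mitigate workflow using the toolkit’s Python package, aif360. The toy hiring data, column names, and privileged/unprivileged group definitions are illustrative assumptions, not part of IBM’s announcement; the metric and algorithm names (BinaryLabelDatasetMetric, Reweighing) are real parts of the library.

```python
# A minimal sketch of bias detection and mitigation with the open-source
# AI Fairness 360 package (aif360). The toy DataFrame and group definitions
# below are assumptions for illustration only.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'hired' is the binary outcome a model would learn to predict.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.4, 0.7, 0.9, 0.3, 0.6, 0.5],
    "hired": [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
)
privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Detection: mean difference is the gap in favorable-outcome rates
# between the two groups; 0 means parity.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Mean difference before:", metric.mean_difference())

# Mitigation: Reweighing adjusts instance weights so that, by this
# metric, the training data treats the two groups equally.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)
metric_after = BinaryLabelDatasetMetric(
    transformed, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Mean difference after:", metric_after.mean_difference())
```

Reweighing is a pre-processing approach, changing the training data rather than the model; the toolkit also ships in-processing and post-processing algorithms for cases where the data cannot be altered.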

While many AI experts have long warned about the potential for algorithmic bias, a reflection of human programmers’ prejudices, IBM claims to have cracked what’s known as the “black box” of AI — the virtual space where machine learning algorithms learn from information and make their predictions — using software that delivers greater AI transparency.

IBM’s technology automatically detects bias and offers real-time explanations as decisions are made. The company previously used AI to aid decision-making with its Watson system, which offered physicians, other medical professionals, and researchers evidence-based treatment plans combining automated care management with patient engagement. However, some experts criticized Watson because it did not explain how it reached its recommendations.

Other computer scientists have also developed promising methods for interpreting machine learning algorithms. Researchers at the University of Maryland have taken a novel approach to the problem; instead of trying to “break” algorithms by removing key words from inputs to generate incorrect answers, the Maryland group removed all but the few critical words needed to produce the correct answer. The results have been striking: the researchers report that models returned correct answers from an average input of just three words.

“Black-box models do seem to work better than simpler models, such as decision trees, but even the people who wrote the initial code can’t tell exactly what is happening,” Jordan Boyd-Graber, the senior author of the study and an associate professor of computer science at UMD, told ScienceDaily. “When these models return incorrect or nonsensical answers, it’s tough to figure out why. So instead, we tried to find the minimal input that would yield the correct result. The average input was about three words, but we could get it down to a single word in some cases.”
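The core idea behind this input reduction is simple enough to sketch: greedily delete whichever word least disturbs the model’s answer, and stop when any further deletion would flip it. The `predict` function below is a hypothetical stand-in for any classifier that returns a label and a confidence score; this is a sketch of the general technique, not the UMD group’s actual code.

```python
# A sketch of greedy input reduction: repeatedly remove the word whose
# absence changes the model's confidence least, while the predicted
# answer stays the same. `predict` is an assumed stand-in for any
# classifier returning (label, confidence).
from typing import Callable, List, Tuple

def input_reduction(
    words: List[str],
    predict: Callable[[List[str]], Tuple[str, float]],
) -> List[str]:
    original_label, _ = predict(words)
    reduced = list(words)
    while len(reduced) > 1:
        best_candidate = None
        best_confidence = -1.0
        # Try deleting each remaining word; keep the deletion that
        # preserves the original answer with the highest confidence.
        for i in range(len(reduced)):
            candidate = reduced[:i] + reduced[i + 1:]
            label, confidence = predict(candidate)
            if label == original_label and confidence > best_confidence:
                best_candidate, best_confidence = candidate, confidence
        if best_candidate is None:  # any further deletion flips the answer
            break
        reduced = best_candidate
    return reduced
```

When the surviving two- or three-word input still yields the original answer, that residue exposes which inputs the model actually relies on, which is exactly the pathology the Maryland researchers set out to study.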

The automated detection service is complemented by the AI Fairness 360 toolkit itself, which IBM Research hopes will give academics and data scientists better tools for integrating bias detection into their machine learning models.

“IBM led the industry in establishing trust and transparency principles for the development of new AI technologies. It’s time to translate principles into practice,” David Kenny, IBM’s SVP of Cognitive Solutions, said in a press release. “We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making.”

Photo credit: Mike MacKenzie/Flickr Creative Commons (www.vpnsrus.com)

