IBM has launched a software service that scans AI systems as they work in order to detect bias and provide explanations for the automated decisions being made, a degree of transparency that may be necessary for compliance purposes, not just for a company’s own due diligence.
The new trust and transparency system runs on the IBM cloud and works with models built using what IBM bills as a wide variety of popular machine learning frameworks and AI build environments, including its own Watson tech, as well as TensorFlow, SparkML, AWS SageMaker and AzureML.
It says the service can be programmed to fit specific organizational needs, taking account of the “unique decision factors of any business workflow”.
The fully automated SaaS explains decision-making and detects bias in AI models at runtime — so as decisions are being made — which means it’s capturing “potentially unfair outcomes as they occur”, as IBM puts it.
It will also automatically recommend data to add to the model to help mitigate any bias that has been detected.
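IBM hasn’t published the internals of these runtime checks, but the core idea is straightforward to sketch. The snippet below is a minimal, hypothetical illustration, assuming a binary classifier and a “disparate impact” style fairness ratio (comparing favorable-outcome rates across groups against the common four-fifths threshold); the BiasMonitor class and every name in it are invented for illustration:

```python
from collections import defaultdict

# Hypothetical runtime monitor: tallies outcomes per group as each
# prediction streams through, then checks the disparate impact ratio
# (favorable-outcome rate of a monitored group vs. a reference group).
class BiasMonitor:
    def __init__(self, protected_attribute, reference_group, threshold=0.8):
        self.attr = protected_attribute
        self.reference = reference_group
        self.threshold = threshold  # 0.8 mirrors the common "four-fifths" rule
        self.counts = defaultdict(lambda: {"favorable": 0, "total": 0})

    def record(self, features, prediction):
        group = features[self.attr]
        self.counts[group]["total"] += 1
        if prediction == 1:  # 1 = favorable outcome (e.g. loan approved)
            self.counts[group]["favorable"] += 1

    def rate(self, group):
        c = self.counts[group]
        return c["favorable"] / c["total"] if c["total"] else None

    def check(self):
        """Return groups whose favorable-outcome rate falls below
        threshold * reference rate -- candidates for mitigation,
        e.g. by adding more training data for that group."""
        ref_rate = self.rate(self.reference)
        flagged = []
        for group in self.counts:
            r = self.rate(group)
            if group != self.reference and r is not None and ref_rate:
                if r / ref_rate < self.threshold:
                    flagged.append(group)
        return flagged

monitor = BiasMonitor(protected_attribute="gender", reference_group="male")
for features, prediction in [({"gender": "male"}, 1), ({"gender": "female"}, 0),
                             ({"gender": "male"}, 1), ({"gender": "female"}, 1)]:
    monitor.record(features, prediction)
print(monitor.check())  # ['female'] -- its rate (0.5) is below 0.8x the reference (1.0)
```

A production service would do far more (significance testing, multiple fairness metrics, drift detection), but groups flagged this way are exactly the kind of output that could drive an “add more data for group X” recommendation.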
Explanations of AI decisions include showing which factors weighed the decision in one direction versus another; the confidence in the recommendation; and the factors behind that confidence.
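IBM doesn’t detail how those factor weights are computed, but for a simple linear model the concept is easy to demonstrate: each feature’s contribution is its learned coefficient times its value, and the model’s predicted probability doubles as the confidence score. A toy sketch (the loan-approval data and feature names are made up):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy loan-approval model: income, debt ratio, years employed (invented data).
X = np.array([[60, 0.2, 5], [25, 0.6, 1], [80, 0.1, 10], [30, 0.5, 2]])
y = np.array([1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

features = ["income", "debt_ratio", "years_employed"]
applicant = np.array([45, 0.4, 3])

# Per-feature contribution to the decision: coefficient * feature value.
# Positive pushes towards approval, negative towards rejection.
contributions = dict(zip(features, model.coef_[0] * applicant))
confidence = model.predict_proba(applicant.reshape(1, -1))[0].max()

for name, weight in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>15}: {weight:+.2f}")
print(f"confidence: {confidence:.0%}")
```

Real systems explaining non-linear models need heavier machinery (perturbation- or attribution-based methods), but the shape of the output, namely ranked factors plus a confidence figure, is the same.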
IBM also says the software keeps records of the AI model’s accuracy, performance and fairness, along with the lineage of the AI systems — meaning they can be “easily traced and recalled for customer service, regulatory or compliance reasons”.
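IBM doesn’t specify the record format it keeps, but the essence of lineage tracking of this kind is an append-only log that ties each model version to a fingerprint of its training data and to the accuracy and fairness metrics it was showing at the time. A minimal sketch, with every field name invented:

```python
import json, hashlib, datetime

# Hypothetical audit-trail entry: enough metadata to trace which model
# version made a decision, what data it was trained on, and how it was
# scoring on accuracy and fairness at the time.
def log_model_snapshot(path, model_id, version, training_data, metrics):
    record = {
        "model_id": model_id,
        "version": version,
        "training_data_sha256": hashlib.sha256(training_data).hexdigest(),
        "metrics": metrics,  # accuracy, fairness ratio, etc.
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
    }
    with open(path, "a") as f:  # append-only JSON-lines log
        f.write(json.dumps(record) + "\n")
    return record

log_model_snapshot(
    "audit_log.jsonl",
    model_id="loan-approval",
    version="2018-09-19.1",
    training_data=b"...raw training set bytes...",
    metrics={"accuracy": 0.91, "disparate_impact": 0.83},
)
```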
As one example on the compliance front, the EU’s GDPR privacy framework references automated decision-making, and includes a right for people to be given detailed explanations of how algorithms work in certain scenarios, meaning businesses may need to be able to audit their AIs.
The IBM AI scanner tool provides a breakdown of automated decisions via visual dashboards — an approach it bills as reducing dependency on “specialized AI skills”.
However it also intends its own professional services staff to work with businesses using the new software service. So it will be selling AI, a ‘fix’ for AI’s imperfections, and experts to help smooth any wrinkles when enterprises try to fix their AIs… Which suggests that while AI will indeed remove some jobs, automation will be busy creating other types of work.
Nor is IBM the first professional services firm to spot a business opportunity around AI bias. A few months ago Accenture outed a fairness tool for identifying and fixing unfair AIs.
So, with a major push towards automation across multiple industries, there also looks to be a pretty sizeable scramble to set up and sell services to patch any problems that arise as a result of increasing use of AI.
And, indeed, to encourage more businesses to feel confident about jumping in and automating more. (On that front IBM cites research it conducted which found that while 82% of enterprises are considering AI deployments, 60% fear liability issues and 63% lack the in-house talent to confidently manage the technology.)
In addition to launching its own (paid-for) AI auditing tool, IBM says its research division will be open sourcing an AI bias detection and mitigation toolkit, with the aim of encouraging “global collaboration around addressing bias in AI”.
“IBM led the industry in establishing trust and transparency principles for the development of new AI technologies. It’s time to translate principles into practice,” said David Kenny, SVP of cognitive solutions at IBM, commenting in a statement. “We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision making.”