September 20, 2018

Cloud giant IBM launches tool to detect Artificial Intelligence bias

The tech behemoth aims to understand how algorithms make decisions in real time and which factors are used in making final recommendations.

Cloud giant IBM has launched a tool to detect signs of bias in artificial intelligence (AI) systems and recommend adjustments. With this launch, IBM seeks to mitigate AI bias by showing enterprises, via a dashboard, how algorithms make decisions in real time and which factors are used in making final recommendations.

According to IBM, bias is a serious problem in AI: the algorithms used by tech giants and other firms are not always fair in their decision-making. The company says that enterprises make increasingly automated decisions about a wide variety of issues, such as policing and insurance, and that flawed recommendations can have adverse effects.

By launching the AI bias detection tool, IBM says it will provide insights into how AI systems make decisions. The open source tool gives users a visual dashboard that tracks a model's record for accuracy, performance, and fairness in real time.

Speaking about the launch, David Kenny, Senior Vice President of Cognitive Solutions at IBM, commented:

IBM led the industry in establishing trust and transparency principles for the development of new AI technologies. It’s time to translate principles into practice.

We are giving new transparency and control to the businesses who use AI and face the most potential risk from any flawed decision-making.

The cloud giant also unveiled the Fairness 360 Kit, containing a library of algorithms, code, and tutorials that demonstrate ways to implement bias detection in models. Aleksandra Mojsilovic, Head of AI Foundations at IBM Research, commented:

The issue of trust in AI is top of mind for IBM and many other technology developers and providers.

AI-powered systems hold enormous potential to transform the way we live and work but also exhibit some vulnerabilities, such as exposure to bias, lack of explainability, and susceptibility to adversarial attacks.

These issues must be addressed in order for AI services to be trusted.
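To give a flavour of what such a toolkit measures, here is a minimal, self-contained sketch of one common fairness metric, disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group. This is an illustration only; the function name and sample data are hypothetical, and it is not IBM's actual API.

```python
# Illustrative sketch of a bias metric a fairness toolkit might report.
# Not IBM's actual API; names and data below are hypothetical.

def disparate_impact(outcomes, groups, privileged):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    outcomes:   list of 1 (favorable) / 0 (unfavorable) decisions
    groups:     parallel list of group labels for each decision
    privileged: the label of the privileged group

    A value near 1.0 suggests parity; values well below 1.0
    (often below 0.8, the so-called "80% rule") are commonly
    flagged as potential bias.
    """
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    priv_rate = sum(priv) / len(priv)
    unpriv_rate = sum(unpriv) / len(unpriv)
    return unpriv_rate / priv_rate

# Hypothetical loan decisions: 1 = approved, 0 = denied.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
# Group A is approved 75% of the time, group B only 25%,
# giving a disparate impact of 0.25 / 0.75 ≈ 0.33.
print(disparate_impact(outcomes, groups, privileged="A"))
```

A real toolkit would compute many such metrics across a model's live predictions and surface them on a dashboard, rather than on a static list like this.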

It will be interesting to see how the AI market pans out following IBM's toolkit launch, as well as Google Cloud's launch of its Contact Center AI.