The Explainable AI (XAI) program aims to create a suite of machine learning techniques that produce more explainable models while maintaining a high level of learning performance (prediction accuracy), enabling human users to understand, appropriately trust, and effectively manage the emerging generation of artificially intelligent partners. This article discusses how the decisions predicted by a trained artificial intelligence model can be explained.

This article was originally published by Rulex.

How many algorithms do we encounter on a daily basis? Far more than we might think – from consulting Google Maps to find the fastest route to that new restaurant, to scrolling through our Facebook feed.

But how does artificial intelligence decision-making work? And are all algorithms the same?

Relying blindly on a new technology is extremely risky, particularly when dealing with sensitive data. No one would want an important decision, such as a credit evaluation or a medical diagnosis, to be made by an algorithm we can’t understand. This is where the GDPR and other international regulations come into play, requiring that algorithms involving personal information process data transparently and provide a clear explanation of any predictions made.

In this article we will discuss different types of artificial intelligence techniques, from black box to explainable AI (XAI), shedding some light on the subject.

Explainable what?

First, we should ask ourselves what “explainable” means in the context of AI: what makes a solution transparent, and what counts as an explanation? We distinguish between two types of explanation:

  1. Process-based explanations: regarding governance of the AI solution, the best practices used, how it has been tested and trained, and why it’s robust and fair.
  2. Outcome-based explanations: regarding each specific decision made by the AI solution.

To be 100% explainable, AI solutions should provide both types of explanation. The first fosters trust in artificial intelligence, whereas the second explains the reasons behind each decision. The latter is required by law when automated decisions affect people’s lives.

Shedding Light on AI

In general, there is a tendency to divide algorithms into two categories: black box and explainable algorithms. The difference lies in their ability to provide outcome-based explanations. When using black box algorithms, it is impossible to understand the logic that led to the output, whereas with explainable algorithms it is possible to explain both the process and the specific output. But the reality is a bit more complex than that. Some black box algorithms are more explainable than others, and some explainable algorithms are less explainable than others.

Explainable AI

XAI solutions produce explainable predictions, but some are less understandable than others, meaning that only AI specialists can explain the output, and only after complex analysis. This category includes, for example, linear regression, logistic regression, and LASSO algorithms.
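To illustrate why models in this category are explainable in principle yet require specialist analysis, here is a minimal sketch using logistic regression on an invented toy credit-scoring dataset (feature names and values are purely illustrative): each prediction reduces to a weighted sum of the input features, so the learned weights reveal how each feature pushes the decision.

```python
# Hedged sketch: a logistic regression is explainable because its
# decision is the sign of a linear score (w . x + b). The data below
# is invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy credit-scoring data: [income (k$), existing_debt (k$)]
X = np.array([[50, 5], [20, 15], [80, 2], [30, 20], [60, 4], [25, 18]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = loan approved, 0 = rejected

model = LogisticRegression().fit(X, y)

# The weights are the explanation: a positive weight means the
# feature pushes towards approval, a negative one towards rejection.
for name, w in zip(["income", "existing_debt"], model.coef_[0]):
    print(f"{name}: weight {w:+.3f}")
```

Reading raw coefficients (and odds ratios derived from them) is exactly the kind of analysis that is straightforward for a specialist but opaque to a business user – which is why these models sit at the lower end of the explainability spectrum within XAI.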

Conversely, some XAI solutions have a very high level of explainability and transparency. In these solutions, the output is expressed as rules (e.g., IF-THEN rules: IF rain AND work THEN take the car), which are easy for business experts to understand. Among the algorithms in this category is our Logic Learning Machine (LLM), which produces IF-THEN rules while maintaining high performance.
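To make the rule-based idea concrete, here is a hypothetical sketch of the kind of IF-THEN rule such a model might produce. The rule, thresholds, and feature names are invented for illustration; the LLM’s actual rule format and induction process may differ.

```python
# Hypothetical IF-THEN rule of the kind a rule-based XAI model
# might output (thresholds invented for illustration).
def approve_loan(income_k: float, debt_k: float) -> bool:
    # IF income > 40 AND debt < 10 THEN approve, ELSE reject
    return income_k > 40 and debt_k < 10

print(approve_loan(55, 3))   # True: rule approves
print(approve_loan(25, 12))  # False: rule rejects
```

Unlike a weight vector, a rule like this can be read, challenged, and audited directly by a domain expert with no statistical training.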

The LLM can be used in a variety of business scenarios, especially when a high level of transparency is required by law to protect people’s rights. This happens, for example, when dealing with sensitive decisions like granting loans or detecting cases of fraud. The LLM can also be used to empower a business with high-quality data, and to detect and correct data entry errors.

Black Box AI

The term “black box” stems from the fact that the model’s internal computations are too complicated for any human to comprehend. The output of black box AI may be produced by layer upon layer of interconnected computations involving millions, or even billions, of parameters. This makes it impossible to trace how the final result relates to the original input features, such as age or gender. This category includes neural networks, support vector machines, and deep learning algorithms.

The problem of black box AI can be mitigated by approximating the model with a more understandable one – “opening the black box”. However, the explanations obtained from the approximating model may be inaccurate in the best-case scenario and misleading in the worst, causing issues when applied to sensitive use cases. Techniques in this family include LIME, Shapley values (SHAP), and partial dependence plots (PDP). They therefore differ significantly from the aforementioned explainable techniques.
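The general idea of “opening the black box” can be sketched with a global surrogate model: fit an interpretable decision tree to mimic a black-box model’s predictions, then read the surrogate’s rules. This is a simplified illustration on synthetic data (LIME and SHAP use more refined, local approximations), and the “fidelity” score below measures only how well the surrogate copies the black box – not whether its explanation is faithful to the black box’s true reasoning.

```python
# Hedged sketch of a global surrogate: a shallow decision tree is
# trained to imitate a black-box model's predictions on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(500, 2))             # two numeric features
y = ((X[:, 0] > 40) & (X[:, 1] < 60)).astype(int)  # hidden ground truth

black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's *predictions*, not the labels.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=["feature_0", "feature_1"]))
```

Even a high-fidelity surrogate is still an approximation, which is precisely the risk the paragraph above describes: the surrogate’s tidy rules may not reflect how the black box actually reached its decision.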

Proprietary algorithms are a case apart. They are not black box per se, but the companies that own them hide the details of their AI systems to protect their business. These are the types of AI we interact with most frequently: Google Search’s ranking algorithm, Amazon’s recommendation system, Facebook’s News Feed, and more.

Why Businesses Choose eXplainable AI

Mindful of the general call for data ethics, more and more businesses are choosing eXplainable AI solutions. According to recent market estimates, the global XAI market is expected to grow by 513% by 2030, reaching a value of 21.7 billion U.S. dollars. Choosing eXplainable AI offers companies major advantages, such as:

  1. Guaranteeing better and fairer decisions
  2. Building trust and credibility with customers
  3. Complying with the GDPR and international regulations
  4. Staying human-centric

About the author: Staff

Showcasing and curating a knowledge base of tech use cases from across the web.
