At first, there were no machines. Humans relied solely on manual labor to survive. Then they invented tools, and from those tools evolved more complex tools. Eventually, as a result of innovation spanning thousands of years, these tools became machines.

This post was originally published by Jorge Torres, CTO of MindsDB.

Machines were invented to augment our feeble human capacities because, by design, they surpass our physical abilities. Mechanical machines give us leverage in raw power, and computing machines give us the means to generate, organize, and process vast amounts of data at speeds we simply cannot match on our own.

We are now in the infancy of another human innovation, brewed inside the computing machine, so that the machine can learn (Machine Learning, or ML), reason, and solve problems (Artificial Intelligence, or AI) better than we can. If we get this right, these machines will be at the forefront of the next wave of human advancement. But what are the dangers, and how can we avoid them?

Dangerous AI

The obvious dangers of AI are those that can harm humanity or life in one way or another. The idea of AI/ML has been around for less than 100 years, a rather small interval compared to the roughly 200,000 years since Homo sapiens came to be. As such, it is easy to wonder about the possible exponential evolution of AI and ask ourselves questions like: when will we build machines that surpass human cognitive capabilities (The Singularity), and what will happen then?

I invite you, for now, to leave The Singularity questions to science fiction and focus on one certain thing: ML/AI as it stands today already presents meaningful dangers, and it is our responsibility to mitigate those liabilities as soon as possible. Moreover, our strategy should be to continuously tackle the risks as AI continues to evolve. On the one hand, we want to maximize the chances of a positive outcome for that day in the future when "The Singularity" happens, if it ever happens. In any case, we want to be prepared for it, and hence maximize our chances of making it to that future at all.

In light of these dangers, let's focus on what I believe is one of the most pressing issues with ML/AI today: the tendency to over-trust AI/ML-based systems as they stand now, all under the blurred ideal of making informed decisions. From that issue follows what should become the next trend: thoughtfully designing AI-based systems that augment our human cognitive capabilities rather than replace them.

Let's begin with what I mean when I say we over-trust AI/ML. Today, if you had to pick a teammate for a game of chess and your options were the best human player or the best computer program at this task, you wouldn't be crazy to pick the computer program. If, during this game, you happened to win by blindly trusting the decisions the program makes, wouldn't you think that trusting such a program is the most informed decision? Are we really making the best decision when we rely blindly on a system that can take in all the data that a human can't?

To answer those questions, we should also ask ourselves: what are the implications of blindly trusting a particular AI system? The answer varies depending on the situation. To develop that thought further, let's examine the hypothesis that there are cases in which increasing reliance on machines for decision making has already proved to be a threat to us humans.

One recent and sad example is found in aviation (the Boeing 737 MAX issues), where one could reason that aircraft are increasingly controlled by autopilot systems and pilots are relegated to performing routine procedures during takeoff and landing. This has serious implications for a pilot's ability to maneuver the aircraft in unexpected situations. One could also argue that the increasing reliance on machine-made decisions is causing pilots to forgo their cognitive skills and their ability to navigate difficult situations, because they are blindly trusting the autopilot systems. But is the answer to this issue to ditch autopilot systems altogether?

Perhaps one ideal scenario is one in which experts (in this case the pilots), when they observe something that goes against their intuition, can do what comes naturally to us humans, which is to question: autopilot, what are you doing and why? If the answer is not convincing, they (the pilots) should be able to take control of the situation at once and, as humans with the privilege of enormous cognitive capabilities, bear the responsibility for their actions.

If you agree that the above would be an ideal solution, you are not alone, as it illustrates the need for what some call "Explainable AI," or XAI. We should be able to ask any AI-based system for explanations that make sense about the choices it is making. Being inquisitive is one of the very human traits that make us unique, and we should, by all means, preserve that attribute.

Explainable AI (XAI)

To give you a little background without getting too deep into the details: people at DARPA (the Defense Advanced Research Projects Agency) coined the term Explainable AI (XAI) as a research initiative to unravel one of the critical shortcomings of AI: the more sophisticated ML/AI models become, the less interpretable they tend to be. Moreover, AI in its current form is designed to learn in specific domains and from concrete examples of data, narrowed only to the specific problem it is trained to solve; it still takes the human capacity for abstract thinking to understand the full context of the problem.

Given the narrow scope of understanding that these AI/ML-based systems have, it is natural to argue that if these algorithms are used to make critical decisions concerning someone's life or society in general, then (it is obvious to me, and I hope it is to you too) we should not get rid of them, but neither should we delegate to these systems the full responsibility for making such critical decisions.

To elaborate on this, I'd suggest you read Michael Jordan (not the basketball player, but a well-known engineer and professor at Berkeley), who has laid out his thoughts at length in his article "Artificial Intelligence — The Revolution Hasn't Happened Yet". He and I agree on something: we should ask ourselves what important developments ML/AI needs today to safely augment the human capacity to solve very complicated problems. In my opinion, XAI is certainly one of those important developments.

XAI Important Questions

Think about another example in which systems give recommendations: if a doctor uses an AI system to help her diagnose a medical condition, a predicted diagnosis is not enough. The logical flow is that the doctor should also be able to ask the system to explain the predicted diagnosis, and ideally the answer should come in terms as interpretable as possible to the doctor, rather than in the complicated jargon of the machine learning model and its tunable parameters.
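To make that concrete, here is a minimal, purely hypothetical sketch of how a prediction plus its feature contributions could be rendered in plain language for a clinician. The condition, factors, and weights below are invented for illustration and do not come from any real diagnostic model.

```python
# Hypothetical sketch: turning a prediction and its feature contributions
# into a plain-language explanation (all values below are invented).
def explain_in_plain_language(diagnosis, confidence, contributions):
    """contributions: list of (factor, weight) pairs; positive weights
    support the diagnosis, negative weights argue against it."""
    lines = [f"Predicted diagnosis: {diagnosis} (confidence {confidence:.0%})"]
    for factor, weight in sorted(contributions, key=lambda c: -abs(c[1])):
        direction = "supports" if weight > 0 else "argues against"
        lines.append(f"- {factor} {direction} this diagnosis (weight {weight:+.2f})")
    return "\n".join(lines)

print(explain_in_plain_language(
    "type 2 diabetes", 0.87,
    [("elevated fasting glucose", 0.45),
     ("BMI above 30", 0.22),
     ("normal blood pressure", -0.08)],
))
```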

This is an open invitation to all of us in the Machine Learning disciplines to ask ourselves how we build systems that can predict and also explain, and to acknowledge the importance of that endeavor. Domain experts in all disciplines, empowered by explanations from AI/ML, should be able to further the knowledge in their fields by leveraging the increasing amounts of data and computational power available to them.

Some of you might still be wondering: ultimately, are domain experts really needed? For this I suggest you read the essay "The Bitter Lesson" by Rich Sutton (one of the big thinkers at Google's DeepMind), in which he argues that domain expertise cannot compete with the pattern recognition capabilities of Machine Learning models. We don't have to agree or disagree. However, it is my opinion that we should take the word "compete" out of the equation and instead simply think of augmenting our human cognitive capabilities.

Therefore, a second invitation is for these new tools to be designed so that everyone can understand that problems can be viewed from a data perspective. These tools should also be simple enough that anyone can be a "data scientist" in their own domain, empowered by the wonderful pattern recognition capabilities of Machine Learning to find new insights and discoveries hidden in data that may not be obvious to us humans at first sight. If such systems can not only uncover patterns but also explain them to us, we will not only become aware of the exploratory capabilities of search and learning that machines possess, but we can also shape the next form of human Intelligence Augmentation (IA).

That's why there is a need to build a separate class of HCI (Human-Computer Interface), which today, in my opinion, boils down to solving two problems:

First: When using an AI system, what are the basic questions that, regardless of the problem, we must be able to get answers to?

At MindsDB, we narrow the scope of this problem to predictions, as we believe that predictions ultimately inform most decision-making processes. As such, we should be able to answer three basic questions about any AI-assisted decision (a hypothetical sketch of what such answers could look like in code follows the list):

  • Can I trust this prediction and why?
  • Why this prediction and not something else?
  • How can I make the predictions more reliable/better?
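As a thought experiment, here is one hypothetical shape an "explainable prediction" could take so that each of the three questions above has a concrete answer. This is not MindsDB's actual API; every class and field name below is an assumption made for illustration.

```python
# Hypothetical sketch of a prediction object that answers the three questions
# above (not a real MindsDB interface; all names are illustrative).
from dataclasses import dataclass
from typing import List

@dataclass
class ExplainablePrediction:
    value: str                    # the prediction itself
    confidence: float             # "Can I trust this prediction and why?"
    evidence: List[str]           # the inputs that drove it
    counterfactuals: List[str]    # "Why this prediction and not something else?"
    improvement_hints: List[str]  # "How can I make the predictions more reliable?"

prediction = ExplainablePrediction(
    value="loan approved",
    confidence=0.91,
    evidence=["stable income for 5+ years", "low existing debt"],
    counterfactuals=["would flip to 'rejected' if debt-to-income ratio rose above 0.4"],
    improvement_hints=["confidence drops when employment history is missing; collect it"],
)
print(prediction)
```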

Second: How do we actually explain the answers to those questions to other humans?

DARPA has a separate team dedicated to studying the psychology of explanation. This team is focused on unearthing the literature on the science of explanation from a psychological point of view, in order to build frameworks that help measure the effectiveness of an explanation. This is a step towards UX in XAI.

Apart from DARPA, there are many research initiatives in large corporations as well as universities working towards building tools for XAI. At MindsDB, we see the path to developing systems capable of answering the fundamental questions of XAI as following two stages (Soft XAI, and then Introspective AI):

Soft XAI

Imagine you are trying to understand why an animal behaves the way it does, but you can't communicate with that animal. Similarly, you may be trying to understand the rationale behind the decisions of an AI system, but the system cannot currently answer on its own why it does the things it does. What would be some approaches to solving these problems?

Deep Approach

The deep approach is to try to understand everything that happens inside the system: to understand in detail the system's different building blocks, how they learn, and how those tasks affect the net outcomes. For artificial neural networks, for instance, this means looking at each perceptron's parameters, learning what makes them 'activate' and how those activations propagate into concepts we can understand. Interpreting these results, however, requires a high degree of specialization, and it has proved hard to generalize.
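As a minimal sketch of what "looking inside" can mean in practice, the snippet below registers a forward hook on the hidden layer of a tiny PyTorch network and records which units activate for a given input. The model, layer choice, and data are illustrative assumptions, not any particular production system.

```python
# Sketch of the "deep" approach: inspecting hidden-layer activations
# of a small, illustrative PyTorch network.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4, 8),   # hidden layer whose activations we want to inspect
    nn.ReLU(),
    nn.Linear(8, 2),   # two-class output
)

activations = {}

def capture(name):
    # Forward hook that records a layer's output every time it fires.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model[1].register_forward_hook(capture("hidden_relu"))

x = torch.randn(1, 4)   # a single illustrative input
logits = model(x)

# Non-zero ReLU outputs hint at which internal features the network relies on,
# but mapping them back to human concepts still takes specialist analysis.
print(activations["hidden_relu"])
```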

A parallel to this is trying to derive explanations of our behavior by analyzing the neural pathways and internals of the brain under certain stimuli. This can lead to important discoveries about how the brain works, and the neuroscience and machine learning fields continue to learn from each other's findings. However, we have found that an alternative route to interpretable results comes from a simpler approach.

Black Box Approach

The black box approach is to get answers by treating the system as exactly that: a black box whose internals we don't know, and whose behavior we understand only by manipulating its inputs and figuring out what is interesting in its outputs. This provides a good framework for gaining insights and has worked well in other fields, all the way from physics to psychology.
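A minimal sketch of that idea, assuming nothing about the model beyond access to its predict function: perturb one input feature at a time and measure how often the prediction changes. The model and dataset here (a scikit-learn random forest on the iris data) are stand-ins chosen purely for illustration.

```python
# Black-box sketch: probe a model only through its inputs and outputs.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def sensitivity(predict, x, noise=0.1, trials=100):
    """Estimate how much each feature sways the prediction by perturbing
    one feature at a time and counting how often the output flips."""
    base = predict([x])[0]
    scores = []
    for j in range(len(x)):
        flips = 0
        for _ in range(trials):
            x_perturbed = x.copy()
            x_perturbed[j] += np.random.normal(0, noise)
            flips += predict([x_perturbed])[0] != base
        scores.append(flips / trials)
    return scores

# Higher scores suggest features the prediction is more sensitive to.
print(sensitivity(model.predict, X[0]))
```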

Given the practicality of this approach, MindsDB has adopted it as its means of explainability, and along that road we have been privileged to stand on the shoulders of giants, studying and borrowing ideas from others: Google's What-If Tool, IBM's Seq2Seq-Vis, VisLab at the Hong Kong University of Science and Technology, SHAP, and LIME, to name a few.
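For one concrete taste of the tools named above, here is a short sketch using LIME's tabular explainer on the same illustrative iris model (assumes `pip install lime scikit-learn`; this shows LIME's own API, not MindsDB's implementation of it).

```python
# Sketch: local, black-box explanation of one prediction with LIME.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
# Each (feature condition, weight) pair says how much that feature pushed
# the prediction toward or away from the class being explained.
print(explanation.as_list())
```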

Introspective AI

Once we have achieved Soft XAI, we can actually teach machines how to do it themselves. This will involve re-engineering AI/ML so that it can derive explanations of itself and of why it behaves the way it does, in a way similar to how we learn to explain our own actions and thoughts. We believe the next generation of AI systems will be taught to analyze their own behavior.

In my opinion, Introspective AI will derive from the black box approach and from our understanding of language. The reasoning comes from the observation that we humans poorly understand our own brains, yet can still come up with explanations for why we think one way or another. We have evolved to be able to explain our actions and our interpretations of the world around us. One could argue that explainability itself, as the fundamental piece for transferring knowledge, has shaped our brains throughout evolution, setting humans apart as the only species capable of continuously building knowledge on top of knowledge.

Looking Forward

With all the excitement around AI and its capabilities, XAI technology has an important role to play in keeping AI safe and sane for future generations. We at MindsDB hope to see, and contribute to, real outcomes in the next couple of years. By then, XAI will most likely have become a statutory requirement for building any AI-powered application for the real world.

Shyam Purkayastha

About the author

Shyam is the Creator-in-Chief at RadioStudio. He is a technology buff and is passionate about bringing forth emerging technologies to showcase their true potential to the world. Shyam guides the team at RadioStudio, a bunch of technoholiks, to imagine, conceptualize and build ideas around emerging trends in information and communication technologies.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
TechForCXO Weekly Newsletter
TechForCXO Weekly Newsletter

TechForCXO - Our Weekly Newsletter Delivering Technology Use Case Insights for CXOs

>