What is eXplainable AI (XAI)?

eXplainable AI (XAI) refers to methods and techniques in the application of
artificial intelligence (AI) such that the predictions of machine learning
models can be interpreted and understood by humans.

XAI is an emerging branch of AI in which AI systems explain the reasoning
behind their decisions.

Explainability

XAI answers the ‘why’ and ‘how’ of AI decisions.

Transparency

XAI clarifies the inner workings of ‘black-box’ AI models.

Trust

XAI improves trust between humans and machines.

Scope of eXplainable AI

  • With the current AI boom, there has been a recent trend towards considering the implications of artificial intelligence. Explainability in AI has been highlighted as an area of crucial importance by several research groups, such as Google, IBM and Harvard.
  • Explainable AI (XAI), the principle of designing systems whose decisions can be understood and interpreted, is critical to developing fair, safe systems in which we can place our trust. The core promise of XAI is to open up the “black box” of machine learning: systems whose internal workings are opaque and unfriendly to human eyes.
  • Without careful data management, AI models can learn spurious correlations in the data. Tools that visualize which parts of the data contribute to a result can help spot problems during training, saving development time and producing more performant models.
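One common way to see which inputs drive a model's results, as the last point describes, is permutation importance: shuffle one feature at a time and measure how much accuracy drops. The sketch below is a minimal illustration with a toy rule-based "model" (all names and the toy data are assumptions for illustration, not part of any specific library):

```python
import numpy as np

def permutation_importance(model_fn, X, y, n_repeats=5, seed=0):
    """Estimate each feature's importance by measuring how much
    accuracy drops when that feature's values are shuffled."""
    rng = np.random.default_rng(seed)
    base_acc = np.mean(model_fn(X) == y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy feature j's signal
            drops.append(base_acc - np.mean(model_fn(Xp) == y))
        importances[j] = np.mean(drops)
    return importances

# Toy setup: the label depends only on feature 0, and the "model"
# thresholds feature 0 — so only feature 0 should look important.
X = np.column_stack([
    np.repeat([0.0, 1.0], 50),                 # informative feature
    np.random.default_rng(1).random(100),      # noise feature
])
y = (X[:, 0] > 0.5).astype(int)
model_fn = lambda X: (X[:, 0] > 0.5).astype(int)

imp = permutation_importance(model_fn, X, y)
```

A large drop for the first feature and a near-zero drop for the second flags which input the model actually relies on; a spurious feature showing high importance would be a red flag for the data problems mentioned above.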

Why do we need eXplainable AI?

Explainability is motivated by the lack of transparency of black-box approaches, which hinders trust in and acceptance of AI generally and ML specifically. Rising legal and privacy requirements, e.g. the new European General Data Protection Regulation (GDPR), will make black-box approaches difficult to use in business, because they often cannot explain why a machine decision was made.

The extent of an explanation currently may be, “There is a 95% chance this is what you should do,” but that’s it. When an algorithm cannot describe which features contributed to identifying a legitimate target, it leaves the door open to debate over racial and stereotype biases in the model. Moreover, in the case of a false target, explanations would help diagnose the cause of failure and improve the model.
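For a simple model class, going beyond “95% chance” to a feature-by-feature account is straightforward. The sketch below decomposes one prediction of a linear scorer into per-feature contributions (the weights, feature names, and input values are made up for illustration):

```python
import numpy as np

# Hypothetical linear scorer: each feature's contribution to a single
# prediction is weight * value, so the decision decomposes exactly.
weights = np.array([0.8, -0.5, 0.1])   # assumed model weights
feature_names = ["income", "debt", "age"]
x = np.array([1.2, 0.4, 0.9])          # one applicant's scaled features

contributions = weights * x            # per-feature share of the score
score = contributions.sum()

for name, c in zip(feature_names, contributions):
    print(f"{name}: {c:+.2f}")
print(f"total score: {score:+.2f}")
```

For non-linear models no such exact decomposition exists, which is precisely why dedicated XAI techniques (surrogate models, attribution methods) are an active research area.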

Benefits of eXplainable AI

  • Improves AI model performance, as explanations help pinpoint issues in data and feature behavior.
  • Better decision making, as explanations provide added information and confidence for the human in the loop to act wisely and decisively.
  • Gives a sense of control, as the AI system owner clearly knows the system’s behavior and boundaries.
  • Gives a sense of safety, as each decision can be checked against safety guidelines, with alerts on violations.
  • Builds trust with stakeholders, who can see the reasoning behind each decision made.
  • Enables monitoring for ethical issues and violations due to bias in training data.
  • Better mechanism to comply with accountability requirements within the organization, for auditing and other purposes.
  • Better adherence to regulatory requirements (like GDPR), where a ‘right to explanation’ is a must-have for a system.