Explainability
XAI answers ‘why’ and ‘how’ AI decisions are made
Explainable AI (XAI) refers to methods and techniques in the application of
artificial intelligence (AI) such that the predictions of machine learning
models can be interpreted and understood by humans.
XAI is an emerging branch of AI in which systems are designed to explain the
reasoning behind the decisions they make.
XAI clarifies the inner workings of ‘black-box’ AI models
XAI improves trust between humans and machines
Explainability is motivated by the lack of transparency of black-box approaches, which does not foster trust in and acceptance of AI generally and ML specifically. Growing legal and privacy requirements, e.g. the European General Data Protection Regulation (GDPR), will make black-box approaches difficult to use in business, because they often cannot explain why a machine decision was made.
Today, an explanation may extend no further than, “There is a 95% chance this is what you should do,” but that’s it. When an algorithm cannot describe the features that contributed to identifying a legitimate target, it leaves room for debate over racism and stereotyping issues that bias the model. Moreover, in the case of a false target, explanations would help diagnose the cause of failure and improve the model.
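A minimal sketch of the kind of feature-level explanation described above, using permutation importance from scikit-learn; the dataset and model are illustrative placeholders, not taken from the text:

```python
# Sketch: surfacing which input features a model's predictions rely on,
# using permutation importance (a model-agnostic explanation technique).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative dataset and model (assumptions, not from the text).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much the test score drops:
# a large drop means the model genuinely depends on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Report the five most influential features, most important first.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```

Such a ranking turns “95% chance” into a checkable statement about which inputs drove the decision, which is exactly what is needed both to audit for bias and to diagnose a false target.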