Identification of Hate Speech using Explainable Artificial Intelligence (XAI)

Ramesh Narwal, Dr. Himanshu Aggarwal

Abstract

Artificial Intelligence (AI) has become a center of attention as it has surpassed human beings in tasks such as speech recognition, image recognition, and recommendation, yet these systems lack reliability and explainability. AI models are considered black-box models because their underlying mechanisms are too complex to understand and they do not justify their predictions and decisions, which is the main reason for the lack of trust in these models. Sometimes these AI systems make errors that can be tragic depending on the particular application: an error in a driverless car could lead to a crash, and human lives depend on medical AI systems. Methods and models are therefore needed that explain existing AI models and tackle the issues described above, which is why Explainable AI (XAI) is a hot research topic. In this article, the authors explain various issues, concepts, and implications of XAI with the help of a hate speech identification case study.
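To illustrate how an explanation can be attached to a hate speech classifier's prediction, the following is a minimal sketch using LIME, a widely used model-agnostic XAI technique, on top of a simple scikit-learn text classifier. The model, toy corpus, and choice of LIME are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch only: the paper does not specify the authors'
# model or explanation method. A TF-IDF + logistic regression pipeline
# stands in for the black-box classifier, and LIME produces a local,
# word-level explanation of a single prediction.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Hypothetical toy corpus; a real study would use a labeled hate speech dataset.
texts = [
    "I hate those people, they should disappear",
    "What a lovely day, hope you are well",
    "Those people are vermin and deserve nothing",
    "Thanks for your help, much appreciated",
]
labels = [1, 0, 1, 0]  # 1 = hate speech, 0 = not hate speech

# The "black-box" classifier to be explained.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# LIME perturbs the input text (by dropping words) and fits a local
# surrogate model to estimate how much each word pushed the
# prediction toward the "hate speech" class.
explainer = LimeTextExplainer(class_names=["not hate", "hate"])
explanation = explainer.explain_instance(
    "I hate those people",
    model.predict_proba,
    num_features=4,
)
print(explanation.as_list())  # (word, weight) pairs behind the prediction
```

The printed word weights are what make the otherwise opaque prediction inspectable: a human reviewer can see which terms drove the "hate speech" label and judge whether the model's reasoning is trustworthy.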

Published
2021-10-20
How to Cite
Ramesh Narwal, Dr. Himanshu Aggarwal. (2021). Identification of Hate Speech using Explainable Artificial Intelligence (XAI). Design Engineering, 2021(02), 1137-1141. Retrieved from http://thedesignengineering.com/index.php/DE/article/view/5516
Section
Articles