Bridging the Trust Gap: Integrating Explainable AI with IoT-Enabled Machine Learning Systems
Keywords: Explainable AI (XAI), Internet of Things (IoT), Machine Learning (ML), Trustworthy AI, Edge Computing

Abstract
The convergence of the Internet of Things (IoT) and Machine Learning (ML) has enabled powerful predictive and automated systems. However, the black-box nature of advanced ML models, particularly deep learning, poses serious challenges to transparency, trust, and regulatory compliance, especially in high-stakes IoT applications. This paper examines how Explainable AI (XAI) techniques can be integrated to improve the interpretability and accountability of ML models deployed in IoT systems. We survey recent XAI methods applicable to IoT-ML pipelines, examine their application in smart healthcare and industrial IoT, and propose a framework for their integration. The paper also discusses open challenges, including computational overhead, real-time explanation generation, and the lack of standardized evaluation metrics. Through a review of recent literature, we argue that XAI is not an optional add-on but an essential element for the sustainable and ethical deployment of intelligent IoT systems.