Resource-Aware and Explainable AI Systems for Secure and Scientific Applications

Sreenitha Kasarapu, George Mason University
Seminar

The widespread deployment of embedded and IoT devices has significantly improved connectivity and computational capability but has simultaneously increased exposure to cybersecurity threats, particularly malware. Traditional malware detection techniques, predominantly heuristic and statistical, often suffer from computational inefficiency and a lack of interpretability, limiting their suitability for resource-constrained IoT environments. This research introduces a novel resource- and workload-aware malware detection framework that leverages model parallelism for IoT devices. Using a lightweight regression model, the proposed system dynamically evaluates on-device resources and offloads the malware detection inference workload across neighboring nodes. This distributed approach maintains user privacy and ensures data integrity by partitioning the detection model across multiple IoT nodes. Experimental results demonstrate a 9.8× speedup in malware detection latency while maintaining 96.7% accuracy.
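To make the offloading idea concrete, the following is a minimal Python sketch of one way a regression-guided, workload-aware partition could work: a lightweight linear latency predictor estimates per-node cost, and layers are assigned greedily to the node with the lowest predicted marginal latency. Every name, coefficient, and data structure here (NodeStatus, predict_latency_ms, partition_layers) is an illustrative assumption, not the framework presented in the talk.

```python
# Hedged sketch of resource-aware model partitioning across IoT nodes.
# All names, coefficients, and thresholds are hypothetical placeholders.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class NodeStatus:
    node_id: str
    cpu_load: float      # fraction of CPU in use, 0.0-1.0
    free_mem_mb: float   # available memory in MB

def predict_latency_ms(node: NodeStatus, layers: int) -> float:
    """Lightweight linear-regression estimate of per-node inference latency.
    Real coefficients would be fit offline from profiling data; these
    values are placeholders for illustration."""
    return 5.0 * layers * (1.0 + node.cpu_load) + 0.01 * max(0.0, 512 - node.free_mem_mb)

def partition_layers(total_layers: int, nodes: List[NodeStatus]) -> Dict[str, int]:
    """Greedily assign each model layer to the node with the lowest
    predicted marginal latency, approximating workload-aware model
    parallelism across neighboring IoT nodes."""
    assignment = {n.node_id: 0 for n in nodes}
    for _ in range(total_layers):
        best = min(nodes, key=lambda n: predict_latency_ms(n, assignment[n.node_id] + 1))
        assignment[best.node_id] += 1
    return assignment

if __name__ == "__main__":
    nodes = [NodeStatus("gateway", 0.2, 1024.0), NodeStatus("sensor-a", 0.7, 256.0)]
    print(partition_layers(total_layers=12, nodes=nodes))
```

A greedy assignment like this keeps the decision itself cheap, which matters when the scheduler runs on the same constrained device it is trying to relieve.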

To address interpretability challenges, the research incorporates various explainable AI (XAI) methodologies and systematically analyzes their reliability and consistency in cybersecurity contexts. This study identifies the critical features that influence model decisions, thereby enhancing transparency and trustworthiness. The interpretability approach further extends to the development and optimization of large language models (LLMs), achieving notable improvements in computational efficiency, specifically a 60% cost reduction through LoRA and prompt-tuning techniques. Additionally, a Graph Neural Network (GNN)-based trust evaluation framework was developed to enable secure distributed authentication in IoT networks, yielding a 40% improvement in inference latency. Collectively, this research advances robust, efficient, and interpretable AI systems, contributing to cybersecurity and broader scientific computing applications.
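As one concrete illustration of the kind of feature-attribution analysis described above, the sketch below ranks the inputs of a toy malware classifier by permutation importance: features whose shuffling hurts accuracy most are the ones the model relies on. The synthetic dataset and feature names are assumptions made for demonstration, not the study's actual features or its specific XAI methods.

```python
# Hedged sketch: ranking classifier inputs by permutation importance
# to expose which features drive a (toy) malware detector's decisions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Hypothetical behavioral features; real detectors use many more.
feature_names = ["syscall_rate", "net_bytes_out", "entropy", "mem_writes"]
X, y = make_classification(n_samples=500, n_features=4,
                           n_informative=3, n_redundant=1, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)

# Report features in decreasing order of importance for transparency.
for idx in np.argsort(result.importances_mean)[::-1]:
    print(f"{feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```

Reporting a ranking like this lets an analyst sanity-check whether the model's most influential signals match domain expectations, which is the transparency goal the abstract describes.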
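The GNN-based trust evaluation can likewise be pictured with a small sketch. The code below runs a simplified, weight-free GCN-style message-passing step in plain NumPy: each node's trust score is updated from its neighbors through a symmetrically normalized adjacency matrix, so a low-trust node drags down the scores of nodes that vouch for it. The toy graph and scores are assumptions; the actual framework presented in the talk is a learned GNN, not this fixed propagation rule.

```python
# Hedged sketch of GNN-style trust propagation over a 4-node IoT network.
import numpy as np

# Symmetric adjacency for a toy network (1 = direct communication link).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

trust = np.array([0.9, 0.8, 0.6, 0.2])  # hypothetical initial trust scores

# Symmetric normalization D^{-1/2} (A + I) D^{-1/2}, as in a GCN layer.
A_hat = A + np.eye(4)
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

# Two propagation rounds: neighbors of the low-trust node (index 3)
# see their scores fall, which an authentication scheme could flag.
for _ in range(2):
    trust = A_norm @ trust
print(np.round(trust, 3))
```

Propagating trust through the graph structure, rather than scoring each node in isolation, is what lets a distributed authentication scheme catch nodes that only look trustworthy locally.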