Enhancing safety in AI-based object detection for autonomous vehicles through out-of-distribution monitoring
Type
Degree project for a Master's degree (Examensarbete för masterexamen)
Master's Thesis
Abstract
The advent of Artificial Intelligence (AI) has revolutionized the automotive industry, introducing advanced functionalities such as object detection in autonomous vehicles. However, the inherent weaknesses of AI systems, including prediction uncertainty, limited interpretability, and susceptibility to adversarial attacks, pose significant safety risks. Existing safety standards such as ISO 26262 and ISO 21448 are inadequate for addressing the non-deterministic, probabilistic nature of AI systems. This thesis addresses these challenges by developing and implementing a novel monitoring mechanism based on out-of-distribution (OOD) anomaly detection to enhance the reliability and safety of AI-based object detection systems in autonomous driving. A comprehensive simulation platform was built on the CARLA simulator to generate street-scene images, ensuring complete data autonomy and enabling future scenario simulations for robust validation. The methodology compared the direct use of training images with the extraction and analysis of feature values from hidden layers of deep learning models. Through iterative testing and scenario-based clustering, a feature-distance method based on hidden-layer outputs was identified as an effective metric for the monitoring mechanism. This approach enhances the system's ability to detect anomalies and distributional shifts in real time, addressing safety concerns associated with AI unpredictability. Experimental results demonstrate a global negative correlation between model performance and feature distance, effectively identifying outliers, such as irrelevant animal images, that deviate significantly from the operational design domain data. The feature-distance method improves the detection rate of out-of-distribution samples, demonstrating its industrial applicability within the simulation environment. While real-world testing was not conducted in this study, future work will focus on validating the proposed mechanism in actual autonomous driving systems. This thesis contributes to the research field by introducing a viable safety strategy for AI-implemented automotive functions, aligning with emerging safety standards tailored to AI systems. The proposed monitoring approach holds potential for patent development and future integration into Advanced Driver-Assistance Systems (ADAS) and Autonomous Driving (AD) products. Future research will extend this mechanism to other AI functionalities and explore its scalability and efficiency in real-world scenarios.
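A feature-distance OOD monitor of the kind described can be illustrated with a minimal sketch. The thesis does not spell out its exact formulation here, so the following assumes one plausible realization: hidden-layer activations from the training set are summarized by their centroid and covariance, and the Mahalanobis distance of a new sample's features to that centroid serves as the anomaly score, with larger distances flagging likely out-of-distribution inputs.

```python
import numpy as np

def fit_feature_stats(train_features):
    """Estimate the in-distribution centroid and inverse covariance
    from hidden-layer activations of the training set."""
    mean = train_features.mean(axis=0)
    cov = np.cov(train_features, rowvar=False)
    # Small ridge term for numerical stability (an illustrative choice).
    cov += 1e-6 * np.eye(cov.shape[0])
    return mean, np.linalg.inv(cov)

def feature_distance(features, mean, inv_cov):
    """Mahalanobis distance of each feature vector to the training
    centroid; larger values suggest out-of-distribution inputs."""
    diff = features - mean
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, inv_cov, diff))

# Toy illustration with synthetic features: in-distribution samples
# cluster near the training centroid, an OOD batch lies far away.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 8))
mean, inv_cov = fit_feature_stats(train)

in_dist = rng.normal(0.0, 1.0, size=(5, 8))
ood = rng.normal(6.0, 1.0, size=(5, 8))

d_in = feature_distance(in_dist, mean, inv_cov).mean()
d_ood = feature_distance(ood, mean, inv_cov).mean()
print(d_in, d_ood)  # the OOD batch scores markedly higher
```

In a deployed monitor, `train_features` would come from a chosen hidden layer of the object-detection network, and a threshold on the distance (e.g., a high percentile of training-set distances) would trigger the safety response.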
Keywords
AI safety, out-of-distribution detection, object detection, autonomous driving, CARLA Simulation, ISO 26262