An Exploration of Explainability for Internal Stakeholders: A Qualitative Study

dc.contributor.author: Admekie, Tedla Bayou
dc.contributor.author: Pravin Karthick, Sushmitha
dc.contributor.department: Chalmers tekniska högskola / Institutionen för data och informationsteknik (sv)
dc.contributor.department: Chalmers University of Technology / Department of Computer Science and Engineering (en)
dc.contributor.examiner: Penzenstadler, Birgit
dc.contributor.supervisor: Knauss, Eric
dc.date.accessioned: 2024-03-27T17:08:59Z
dc.date.available: 2024-03-27T17:08:59Z
dc.date.issued: 2024
dc.date.submitted:
dc.description.abstract: AI applications are becoming increasingly prevalent across domains and industries. However, the challenge of understanding the inner workings of AI/ML systems extends beyond end users and significantly affects AI/ML developers and testers. This research investigates aspects and challenges of Explainable Artificial Intelligence (XAI) concerning internal stakeholders. We conducted a qualitative interview study with experts and researchers specializing in explainability and in engineering AI-based systems. Our findings emphasize the importance of explainability specifically for internal stakeholders, an aspect that has received limited attention in previous research. We identified research gaps in the following areas: the lack of exploration into how explainability can enhance AI/ML model testing, the need for further investigation into XAI evaluation metrics that align with diverse internal stakeholder needs, and the knowledge gap surrounding the concept of XAI and its integration into existing processes. Additionally, we present the challenges that internal stakeholders encounter when incorporating explainability features. The key results show that explainability positively impacts testability, as it can serve as a tool for guiding the test process. Explainability (XAI) methods offer several notable benefits for both developers and testers: they aid in debugging model output, an essential part of error analysis, and in detecting potential biases in the data; further benefits are discussed in the thesis. Furthermore, there is a need for an accepted set of standardized metrics to assess the trustworthiness of explainability, that is, to evaluate the effectiveness of the explanations themselves. Our study offers foundational work for future research and underscores critical research gaps. The ability to design explainability for internal stakeholders holds the potential to facilitate the development of complex AI/ML systems.
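The abstract's claim that explainability methods can support error analysis and bias detection for developers and testers can be illustrated with a small, hypothetical sketch. The example below is not taken from the thesis; it assumes scikit-learn is available and uses permutation importance (one simple model-agnostic explanation technique) on a public dataset, with placeholder model and data choices, to show how such output might guide a test process.

# Minimal illustrative sketch, not code from the thesis: dataset, model, and
# method choices are assumptions made for demonstration purposes only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train a simple classifier as a stand-in for the AI/ML system under test.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Rank features by how much shuffling each one degrades test accuracy; a
# tester can use this to spot over-reliance on an unexpected or sensitive
# feature (a possible bias) and to decide where to focus further test cases.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda kv: kv[1], reverse=True)

print("Features the model relies on most:")
for name, importance in ranked[:5]:
    print(f"  {name}: {importance:+.4f}")

The same idea extends to local explanation tools such as LIME or SHAP when a per-prediction view of an individual failing case is needed during debugging.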
dc.identifier.coursecode: DATX05
dc.identifier.uri: http://hdl.handle.net/20.500.12380/307636
dc.language.iso: eng
dc.setspec.uppsok: Technology
dc.subject: explainability
dc.subject: XAI
dc.subject: AI/ML systems
dc.subject: AI testing
dc.subject: requirements engineering
dc.subject: stakeholders
dc.subject: internal stakeholders
dc.subject: testability
dc.title: An Exploration of Explainability for Internal Stakeholders: A Qualitative Study
dc.type.degree: Examensarbete för masterexamen (sv)
dc.type.degree: Master's Thesis (en)
dc.type.uppsok: H
local.programme: Software engineering and technology (MPSOF), MSc
Original bundle: CSE 24-07 TBA SPK.pdf (Adobe Portable Document Format, 1.05 MB)