Predicting the Need for Test Maintenance Using LLM Agents - Applying Test Maintenance Factors to Changes in Production Code to Identify If and Where Test Cases Need to Be Updated

dc.contributor.authorLEMNER, LUDVIG
dc.contributor.authorWAHLGREN, LINNEA
dc.contributor.departmentChalmers tekniska högskola / Institutionen för data och informationsteknik
dc.contributor.departmentChalmers University of Technology / Department of Computer Science and Engineering
dc.contributor.examinerFeldt, Robert
dc.contributor.supervisorGay, Gregory
dc.date.accessioned2025-02-11T14:21:23Z
dc.date.available2025-02-11T14:21:23Z
dc.date.issued2024
dc.date.submitted
dc.description.abstractTest maintenance, the act of modifying and updating test cases so that they keep pace with changes to the production code, is a necessary but time-consuming and effort-intensive activity. One way to alleviate this effort is to automate parts of the test maintenance process; however, setting up and maintaining automation tools can be time-consuming as well. Generative AI and Large Language Models (LLMs) offer new avenues for automation and for lessening the test maintenance burden. One such avenue is LLM agents: AI systems that reason, plan, and use tools to achieve their goals. This thesis was conducted as an exploratory case study at Ericsson and investigated how generative AI can ease test maintenance, specifically how LLM agents can be used to predict the need for test maintenance. The work had three phases: identifying factors that trigger test maintenance; exploring the capabilities of generative AI and how it might support test maintenance; and, using the results of the first two phases, building a prototype that predicts whether, and if so where, test maintenance is needed based on changes to the production code. We identified 40 factors that, when changed in production code, create a need for test maintenance, and we demonstrated how these factors can serve as triggers in a setup with LLM agents. Of the four setups evaluated, the best-performing one used multiple LLM agents coordinated by a planning agent, with access to both the production code and natural-language summaries of the test cases. Through a thorough literature review, we also identified test maintenance actions that LLMs can take or support.
These findings demonstrate both the possibilities and the current limitations of LLMs for test maintenance. The results also highlight that, although much LLM research in software engineering has focused on code generation, the capabilities of LLMs are considerably broader; this study provides examples of how LLM agents can be applied more widely.
dc.identifier.coursecodeDATX05
dc.identifier.urihttp://hdl.handle.net/20.500.12380/309116
dc.language.isoeng
dc.setspec.uppsokTechnology
dc.subjectSoftware engineering (SE)
dc.subjecttest maintenance
dc.subjectlarge language model (LLM)
dc.subjectLLM agent
dc.titlePredicting the Need for Test Maintenance Using LLM Agents - Applying Test Maintenance Factors to Changes in Production Code to Identify If and Where Test Cases Need to Be Updated
dc.type.degreeExamensarbete för masterexamen
dc.type.degreeMaster's Thesis
dc.type.uppsokH
local.programmeSoftware engineering and technology (MPSOF), MSc

Download

Original bundle

Name:
CSE 24-82 LL LW.pdf
Size:
3.1 MB
Format:
Adobe Portable Document Format

License bundle

Name:
license.txt
Size:
2.35 KB
Format:
Item-specific license agreed upon to submission