Enhancing Code Refactoring in Python: Leveraging Large Language Models
| dc.contributor.author | Naik, Arpita | |
| dc.contributor.author | Rajan Shylaja, Rithika | |
| dc.contributor.department | Chalmers tekniska högskola / Institutionen för data och informationsteknik | sv |
| dc.contributor.department | Chalmers University of Technology / Department of Computer Science and Engineering | en |
| dc.contributor.examiner | Heyn, Hans-Martin | |
| dc.contributor.supervisor | Fotrousi, Farnaz | |
| dc.date.accessioned | 2025-09-10T12:56:26Z | |
| dc.date.issued | 2024 | |
| dc.date.submitted | ||
| dc.description.abstract | Software engineering is evolving rapidly, with constant demand for new, reliable features. Maintaining existing features is difficult due to code smells, which create technical debt and reduce the overall quality of the product. At the same time, manual refactoring is error-prone and time-consuming. Large Language Models (LLMs) are becoming increasingly prevalent across the software development life cycle. This study aims to improve code refactoring suggestions in Python using LLMs such as ChatGPT and GitHub Copilot. Code refactoring is an important aspect of software engineering: it addresses code smells and thereby improves code quality. The research investigates the refactoring suggestions provided by LLMs compared with those provided by humans, focusing specifically on the Python programming language. To address this objective, qualitative research methods are used: semi-structured interviews with developers, analyzed through narrative analysis, to identify the code smells that are prioritized or ignored in industry and the challenges faced during refactoring; a pre-study on prompt engineering to obtain an optimized prompt for comparing and analyzing refactoring suggestions between two LLMs; and a controlled experiment comparing refactoring suggestions from LLMs and developers. The study identifies common refactoring patterns and assesses the ability of LLMs to offer correct and understandable refactoring solutions. The research further demonstrates that LLMs can assist with refactoring suggestions, but human developers remain better at understanding complex code smells and addressing code structure. At the same time, LLMs are capable of improving code structure and thereby enhancing the refactoring process.
The results highlight the merits and demerits of code refactoring with LLMs and how LLMs can be adopted for refactoring in Python development. | |
| dc.identifier.coursecode | DATX05 | |
| dc.identifier.uri | http://hdl.handle.net/20.500.12380/310455 | |
| dc.language.iso | eng | |
| dc.relation.ispartofseries | CSE 24-192 | |
| dc.setspec.uppsok | Technology | |
| dc.subject | LLM, Python, Code Refactoring, Software Engineering | |
| dc.title | Enhancing Code Refactoring in Python: Leveraging Large Language Models | |
| dc.type.degree | Examensarbete för masterexamen | sv |
| dc.type.degree | Master's Thesis | en |
| dc.type.uppsok | H | |
| local.programme | Software engineering and technology (MPSOF), MSc |
