Leveraging Generative AI for Code Refactoring: A Study on Efficiency, Maintainability, and Developer Productivity
Keywords:
Generative AI, Code Refactoring, Software Maintainability, Developer Productivity, Automation in Software Engineering
Abstract
This paper proposes a novel hybrid AI framework that integrates machine learning, deep learning, reinforcement learning, and natural language processing to enhance automated software testing and bug prediction in agile environments. The framework addresses challenges of adaptability, scalability, and accuracy, and is designed for the increasingly dynamic nature of modern development cycles. An evaluation across several real agile projects demonstrates its feasibility, showing strong test coverage, effective bug detection, and a streamlined testing process. The study also offers a practical blueprint for industrial adoption, including deployment within CI/CD pipelines and alignment with agile workflows. These contributions both advance the theoretical understanding of AI-based quality assurance and give practitioners actionable guidance for applying it in real-world settings.
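The abstract does not disclose implementation details, so the following Python sketch is purely illustrative and not the paper's method: it shows how a change-level bug-prediction model of the kind such a framework might embed in a CI/CD pipeline could be trained and queried. The feature set, synthetic data, and review threshold are all assumptions.

```python
# Illustrative sketch only: a minimal change-level bug-prediction model that a
# CI/CD step might call. Features, data, and threshold are assumed, not taken
# from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(42)
n = 2000

# Synthetic change metrics: lines added, lines deleted, files touched,
# prior defects in the touched files, and author commit count.
X = np.column_stack([
    rng.poisson(30, n),
    rng.poisson(12, n),
    rng.integers(1, 10, n),
    rng.poisson(2, n),
    rng.integers(1, 500, n),
])

# Synthetic label: larger, historically defect-prone changes by less
# experienced authors are more likely to introduce a bug.
risk = 0.02 * X[:, 0] + 0.05 * X[:, 3] - 0.002 * X[:, 4]
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-(risk - 1.0)))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(f"precision={precision_score(y_test, pred):.2f} "
      f"recall={recall_score(y_test, pred):.2f}")

# Hypothetical CI gate: flag a high-risk incoming change for extra review.
new_change = np.array([[120, 40, 6, 5, 12]])
prob = model.predict_proba(new_change)[0, 1]
if prob > 0.7:  # assumed review threshold
    print("High-risk change: request additional tests and review")
```

In a pipeline, a step like this would typically run on each pull request, with the model retrained periodically on the project's own defect history rather than on synthetic data as above.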
License
Copyright (c) 2025 Well Testing Journal

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
This license requires that re-users give credit to the creator. It allows re-users to distribute, remix, adapt, and build upon the material in any medium or format, for noncommercial purposes only.