SelfEvolve: A Code Evolution Framework via Large Language Models

06/05/2023
by Shuyang Jiang, et al.

Large language models (LLMs) have already revolutionized code generation, after being pretrained on publicly available code data. However, while various methods have been proposed to augment LLMs with retrieved knowledge and enhance the quality of code generation, the performance of these retrieval-based methods is limited by the strength of the retrievers used. In addition, while LLMs show great emergent ability, they still struggle to produce correct code in a single turn. To address these challenges, we propose a novel two-step pipeline, called SelfEvolve, that leverages LLMs as both knowledge providers and self-reflective programmers. Unlike retrieval-based methods, SelfEvolve obtains knowledge from the input prompt and generates intermediate code based on that generated knowledge. After that, SelfEvolve asks the LLM to act as an expert programmer and debug the generated code. This is achieved by feeding back the error message from the interpreter, without requiring special test cases for correctness verification. We evaluate SelfEvolve on three code generation datasets: DS-1000 for data science code, HumanEval for software engineering code, and TransCoder for C++-to-Python translation. Our empirical experiments show that SelfEvolve outperforms strong baselines by a significant margin on all datasets. We also conduct exhaustive analytical experiments to validate the effectiveness of the two stages of SelfEvolve, and find that both are superior to other prompting-based methods. Further scalability analysis demonstrates that SelfEvolve can be adapted to other more advanced models, such as GPT-4, and brings consistent efficacy improvements.
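For intuition, the sketch below illustrates the two-step loop described in the abstract: knowledge-conditioned generation followed by interpreter-driven self-debugging. The `llm` helper, the prompt wording, the retry limit, and the use of `exec()` with a captured traceback are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a SelfEvolve-style two-step pipeline (assumptions noted above).
import traceback


def llm(prompt: str) -> str:
    """Hypothetical helper: send a prompt to an LLM and return its text reply."""
    raise NotImplementedError("wrap your preferred chat model here")


def self_evolve(problem: str, max_debug_rounds: int = 3) -> str:
    # Step 1: ask the model to write out the knowledge the problem needs
    # (relevant APIs, algorithms, edge cases), then condition the initial
    # code generation on that self-generated knowledge.
    knowledge = llm(f"List the domain knowledge needed to solve:\n{problem}")
    code = llm(
        f"Using this knowledge:\n{knowledge}\n\nWrite Python code for:\n{problem}"
    )

    # Step 2: run the code; if the interpreter raises an error, feed the
    # error message back so the model can debug its own output. No
    # hand-written test cases are required for this loop.
    for _ in range(max_debug_rounds):
        try:
            exec(code, {})  # execute in a fresh namespace
            return code     # no interpreter error: accept this version
        except Exception:
            error_message = traceback.format_exc()
            code = llm(
                f"The following code failed:\n{code}\n\n"
                f"Interpreter error:\n{error_message}\n\n"
                "Act as an expert programmer and return a corrected version."
            )
    return code  # return the last attempt even if it still errors
```

In this reading, the interpreter's error message plays the role that retrieved documents or unit tests play in other pipelines: it is the only external signal the model needs to revise its own code.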
