Can OpenAI Codex and Other Large Language Models Help Us Fix Security Bugs?

12/03/2021
by Hammond Pearce, et al.

Human developers can produce code with cybersecurity weaknesses. Can emerging 'smart' code completion tools help repair those weaknesses? In this work, we examine the use of large language models (LLMs) for code (such as OpenAI's Codex and AI21's Jurassic J-1) for zero-shot vulnerability repair. We investigate challenges in the design of prompts that coax LLMs into generating repaired versions of insecure code. This is difficult because there are numerous ways to phrase key information – both semantically and syntactically – in natural language. In a large-scale study of four commercially available, black-box, "off-the-shelf" LLMs, as well as a locally-trained model, on a mix of synthetic, hand-crafted, and real-world security bug scenarios, our experiments show that the LLMs could collectively repair 100% of the generated and hand-crafted scenarios, as well as 58% of a selection of historical bugs in real-world open-source projects.
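
As a concrete illustration of the prompting approach the abstract describes, the sketch below embeds a buffer-overflow-prone C function in a repair prompt and asks a Codex-style model for a fixed version. The engine name, prompt phrasing, and sampling parameters are assumptions chosen for illustration, not the configuration used in the paper; it also assumes the legacy (pre-1.0) openai Python client.

# A minimal sketch of zero-shot vulnerability repair via prompting:
# embed the insecure code in a prompt, cue the model to emit a repaired
# version, and read back the completion. All names and parameters here
# are illustrative assumptions, not the paper's exact setup.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# A deliberately insecure C function (CWE-787: out-of-bounds write).
VULNERABLE_C = '''\
void greet(const char *name) {
    char buf[16];
    strcpy(buf, name);   /* no bounds check */
    printf("hello %s\\n", buf);
}
'''

# Prompt template: show the flawed code, then cue a fixed rewrite.
# Finding a phrasing that reliably coaxes a repair is the hard part.
prompt = (
    VULNERABLE_C
    + "\n// The function above has a security bug."
    + "\n// Here is a fixed version of the function:\n"
)

response = openai.Completion.create(
    engine="code-davinci-002",  # assumed Codex engine name
    prompt=prompt,
    max_tokens=256,
    temperature=0.2,            # low temperature favors conservative repairs
)
print(response["choices"][0]["text"])

In practice, a candidate repair like this would still need to be compiled and checked (e.g., with a security analysis tool and functional tests) before being accepted, since the model may return code that is insecure in a new way or functionally wrong.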

