Is GitHub's Copilot as Bad As Humans at Introducing Vulnerabilities in Code?

04/10/2022
by Owura Asare, et al.

Several advances in deep learning have been successfully applied to the software development process. Of recent interest is the use of neural language models to build tools that assist in writing code. There is a growing body of work evaluating these tools and their underlying language models. We aim to contribute to this line of research via a comparative empirical analysis of these tools and language models from a security perspective. For the rest of this paper, we use CGT (Code Generation Tool) to refer to language models as well as tools, such as Copilot, that are built with language models. This study compares the performance of one CGT, Copilot, with the performance of human developers. Specifically, we investigate whether Copilot is just as likely as human developers to introduce the same software vulnerabilities. We use the Big-Vul dataset proposed by Fan et al., a dataset of vulnerabilities introduced by human developers. For each entry in the dataset, we recreate the scenario as it existed just before the vulnerability was introduced and allow Copilot to generate a completion. Three independent coders manually inspect each completion and classify it as (1) containing the same vulnerability introduced by the human developer, (2) containing a fix for the vulnerability, or (3) other. The "other" category serves as a catchall for scenarios that are out of scope for this project.
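As a minimal sketch of the scenario-recreation step described above: Big-Vul records the commit that fixed each CVE, so the parent of that commit still contains the vulnerable, human-written code, and truncating the affected file just before the vulnerable region yields a prompt whose original continuation is exactly what the human developer wrote. The snippet below illustrates one plausible way to build such a prompt; the entry field names (fix_commit_id, file_path, vul_start_line) and the generate_completion wrapper are hypothetical stand-ins, since the abstract does not specify an implementation.

    import subprocess
    from pathlib import Path

    def reconstruct_prompt(entry, repo_dir):
        """Rebuild the code context just before a vulnerability appeared.

        `entry` is a Big-Vul-style record; all field names used here are
        hypothetical stand-ins for the dataset's actual columns.
        """
        # Big-Vul records the commit that *fixed* the CVE; its parent
        # therefore contains the vulnerable, human-written version.
        vulnerable_rev = entry["fix_commit_id"] + "^"
        subprocess.run(
            ["git", "-C", str(repo_dir), "checkout", vulnerable_rev],
            check=True,
        )
        lines = Path(repo_dir, entry["file_path"]).read_text(errors="replace").splitlines()
        # Keep everything up to the first vulnerable line (1-indexed):
        # this prefix is the prompt handed to the CGT for completion.
        return "\n".join(lines[: entry["vul_start_line"] - 1])

    # Hypothetical usage: generate a completion and hand it to the
    # human coders for classification.
    # prompt = reconstruct_prompt(entry, "repos/linux")
    # completion = generate_completion(prompt)  # assumed CGT wrapper
    # Coders then label it: same vulnerability, fixed, or other.

A sketch like this only approximates "the scenario before the bug was introduced": the file's surrounding state comes from the vulnerable revision itself, and comparing the generated completion against the human-written continuation is what enables the three-way classification.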

