Detecting software vulnerabilities using Language Models
Recently, deep learning techniques have garnered substantial attention for their ability to identify vulnerable code patterns accurately. However, current state-of-the-art deep learning models, such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks, require substantial computational resources, resulting in a level of overhead that makes their deployment in real-time settings unfeasible. This study presents a novel transformer-based vulnerability detection framework, referred to as VulDetect, built by fine-tuning a pre-trained large language model (GPT) on various benchmark datasets of vulnerable code. Our empirical findings indicate that the framework identifies vulnerable software code with an accuracy of up to 92.65%, outperforming SyseVR and VulDeBERT, two state-of-the-art vulnerability detection techniques.
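A minimal sketch of what such a fine-tuning setup could look like, using the Hugging Face transformers library with GPT-2 and a binary classification head (vulnerable vs. benign). The model size, hyper-parameters, and the toy data below are illustrative assumptions, not details taken from the paper.

```python
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import GPT2Tokenizer, GPT2ForSequenceClassification

# Assumed base model; the paper only states that a pre-trained GPT is fine-tuned.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token           # GPT-2 has no pad token by default

model = GPT2ForSequenceClassification.from_pretrained("gpt2", num_labels=2)
model.config.pad_token_id = tokenizer.pad_token_id

class CodeDataset(Dataset):
    """Wraps (source_code, label) pairs as tokenized tensors."""
    def __init__(self, samples, labels, max_len=512):
        self.enc = tokenizer(samples, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)
    def __len__(self):
        return len(self.labels)
    def __getitem__(self, i):
        return {"input_ids": self.enc["input_ids"][i],
                "attention_mask": self.enc["attention_mask"][i],
                "labels": self.labels[i]}

# In practice these would come from a benchmark corpus of vulnerable code;
# here a single toy snippet stands in (1 = vulnerable).
samples = ["int main(){ char buf[8]; gets(buf); return 0; }"]
labels  = [1]
loader = DataLoader(CodeDataset(samples, labels), batch_size=1, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):                              # assumed number of epochs
    for batch in loader:
        optimizer.zero_grad()
        loss = model(**batch).loss                  # cross-entropy over the 2 classes
        loss.backward()
        optimizer.step()
```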