Learning Autocompletion from Real-World Datasets

11/09/2020
by Gareth Ari Aye, et al.

Code completion is a popular software development tool integrated into all major IDEs. Many neural language models have achieved promising results in completion suggestion prediction on synthetic benchmarks. However, a recent study, "When Code Completion Fails: a Case Study on Real-World Completions", demonstrates that these results may not translate to improvements in real-world performance. To combat this effect, we train models on real-world code completion examples and find that these models outperform models trained on committed source code and working version snapshots by 12.8% and 13.8% accuracy respectively. We observe this improvement across modeling technologies and show through A/B testing that it corresponds to a 6.2% increase in actual autocompletion usage. Furthermore, our study characterizes a large corpus of logged autocompletion usages to investigate why training on real-world examples leads to stronger models.
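The key distinction the abstract draws is between training examples sampled from committed source code and examples captured from real completion events in the IDE. The following minimal Python sketch illustrates that difference; it is not code from the paper, and the data shapes, field names (`context`, `accepted`), and whitespace tokenization are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class CompletionExample:
    prefix: str  # code context before the cursor
    label: str   # token the model should learn to predict

def example_from_commit(source: str, cursor: int) -> CompletionExample:
    # Synthetic example: split committed code at an arbitrary token position,
    # as a benchmark built from repository snapshots might do.
    tokens = source.split()
    return CompletionExample(prefix=" ".join(tokens[:cursor]), label=tokens[cursor])

def example_from_log(event: dict) -> CompletionExample:
    # Real-world example: the context is the actual (possibly incomplete,
    # non-compiling) editor buffer, and the label is the suggestion the
    # developer accepted, both taken from a logged completion event.
    return CompletionExample(prefix=event["context"], label=event["accepted"])

# Usage with hypothetical data:
synthetic = example_from_commit("x = foo . bar ( baz )", cursor=4)
logged = example_from_log({"context": "x = foo.bar(", "accepted": "baz"})
```

The point of the contrast is that synthetic splits of committed code assume a polished, compilable context, whereas logged events reflect the messier states in which developers actually invoke completion, which is the gap the paper's real-world training data is meant to close.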

