Conference Paper, Year: 2024

Out of Context: How important is Local Context in Neural Program Repair?

Abstract

Deep learning source code models have been applied very successfully to the problem of automated program repair. One of the open issues is the small input window of current models, which often cannot fully fit the context code required for a bug fix (e.g., method or class declarations of a project). Instead, input is often restricted to the local context, that is, the lines above and below the bug location. In this work, we study the importance of this local context for repair success: How much local context is needed? Is context before or after the bug location more important? How is local context tied to the bug type? To answer these questions, we train and evaluate Transformer models in many different local-context configurations on three datasets and two programming languages. Our results indicate that overall repair success increases with the size of the local context (albeit not for all bug types) and confirm the common practice of devoting roughly 50-60% of the input window to context leading the bug. Our results are relevant not only for researchers working on Transformer-based APR tools but also for benchmark and dataset creators, who must decide what and how much context to include in their datasets.
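
To make the notion of local context concrete, here is a minimal Python sketch (not the authors' code; the function name, line-based budgeting, and the 60/40 default split are illustrative assumptions) of how a fixed input window might be divided between context before and after a one-line bug:

    def extract_local_context(lines, bug_line, window=50, leading_fraction=0.6):
        """Split a line budget into context above and below `bug_line`.

        `window` is the total number of context lines; `leading_fraction`
        is the share spent on lines *before* the bug (a hypothetical
        default of 0.6, mirroring the 50-60% practice discussed in the
        abstract).
        """
        before_budget = int(window * leading_fraction)
        after_budget = window - before_budget
        start = max(0, bug_line - before_budget)
        end = min(len(lines), bug_line + 1 + after_budget)
        return lines[start:bug_line], lines[bug_line], lines[bug_line + 1:end]

For example, with window=50 and leading_fraction=0.6, up to 30 lines above and 20 lines below the bug location would be kept as model input.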
Main file
2312.04986v1.pdf (1.64 MB)
Origin: Files produced by the author(s)

Dates and versions

hal-04797549, version 1 (22-11-2024)

Identifiers

Cite

Julian Aron Prenner, Romain Robbes. Out of Context: How important is Local Context in Neural Program Repair?. 46th IEEE/ACM International Conference on Software Engineering, Apr 2024, Lisbon, Portugal. ⟨10.48550/arXiv.2312.04986⟩. ⟨hal-04797549⟩

Collections

CNRS UNIV-BORDEAUX