Knowledge-Guided Prompt Learning for Request Quality Assurance in Public Code Review
Summary of differences from the previously published work, titled "On Unified Prompt Tuning for Request Quality Assurance in Public Code Review" (DASFAA 2024), which is cited in this submission as [11].
We have made this summary of differences explicit in the submission (the sixth paragraph of the Introduction section), as stated below:
This paper extends the preliminary version of our study presented in [11]. In comparison, this extended version (1) introduces external knowledge from Wikipedia to generate prefix vectors as a soft prompt for representing code segments, (2) designs knowledge-guided prompt learning for the request necessity prediction and tag recommendation subtasks to show that knowledge guidance can further improve the request quality of public code review (PCR), (3) comprehensively evaluates the effects of our task-descriptive prompt template to explore the influence of external knowledge on the unified modelling of the two subtasks, (4) analyzes the time complexity to show that our KP-PCR improves task accuracy on both subtasks while maintaining the model's overall efficiency, and (5) conducts a case study investigating the performance of our prompt template on large language models. In addition, we examine the challenges of designing hard and soft prompt templates and delve further into the unified modelling originally explored in the preliminary study.
In the following, we pinpoint the key differences in motivation, methodology, and results compared with [11].
A. Motivation
The submission significantly extends the previous motivation in [11] by:
• adding a motivating example in Section 2 that illustrates the current situation and open problems in this research field, thereby motivating our study.
B. Methodology
The submission significantly extends the previous methodology in [11] by:
• introducing an external knowledge base, used as a soft prompt to generate prefix vectors, as shown in Fig. 4 (see the sketch after this list).
• adding three research questions (RQs) to the experimental setup in Section 5.
• adding the knowledge-guided prompt template design in Section 6.3.
• adding a case study in Section 6.4.
C. Results
The submission significantly extends the previous results in [11] by:
• adding a time complexity analysis of the method in Section 4.3.1.
• reporting the task accuracy of four different prompt templates.
• comparing the performance of our unified prompt template with that of large language models (see the sketch after this list).
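As a rough illustration of what such a comparison involves, the snippet below assembles a unified, task-descriptive cloze prompt covering both subtasks that could then be posed to a large language model. The template wording and field names are hypothetical stand-ins, not the paper's actual template:

```python
# Hypothetical unified cloze-style template for the two PCR subtasks;
# the wording is an illustrative assumption, not the paper's template.
TEMPLATE = (
    "Review request title: {title}\n"
    "Description: {description}\n"
    "Code segment: {code}\n"
    "Question: Is this review request necessary? Answer: [MASK]\n"
    "Question: Which tags describe this request? Tags: [MASK]"
)

def build_prompt(title: str, description: str, code: str) -> str:
    """Fill the shared template so a single prompt covers both the request
    necessity prediction and tag recommendation subtasks."""
    return TEMPLATE.format(title=title, description=description, code=code)

prompt = build_prompt(
    title="Refactor connection pooling",
    description="Please check whether the retry logic is thread-safe.",
    code="def acquire(self):\n    with self._lock:\n        ...",
)
print(prompt)  # this text can then be sent to an LLM for the case study
```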