2023-09-29
Intent-calibrated self-training for answer selection in open-domain dialogues
Publication
Transactions of the Association for Computational Linguistics, Volume 11, pp. 1232–1249
Answer selection in open-domain dialogues aims to select an accurate answer from a set of candidates. The recent success of answer selection models hinges on training with large amounts of labeled data. However, collecting large-scale labeled data is labor-intensive and time-consuming. In this paper, we introduce predicted intent labels to calibrate answer labels in a self-training paradigm. Specifically, we propose intent-calibrated self-training (ICAST), which improves the quality of pseudo answer labels through an intent-calibrated answer selection paradigm in which pseudo intent labels are employed to help improve pseudo answer labels. We carry out extensive experiments on two benchmark datasets with open-domain dialogues. The experimental results show that ICAST consistently outperforms baselines with 1%, 5%, and 10% labeled data. In particular, it improves the F1 score by 2.06% and 1.00% on the two datasets, compared with the strongest baseline, using only 5% labeled data.
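To make the self-training loop described in the abstract concrete, the sketch below shows one plausible round of intent-calibrated self-training: pseudo intent labels are predicted first and then used to condition (calibrate) the pseudo answer labels that are added to the training set. This is a minimal illustration only; the class and method names (`intent_model.predict`, `answer_model.select`, the confidence thresholds) are assumptions for exposition, not the authors' actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Dialogue:
    context: str
    candidates: list[str]            # candidate answers
    answer_idx: int | None = None    # gold or pseudo answer label
    intent: str | None = None        # gold or pseudo intent label

def icast_round(answer_model, intent_model, labeled, unlabeled,
                intent_threshold=0.9, answer_threshold=0.9):
    """One hypothetical self-training round: predict pseudo intent labels,
    then use them to calibrate pseudo answer labels on unlabeled dialogues."""
    pseudo_labeled = []
    for dlg in unlabeled:
        # Step 1: predict a pseudo intent label for the dialogue context.
        intent, p_intent = intent_model.predict(dlg.context)
        if p_intent < intent_threshold:
            continue  # discard low-confidence intent predictions
        # Step 2: select an answer conditioned on the pseudo intent
        # (the intent-calibrated answer selection paradigm).
        idx, p_answer = answer_model.select(dlg.context, dlg.candidates,
                                            intent=intent)
        if p_answer < answer_threshold:
            continue  # keep only confident pseudo answer labels
        pseudo_labeled.append(Dialogue(dlg.context, dlg.candidates, idx, intent))
    # Step 3: retrain both models on gold plus pseudo-labeled data.
    train_data = list(labeled) + pseudo_labeled
    intent_model.fit(train_data)
    answer_model.fit(train_data)
    return answer_model, intent_model, pseudo_labeled
```

In this sketch, repeating `icast_round` over several iterations would grow the pseudo-labeled pool while the intent calibration and confidence thresholds filter out noisy answer labels; the paper's reported gains with 1%, 5%, and 10% labeled data correspond to varying the size of `labeled` relative to `unlabeled`.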
Additional Metadata | |
---|---|
DOI | doi.org/10.1162/tacl_a_00599 |
Journal | Transactions of the Association for Computational Linguistics |
Project | Voice driven interaction in XR spaces |
Organisation | Centrum Wiskunde & Informatica, Amsterdam (CWI), The Netherlands |
Citation | Deng, W., Pei, J., Ren, Z., Chen, Z., & Ren, P. (2023). Intent-calibrated self-training for answer selection in open-domain dialogues. Transactions of the Association for Computational Linguistics, 11, 1232–1249. doi:10.1162/tacl_a_00599 |