Question answering (QA) systems are designed to answer questions based on given information or with the help of external information. Recent advances in QA systems are overwhelmingly contributed by deep learning techniques, which have been employed in a wide range of fields such as finance, sports and biomedicine. For generative QA in open-domain QA, although deep learning can leverage massive data to learn meaningful feature representations and generate free text as answers, there are still problems to limit the length and content of answers. To alleviate this problem, we focus on the variant YNQA of generative QA and propose a model CasATT (cascade prompt learning framework with the sentence-level attention mechanism). In the CasATT, we excavate text semantic information from document level to sentence level and mine evidence accurately from large-scale documents by retrieval and ranking, and answer questions with ranked candidates by discriminative question answering. Our experiments on several datasets demonstrate the superior performance of the CasATT over state-of-the-art baselines, whose accuracy score can achieve 93.1% on IR&QA Competition dataset and 90.5% on BoolQ dataset.
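To make the cascade described in the abstract concrete, the following is a minimal, illustrative sketch of a document-to-sentence evidence pipeline for yes/no question answering. The function name cascade_ynqa and the scoring callables (doc_scorer, sent_scorer, yes_no_classifier) are hypothetical placeholders standing in for learned retrieval, ranking, and prompt-based answering components; this is not the authors' CasATT implementation.

from typing import Callable, List


def cascade_ynqa(
    question: str,
    corpus: List[str],
    doc_scorer: Callable[[str, str], float],
    sent_scorer: Callable[[str, str], float],
    yes_no_classifier: Callable[[str, List[str]], bool],
    top_k_docs: int = 5,
    top_k_sents: int = 3,
) -> bool:
    """Answer a yes/no question by cascading from document-level to sentence-level evidence."""
    # Stage 1: document-level retrieval -- keep the documents most relevant to the question.
    docs = sorted(corpus, key=lambda d: doc_scorer(question, d), reverse=True)[:top_k_docs]

    # Stage 2: sentence-level ranking -- split retrieved documents into sentences and rank them.
    sentences = [s.strip() for d in docs for s in d.split(".") if s.strip()]
    evidence = sorted(sentences, key=lambda s: sent_scorer(question, s), reverse=True)[:top_k_sents]

    # Stage 3: discriminative answering -- decide yes/no from the question and ranked evidence.
    return yes_no_classifier(question, evidence)


if __name__ == "__main__":
    # Toy word-overlap scorers standing in for learned retrieval/ranking models.
    overlap = lambda q, t: float(len(set(q.lower().split()) & set(t.lower().split())))
    answer = cascade_ynqa(
        question="Is the sky blue?",
        corpus=["The sky is blue on a clear day. Grass is green.", "Bananas are yellow."],
        doc_scorer=overlap,
        sent_scorer=overlap,
        yes_no_classifier=lambda q, ev: any("blue" in s for s in ev),  # placeholder decision rule
    )
    print(answer)  # True

In the paper's setting, the two scorers would correspond to the retrieval and sentence-level ranking stages, and the classifier to the discriminative, prompt-based answering over the ranked candidates.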
Xiaoguang YUAN
National University of Defense Technology, Beijing Institute of Computer Technology and Application
Chaofan DAI
National University of Defense Technology
Zongkai TIAN
Beijing Institute of Computer Technology and Application
Xinyu FAN
Beijing Institute of Computer Technology and Application
Yingyi SONG
Beijing Institute of Computer Technology and Application
Zengwen YU
Beijing Institute of Computer Technology and Application, Xidian University
Peng WANG
Southeast University
Wenjun KE
Southeast University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Xiaoguang YUAN, Chaofan DAI, Zongkai TIAN, Xinyu FAN, Yingyi SONG, Zengwen YU, Peng WANG, Wenjun KE, "Discriminative Question Answering via Cascade Prompt Learning and Sentence Level Attention Mechanism" in IEICE TRANSACTIONS on Information,
vol. E106-D, no. 9, pp. 1584-1599, September 2023, doi: 10.1587/transinf.2022EDP7225.
Abstract: Question answering (QA) systems are designed to answer questions based on given information or with the help of external information. Recent advances in QA systems are overwhelmingly contributed by deep learning techniques, which have been employed in a wide range of fields such as finance, sports and biomedicine. For generative QA in open-domain QA, although deep learning can leverage massive data to learn meaningful feature representations and generate free text as answers, there are still problems to limit the length and content of answers. To alleviate this problem, we focus on the variant YNQA of generative QA and propose a model CasATT (cascade prompt learning framework with the sentence-level attention mechanism). In the CasATT, we excavate text semantic information from document level to sentence level and mine evidence accurately from large-scale documents by retrieval and ranking, and answer questions with ranked candidates by discriminative question answering. Our experiments on several datasets demonstrate the superior performance of the CasATT over state-of-the-art baselines, whose accuracy score can achieve 93.1% on IR&QA Competition dataset and 90.5% on BoolQ dataset.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2022EDP7225/_p
@ARTICLE{e106-d_9_1584,
author={Xiaoguang YUAN and Chaofan DAI and Zongkai TIAN and Xinyu FAN and Yingyi SONG and Zengwen YU and Peng WANG and Wenjun KE},
journal={IEICE TRANSACTIONS on Information},
title={Discriminative Question Answering via Cascade Prompt Learning and Sentence Level Attention Mechanism},
year={2023},
volume={E106-D},
number={9},
pages={1584-1599},
abstract={Question answering (QA) systems are designed to answer questions based on given information or with the help of external information. Recent advances in QA systems are overwhelmingly contributed by deep learning techniques, which have been employed in a wide range of fields such as finance, sports and biomedicine. For generative QA in open-domain QA, although deep learning can leverage massive data to learn meaningful feature representations and generate free text as answers, there are still problems to limit the length and content of answers. To alleviate this problem, we focus on the variant YNQA of generative QA and propose a model CasATT (cascade prompt learning framework with the sentence-level attention mechanism). In the CasATT, we excavate text semantic information from document level to sentence level and mine evidence accurately from large-scale documents by retrieval and ranking, and answer questions with ranked candidates by discriminative question answering. Our experiments on several datasets demonstrate the superior performance of the CasATT over state-of-the-art baselines, whose accuracy score can achieve 93.1% on IR&QA Competition dataset and 90.5% on BoolQ dataset.},
keywords={},
doi={10.1587/transinf.2022EDP7225},
ISSN={1745-1361},
month={September},}
TY - JOUR
TI - Discriminative Question Answering via Cascade Prompt Learning and Sentence Level Attention Mechanism
T2 - IEICE TRANSACTIONS on Information
SP - 1584
EP - 1599
AU - Xiaoguang YUAN
AU - Chaofan DAI
AU - Zongkai TIAN
AU - Xinyu FAN
AU - Yingyi SONG
AU - Zengwen YU
AU - Peng WANG
AU - Wenjun KE
PY - 2023
DO - 10.1587/transinf.2022EDP7225
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E106-D
IS - 9
JA - IEICE TRANSACTIONS on Information
Y1 - September 2023
AB - Question answering (QA) systems are designed to answer questions based on given information or with the help of external information. Recent advances in QA systems are overwhelmingly contributed by deep learning techniques, which have been employed in a wide range of fields such as finance, sports and biomedicine. For generative QA in open-domain QA, although deep learning can leverage massive data to learn meaningful feature representations and generate free text as answers, there are still problems to limit the length and content of answers. To alleviate this problem, we focus on the variant YNQA of generative QA and propose a model CasATT (cascade prompt learning framework with the sentence-level attention mechanism). In the CasATT, we excavate text semantic information from document level to sentence level and mine evidence accurately from large-scale documents by retrieval and ranking, and answer questions with ranked candidates by discriminative question answering. Our experiments on several datasets demonstrate the superior performance of the CasATT over state-of-the-art baselines, whose accuracy score can achieve 93.1% on IR&QA Competition dataset and 90.5% on BoolQ dataset.
ER -