The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations (e.g., some numerals may be rendered as "XNUMX").
This study explores significant eye-gaze features that can be used to estimate subjective difficulty while reading educational comics. Educational comics have grown rapidly as a promising way to teach difficult topics using illustrations and texts. However, comics include a variety of information on one page, so automatically detecting learners' states such as subjective difficulty is difficult with approaches such as system log-based detection, which is common in the Learning Analytics field. In order to solve this problem, this study focused on 28 eye-gaze features, including the proposal of three new features called “Variance in Gaze Convergence,” “Movement between Panels,” and “Movement between Tiles” to estimate two degrees of subjective difficulty. We then ran an experiment in a simulated environment using Virtual Reality (VR) to accurately collect gaze information. We extracted features in two unit levels, page- and panel-units, and evaluated the accuracy with each pattern in user-dependent and user-independent settings, respectively. Our proposed features achieved an average F1 classification-score of 0.721 and 0.742 in user-dependent and user-independent models at panel unit levels, respectively, trained by a Support Vector Machine (SVM).
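The evaluation setup described in the abstract (an SVM classifier over gaze features, scored with F1 in user-dependent and user-independent settings) can be sketched as follows. This is an illustrative reconstruction only: the feature values are synthetic, the reader IDs are invented, and the user-independent split is approximated with leave-one-reader-out cross-validation; it is not the authors' code or data.

```python
# Illustrative sketch of the evaluation protocol, with synthetic data.
# Assumptions (not from the paper's implementation): RBF-kernel SVM,
# z-score feature scaling, and leave-one-reader-out as the
# user-independent split.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import f1_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_samples, n_features = 120, 28               # 28 gaze features, as in the paper
X = rng.normal(size=(n_samples, n_features))  # synthetic stand-in features
y = rng.integers(0, 2, size=n_samples)        # two degrees of subjective difficulty
groups = rng.integers(0, 6, size=n_samples)   # hypothetical reader ID per sample

# User-independent evaluation: each fold holds out one reader entirely,
# so the model is always tested on a person it never saw during training.
scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X[train_idx], y[train_idx])
    scores.append(f1_score(y[test_idx], clf.predict(X[test_idx]), average="macro"))

print(f"mean F1 over held-out readers: {np.mean(scores):.3f}")
```

A user-dependent model would instead train and test within each reader's own data (e.g., a per-reader k-fold split), which is the other setting the abstract reports.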
Kenya SAKAMOTO
Osaka University
Shizuka SHIRAI
Osaka University
Noriko TAKEMURA
Osaka University,Kyushu Institute of Technology
Jason ORLOSKY
Osaka University,Augusta University
Hiroyuki NAGATAKI
Osaka University
Mayumi UEDA
Osaka University,University of Marketing and Distribution Sciences
Yuki URANISHI
Osaka University
Haruo TAKEMURA
Osaka University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Kenya SAKAMOTO, Shizuka SHIRAI, Noriko TAKEMURA, Jason ORLOSKY, Hiroyuki NAGATAKI, Mayumi UEDA, Yuki URANISHI, Haruo TAKEMURA, "Subjective Difficulty Estimation of Educational Comics Using Gaze Features" in IEICE TRANSACTIONS on Information,
vol. E106-D, no. 5, pp. 1038-1048, May 2023, doi: 10.1587/transinf.2022EDP7100.
Abstract: This study explores significant eye-gaze features that can be used to estimate subjective difficulty while reading educational comics. Educational comics have grown rapidly as a promising way to teach difficult topics using illustrations and texts. However, comics include a variety of information on one page, so automatically detecting learners' states such as subjective difficulty is difficult with approaches such as system log-based detection, which is common in the Learning Analytics field. In order to solve this problem, this study focused on 28 eye-gaze features, including the proposal of three new features called “Variance in Gaze Convergence,” “Movement between Panels,” and “Movement between Tiles” to estimate two degrees of subjective difficulty. We then ran an experiment in a simulated environment using Virtual Reality (VR) to accurately collect gaze information. We extracted features in two unit levels, page- and panel-units, and evaluated the accuracy with each pattern in user-dependent and user-independent settings, respectively. Our proposed features achieved an average F1 classification-score of 0.721 and 0.742 in user-dependent and user-independent models at panel unit levels, respectively, trained by a Support Vector Machine (SVM).
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2022EDP7100/_p
@ARTICLE{e106-d_5_1038,
author={Kenya SAKAMOTO and Shizuka SHIRAI and Noriko TAKEMURA and Jason ORLOSKY and Hiroyuki NAGATAKI and Mayumi UEDA and Yuki URANISHI and Haruo TAKEMURA},
journal={IEICE TRANSACTIONS on Information},
title={Subjective Difficulty Estimation of Educational Comics Using Gaze Features},
year={2023},
volume={E106-D},
number={5},
pages={1038-1048},
abstract={This study explores significant eye-gaze features that can be used to estimate subjective difficulty while reading educational comics. Educational comics have grown rapidly as a promising way to teach difficult topics using illustrations and texts. However, comics include a variety of information on one page, so automatically detecting learners' states such as subjective difficulty is difficult with approaches such as system log-based detection, which is common in the Learning Analytics field. In order to solve this problem, this study focused on 28 eye-gaze features, including the proposal of three new features called “Variance in Gaze Convergence,” “Movement between Panels,” and “Movement between Tiles” to estimate two degrees of subjective difficulty. We then ran an experiment in a simulated environment using Virtual Reality (VR) to accurately collect gaze information. We extracted features in two unit levels, page- and panel-units, and evaluated the accuracy with each pattern in user-dependent and user-independent settings, respectively. Our proposed features achieved an average F1 classification-score of 0.721 and 0.742 in user-dependent and user-independent models at panel unit levels, respectively, trained by a Support Vector Machine (SVM).},
keywords={},
doi={10.1587/transinf.2022EDP7100},
ISSN={1745-1361},
month={May},}
TY - JOUR
TI - Subjective Difficulty Estimation of Educational Comics Using Gaze Features
T2 - IEICE TRANSACTIONS on Information
SP - 1038
EP - 1048
AU - Kenya SAKAMOTO
AU - Shizuka SHIRAI
AU - Noriko TAKEMURA
AU - Jason ORLOSKY
AU - Hiroyuki NAGATAKI
AU - Mayumi UEDA
AU - Yuki URANISHI
AU - Haruo TAKEMURA
PY - 2023
DO - 10.1587/transinf.2022EDP7100
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E106-D
IS - 5
JA - IEICE TRANSACTIONS on Information
Y1 - May 2023
AB - This study explores significant eye-gaze features that can be used to estimate subjective difficulty while reading educational comics. Educational comics have grown rapidly as a promising way to teach difficult topics using illustrations and texts. However, comics include a variety of information on one page, so automatically detecting learners' states such as subjective difficulty is difficult with approaches such as system log-based detection, which is common in the Learning Analytics field. In order to solve this problem, this study focused on 28 eye-gaze features, including the proposal of three new features called “Variance in Gaze Convergence,” “Movement between Panels,” and “Movement between Tiles” to estimate two degrees of subjective difficulty. We then ran an experiment in a simulated environment using Virtual Reality (VR) to accurately collect gaze information. We extracted features in two unit levels, page- and panel-units, and evaluated the accuracy with each pattern in user-dependent and user-independent settings, respectively. Our proposed features achieved an average F1 classification-score of 0.721 and 0.742 in user-dependent and user-independent models at panel unit levels, respectively, trained by a Support Vector Machine (SVM).
ER -