The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations. ex. Some numerals are expressed as "XNUMX".
A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former is represented by vectors with psychologically-defined abstract dimensions, and the latter is coded by the Facial Action Coding System. In order to obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained with the data. The effectiveness of the proposed method was verified by a subjective evaluation test. As a result, the mean opinion score with respect to the suitability of the generated facial expressions was 3.86 for the speaker, which was close to that of hand-made facial expressions.
conversation, paralinguistic information, emotional states, Facial Action Coding System, avatar
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Hiroki MORI, Koh OHSHIMA, "Facial Expression Generation from Speaker's Emotional States in Daily Conversation" in IEICE TRANSACTIONS on Information,
vol. E91-D, no. 6, pp. 1628-1633, June 2008, doi: 10.1093/ietisy/e91-d.6.1628.
Abstract: A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former is represented by vectors with psychologically-defined abstract dimensions, and the latter is coded by the Facial Action Coding System. In order to obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained with the data. The effectiveness of the proposed method is verified by a subjective evaluation test. As a result, the Mean Opinion Score with respect to the suitability of generated facial expressions was 3.86 for the speaker, which was close to that of hand-made facial expressions.
URL: https://global.ieice.org/en_transactions/information/10.1093/ietisy/e91-d.6.1628/_p
@ARTICLE{e91-d_6_1628,
author={Hiroki MORI and Koh OHSHIMA},
journal={IEICE TRANSACTIONS on Information},
title={Facial Expression Generation from Speaker's Emotional States in Daily Conversation},
year={2008},
volume={E91-D},
number={6},
pages={1628-1633},
abstract={A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former is represented by vectors with psychologically-defined abstract dimensions, and the latter is coded by the Facial Action Coding System. In order to obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained with the data. The effectiveness of the proposed method is verified by a subjective evaluation test. As a result, the Mean Opinion Score with respect to the suitability of generated facial expressions was 3.86 for the speaker, which was close to that of hand-made facial expressions.},
keywords={conversation, paralinguistic information, emotional states, Facial Action Coding System, avatar},
doi={10.1093/ietisy/e91-d.6.1628},
ISSN={1745-1361},
month={June},}
TY - JOUR
TI - Facial Expression Generation from Speaker's Emotional States in Daily Conversation
T2 - IEICE TRANSACTIONS on Information
SP - 1628
EP - 1633
AU - Hiroki MORI
AU - Koh OHSHIMA
PY - 2008
DO - 10.1093/ietisy/e91-d.6.1628
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E91-D
IS - 6
JA - IEICE TRANSACTIONS on Information
Y1 - June 2008
AB - A framework for generating facial expressions from emotional states in daily conversation is described. It provides a mapping between emotional states and facial expressions, where the former is represented by vectors with psychologically-defined abstract dimensions, and the latter is coded by the Facial Action Coding System. In order to obtain the mapping, parallel data with rated emotional states and facial expressions were collected for utterances of a female speaker, and a neural network was trained with the data. The effectiveness of the proposed method is verified by a subjective evaluation test. As a result, the Mean Opinion Score with respect to the suitability of generated facial expressions was 3.86 for the speaker, which was close to that of hand-made facial expressions.
ER -