Copyright notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Jianfeng XU, Haruhisa KATO, Akio YONEYAMA, "Content-Based Retrieval of Motion Capture Data Using Short-Term Feature Extraction" in IEICE TRANSACTIONS on Information,
vol. E92-D, no. 9, pp. 1657-1667, September 2009, doi: 10.1587/transinf.E92.D.1657.
Abstract: This paper presents a content-based retrieval algorithm for motion capture data, which is required to re-use a large-scale database that has many variations in the same category of motions. The most challenging problem is that logically similar motions may not be numerically similar due to the motion variations in a category. Our algorithm can effectively retrieve logically similar motions to a query, where a distance metric between our novel short-term features is defined properly as a fundamental component in our system. We extract the features based on short-term analysis of joint velocities after dividing an entire motion capture sequence into many small overlapped clips. In each clip, we select not only the magnitude but also the dynamic pattern of the joint velocities as our features, which can discard the motion variations while keeping the significant motion information in a category. Simultaneously, the amount of data is reduced, alleviating the computational cost. Using the extracted features, we define a novel distance metric between two motion clips. By dynamic time warping, a motion dissimilarity measure is calculated between two motion capture sequences. Then, given a query, we rank all the motions in our dataset according to their motion dissimilarity measures. Our experiments, which are performed on a test dataset consisting of more than 190 motions, demonstrate that our algorithm greatly improves the performance compared to two conventional methods according to a popular evaluation measure P(NR).
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.E92.D.1657/_p
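The abstract outlines a pipeline: split a motion capture sequence into small overlapped clips, extract short-term features of the joint velocities (magnitude plus dynamic pattern), compare sequences with dynamic time warping, and rank the dataset by dissimilarity. A minimal NumPy sketch of that pipeline is below; all array shapes, window sizes, and feature definitions here are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def short_term_features(positions, fps=120, clip_len=16, hop=8):
    """Divide a motion capture sequence into overlapped clips and extract
    short-term joint-velocity features.

    positions : (frames, joints, 3) array of joint positions.
    Returns a (clips, features) array.
    """
    # Joint velocities by finite differences -> per-joint speed.
    vel = np.diff(positions, axis=0) * fps          # (frames-1, joints, 3)
    speed = np.linalg.norm(vel, axis=2)             # (frames-1, joints)

    feats = []
    for start in range(0, speed.shape[0] - clip_len + 1, hop):
        clip = speed[start:start + clip_len]        # (clip_len, joints)
        # Magnitude: mean speed of each joint over the clip.
        mag = clip.mean(axis=0)
        # Dynamic pattern: normalized temporal profile, averaged over joints.
        profile = clip / (clip.sum(axis=0, keepdims=True) + 1e-8)
        feats.append(np.concatenate([mag, profile.mean(axis=1)]))
    return np.asarray(feats)

def dtw_dissimilarity(f1, f2):
    """Length-normalized dynamic time warping distance between two
    sequences of per-clip feature vectors."""
    n, m = len(f1), len(f2)
    d = np.full((n + 1, m + 1), np.inf)
    d[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(f1[i - 1] - f2[j - 1])
            d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
    return d[n, m] / (n + m)

def rank_dataset(query_feats, dataset_feats):
    """Rank dataset motions by dissimilarity to the query, most similar first."""
    scores = [dtw_dissimilarity(query_feats, f) for f in dataset_feats]
    return np.argsort(scores)
```

The clip-level features stand in for the paper's magnitude and dynamic-pattern descriptors; any Euclidean-comparable feature vector would slot into the same DTW-and-rank structure.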
@ARTICLE{e92-d_9_1657,
author={Jianfeng XU and Haruhisa KATO and Akio YONEYAMA},
journal={IEICE TRANSACTIONS on Information},
title={Content-Based Retrieval of Motion Capture Data Using Short-Term Feature Extraction},
year={2009},
volume={E92-D},
number={9},
pages={1657-1667},
abstract={This paper presents a content-based retrieval algorithm for motion capture data, which is required to re-use a large-scale database that has many variations in the same category of motions. The most challenging problem is that logically similar motions may not be numerically similar due to the motion variations in a category. Our algorithm can effectively retrieve logically similar motions to a query, where a distance metric between our novel short-term features is defined properly as a fundamental component in our system. We extract the features based on short-term analysis of joint velocities after dividing an entire motion capture sequence into many small overlapped clips. In each clip, we select not only the magnitude but also the dynamic pattern of the joint velocities as our features, which can discard the motion variations while keeping the significant motion information in a category. Simultaneously, the amount of data is reduced, alleviating the computational cost. Using the extracted features, we define a novel distance metric between two motion clips. By dynamic time warping, a motion dissimilarity measure is calculated between two motion capture sequences. Then, given a query, we rank all the motions in our dataset according to their motion dissimilarity measures. Our experiments, which are performed on a test dataset consisting of more than 190 motions, demonstrate that our algorithm greatly improves the performance compared to two conventional methods according to a popular evaluation measure P(NR).},
keywords={},
doi={10.1587/transinf.E92.D.1657},
ISSN={1745-1361},
month={September},}
TY - JOUR
TI - Content-Based Retrieval of Motion Capture Data Using Short-Term Feature Extraction
T2 - IEICE TRANSACTIONS on Information
SP - 1657
EP - 1667
AU - Jianfeng XU
AU - Haruhisa KATO
AU - Akio YONEYAMA
PY - 2009
DO - 10.1587/transinf.E92.D.1657
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E92-D
IS - 9
JA - IEICE TRANSACTIONS on Information
Y1 - September 2009
AB - This paper presents a content-based retrieval algorithm for motion capture data, which is required to re-use a large-scale database that has many variations in the same category of motions. The most challenging problem is that logically similar motions may not be numerically similar due to the motion variations in a category. Our algorithm can effectively retrieve logically similar motions to a query, where a distance metric between our novel short-term features is defined properly as a fundamental component in our system. We extract the features based on short-term analysis of joint velocities after dividing an entire motion capture sequence into many small overlapped clips. In each clip, we select not only the magnitude but also the dynamic pattern of the joint velocities as our features, which can discard the motion variations while keeping the significant motion information in a category. Simultaneously, the amount of data is reduced, alleviating the computational cost. Using the extracted features, we define a novel distance metric between two motion clips. By dynamic time warping, a motion dissimilarity measure is calculated between two motion capture sequences. Then, given a query, we rank all the motions in our dataset according to their motion dissimilarity measures. Our experiments, which are performed on a test dataset consisting of more than 190 motions, demonstrate that our algorithm greatly improves the performance compared to two conventional methods according to a popular evaluation measure P(NR).
ER -