The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations (e.g., some numerals may appear as "XNUMX").
Copyrights notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Masahiro TSUKADA, Yuya UTSUMI, Hirokazu MADOKORO, Kazuhito SATO, "Unsupervised Feature Selection and Category Classification for a Vision-Based Mobile Robot" in IEICE TRANSACTIONS on Information,
vol. E94-D, no. 1, pp. 127-136, January 2011, doi: 10.1587/transinf.E94.D.127.
Abstract: This paper presents an unsupervised learning-based method for selecting feature points and classifying object categories without setting the number of categories in advance. Our method consists of the following procedures: 1) detection of feature points and description of features using the Scale-Invariant Feature Transform (SIFT), 2) selection of target feature points using One-Class Support Vector Machines (OC-SVMs), 3) generation of visual words from all SIFT descriptors and of a histogram over the selected feature points for each image using Self-Organizing Maps (SOMs), 4) formation of labels using Adaptive Resonance Theory-2 (ART-2), and 5) creation and classification of categories on a category map of Counter Propagation Networks (CPNs), which visualizes spatial relations between categories. Classification results for static images from the Caltech-256 object category dataset and for dynamic, time-series images acquired by a robot during movement demonstrate that our method can visualize the spatial relations of categories while maintaining time-series characteristics. Moreover, we emphasize the effectiveness of our method for classifying categories under changes in object appearance.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.E94.D.127/_p
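Step 3 of the pipeline described in the abstract (quantizing SIFT descriptors into visual words with a SOM, then building a per-image histogram) can be sketched roughly as follows. This is a toy illustration, not the authors' implementation: random 8-D vectors stand in for 128-D SIFT descriptors, and the grid size, learning-rate schedule, and iteration count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(descriptors, grid=(4, 4), iters=500, lr0=0.5, sigma0=1.5):
    """Train a small Self-Organizing Map; each unit's weight vector acts as one visual word."""
    h, w = grid
    dim = descriptors.shape[1]
    weights = rng.normal(size=(h * w, dim))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    for t in range(iters):
        x = descriptors[rng.integers(len(descriptors))]
        # Best-matching unit: the unit whose weights are closest to the sample.
        bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
        frac = t / iters
        lr = lr0 * (1.0 - frac)                      # linearly decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 1e-3         # shrinking neighborhood radius
        dist2 = ((coords - coords[bmu]) ** 2).sum(axis=1)
        nh = np.exp(-dist2 / (2.0 * sigma ** 2))     # Gaussian neighborhood on the grid
        weights += lr * nh[:, None] * (x - weights)
    return weights

def bow_histogram(descriptors, weights):
    """Assign each descriptor to its nearest SOM unit; return a normalized histogram."""
    idx = np.argmin(((descriptors[:, None, :] - weights[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(idx, minlength=len(weights)).astype(float)
    return hist / hist.sum()

# Toy 8-D "descriptors" standing in for an image's 128-D SIFT descriptors.
descs = rng.normal(size=(200, 8))
som = train_som(descs)
hist = bow_histogram(descs, som)
print(hist.shape, round(hist.sum(), 6))
```

In the paper's pipeline such histograms would then be labeled by ART-2 and mapped onto a CPN category map; here the SOM merely serves as the vector quantizer producing the bag-of-visual-words representation.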
@ARTICLE{e94-d_1_127,
author={Masahiro TSUKADA and Yuya UTSUMI and Hirokazu MADOKORO and Kazuhito SATO},
journal={IEICE TRANSACTIONS on Information},
title={Unsupervised Feature Selection and Category Classification for a Vision-Based Mobile Robot},
year={2011},
volume={E94-D},
number={1},
pages={127-136},
abstract={This paper presents an unsupervised learning-based method for selection of feature points and object category classification without previous setting of the number of categories. Our method consists of the following procedures: 1) detection of feature points and description of features using a Scale-Invariant Feature Transform (SIFT), 2) selection of target feature points using One Class-Support Vector Machines (OC-SVMs), 3) generation of visual words of all SIFT descriptors and histograms in each image of selected feature points using Self-Organizing Maps (SOMs), 4) formation of labels using Adaptive Resonance Theory-2 (ART-2), and 5) creation and classification of categories on a category map of Counter Propagation Networks (CPNs) for visualizing spatial relations between categories. Classification results of static images using a Caltech-256 object category dataset and dynamic images using time-series images obtained using a robot according to movements respectively demonstrate that our method can visualize spatial relations of categories while maintaining time-series characteristics. Moreover, we emphasize the effectiveness of our method for category classification of appearance changes of objects.},
keywords={},
doi={10.1587/transinf.E94.D.127},
ISSN={1745-1361},
month={January},}
TY - JOUR
TI - Unsupervised Feature Selection and Category Classification for a Vision-Based Mobile Robot
T2 - IEICE TRANSACTIONS on Information
SP - 127
EP - 136
AU - Masahiro TSUKADA
AU - Yuya UTSUMI
AU - Hirokazu MADOKORO
AU - Kazuhito SATO
PY - 2011
DO - 10.1587/transinf.E94.D.127
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E94-D
IS - 1
JA - IEICE TRANSACTIONS on Information
Y1 - January 2011
AB - This paper presents an unsupervised learning-based method for selection of feature points and object category classification without previous setting of the number of categories. Our method consists of the following procedures: 1) detection of feature points and description of features using a Scale-Invariant Feature Transform (SIFT), 2) selection of target feature points using One Class-Support Vector Machines (OC-SVMs), 3) generation of visual words of all SIFT descriptors and histograms in each image of selected feature points using Self-Organizing Maps (SOMs), 4) formation of labels using Adaptive Resonance Theory-2 (ART-2), and 5) creation and classification of categories on a category map of Counter Propagation Networks (CPNs) for visualizing spatial relations between categories. Classification results of static images using a Caltech-256 object category dataset and dynamic images using time-series images obtained using a robot according to movements respectively demonstrate that our method can visualize spatial relations of categories while maintaining time-series characteristics. Moreover, we emphasize the effectiveness of our method for category classification of appearance changes of objects.
ER -