The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations (e.g., some numerals are rendered as "XNUMX").
Copyright notice
Lihua GUO
South China University of Technology
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Lihua GUO, "From Easy to Difficult: A Self-Paced Multi-Task Joint Sparse Representation Method" in IEICE TRANSACTIONS on Information and Systems,
vol. E101-D, no. 8, pp. 2115-2122, August 2018, doi: 10.1587/transinf.2017EDP7289.
Abstract: Multi-task joint sparse representation (MTJSR) is an efficient multi-task learning (MTL) method for solving different problems together using a shared sparse representation. Based on the human learning mechanism of self-paced learning, which gradually trains tasks from easy to difficult, I apply this mechanism to MTJSR and propose a multi-task joint sparse representation with self-paced learning (MTJSR-SP) algorithm. In MTJSR-SP, the self-paced learning mechanism is treated as a regularizer of the optimization function, and an iterative optimization is applied to solve it. Compared with traditional MTL methods, MTJSR-SP is more robust to noise and outliers. Experimental results on several datasets, i.e. two synthesized datasets, four datasets from the UCI machine learning repository, an Oxford flower dataset and the Caltech-256 image categorization dataset, validate the efficiency of MTJSR-SP.
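The core idea the abstract describes — self-paced learning expressed as a regularizer and solved by alternating (iterative) optimization — can be sketched in a few lines. The sketch below is not the paper's MTJSR-SP (which couples multiple tasks through a shared sparse representation); it is a minimal single-task least-squares illustration of the classic self-paced mechanism, and the function name and parameters (`lam`, `mu`, `iters`) are hypothetical choices for this example.

```python
import numpy as np

def self_paced_least_squares(X, y, lam=1.0, mu=1.3, iters=20):
    """Illustrative self-paced learning with hard sample weights:
    alternate between (a) fitting the model on the currently selected
    "easy" samples and (b) re-selecting samples whose loss falls below
    the pace threshold lam. lam grows by a factor mu each round, so
    harder samples are admitted gradually (from easy to difficult)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        losses = (X @ w - y) ** 2             # per-sample loss
        v = (losses < lam).astype(float)      # closed-form self-paced weights
        if v.sum() == 0:                      # nothing easy enough yet:
            v[np.argmin(losses)] = 1.0        # seed with the easiest sample
        Xv = X * v[:, None]                   # zero out unselected rows
        # weighted least squares on the selected subset (tiny ridge for stability)
        w = np.linalg.solve(Xv.T @ X + 1e-8 * np.eye(d), Xv.T @ y)
        lam *= mu                             # relax the pace threshold
    return w
```

Because high-loss samples stay excluded until the threshold grows past their loss, gross outliers are effectively never fit — which is the intuition behind the robustness to noise and outliers claimed for MTJSR-SP.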
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2017EDP7289/_p
@ARTICLE{e101-d_8_2115,
author={Lihua GUO},
journal={IEICE TRANSACTIONS on Information and Systems},
title={From Easy to Difficult: A Self-Paced Multi-Task Joint Sparse Representation Method},
year={2018},
volume={E101-D},
number={8},
pages={2115-2122},
abstract={Multi-task joint sparse representation (MTJSR) is an efficient multi-task learning (MTL) method for solving different problems together using a shared sparse representation. Based on the human learning mechanism of self-paced learning, which gradually trains tasks from easy to difficult, I apply this mechanism to MTJSR and propose a multi-task joint sparse representation with self-paced learning (MTJSR-SP) algorithm. In MTJSR-SP, the self-paced learning mechanism is treated as a regularizer of the optimization function, and an iterative optimization is applied to solve it. Compared with traditional MTL methods, MTJSR-SP is more robust to noise and outliers. Experimental results on several datasets, i.e. two synthesized datasets, four datasets from the UCI machine learning repository, an Oxford flower dataset and the Caltech-256 image categorization dataset, validate the efficiency of MTJSR-SP.},
keywords={},
doi={10.1587/transinf.2017EDP7289},
ISSN={1745-1361},
month={August},}
TY - JOUR
TI - From Easy to Difficult: A Self-Paced Multi-Task Joint Sparse Representation Method
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 2115
EP - 2122
AU - Lihua GUO
PY - 2018
DO - 10.1587/transinf.2017EDP7289
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E101-D
IS - 8
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - August 2018
AB - Multi-task joint sparse representation (MTJSR) is an efficient multi-task learning (MTL) method for solving different problems together using a shared sparse representation. Based on the human learning mechanism of self-paced learning, which gradually trains tasks from easy to difficult, I apply this mechanism to MTJSR and propose a multi-task joint sparse representation with self-paced learning (MTJSR-SP) algorithm. In MTJSR-SP, the self-paced learning mechanism is treated as a regularizer of the optimization function, and an iterative optimization is applied to solve it. Compared with traditional MTL methods, MTJSR-SP is more robust to noise and outliers. Experimental results on several datasets, i.e. two synthesized datasets, four datasets from the UCI machine learning repository, an Oxford flower dataset and the Caltech-256 image categorization dataset, validate the efficiency of MTJSR-SP.
ER -