Recently, multi-view dictionary learning technique has attracted lots of research interest. Although several multi-view dictionary learning methods have been addressed, they can be further improved. Most of existing multi-view dictionary learning methods adopt the l0 or l1-norm sparsity constraint on the representation coefficients, which makes the training and testing phases time-consuming. In this paper, we propose a novel multi-view dictionary learning approach named multi-view synthesis and analysis dictionaries learning (MSADL), which jointly learns multiple discriminant dictionary pairs with each corresponding to one view and containing a structured synthesis dictionary and a structured analysis dictionary. MSADL utilizes synthesis dictionaries to achieve class-specific reconstruction and uses analysis dictionaries to generate discriminative code coefficients by linear projection. Furthermore, we design an uncorrelation term for multi-view dictionary learning, such that the redundancy among synthesis dictionaries learned from different views can be reduced. Two widely used datasets are employed as test data. Experimental results demonstrate the efficiency and effectiveness of the proposed approach.
Fei WU
Nanjing University of Posts and Telecommunications (NJUPT)
Xiwei DONG
Nanjing University of Posts and Telecommunications (NJUPT)
Lu HAN
Nanjing University of Posts and Telecommunications (NJUPT)
Xiao-Yuan JING
Nanjing University of Posts and Telecommunications (NJUPT)
Yi-mu JI
Nanjing University of Posts and Telecommunications (NJUPT)
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Fei WU, Xiwei DONG, Lu HAN, Xiao-Yuan JING, Yi-mu JI, "Multi-View Synthesis and Analysis Dictionaries Learning for Classification" in IEICE TRANSACTIONS on Information and Systems,
vol. E102-D, no. 3, pp. 659-662, March 2019, doi: 10.1587/transinf.2018EDL8107.
Abstract: Recently, multi-view dictionary learning technique has attracted lots of research interest. Although several multi-view dictionary learning methods have been addressed, they can be further improved. Most of existing multi-view dictionary learning methods adopt the l0 or l1-norm sparsity constraint on the representation coefficients, which makes the training and testing phases time-consuming. In this paper, we propose a novel multi-view dictionary learning approach named multi-view synthesis and analysis dictionaries learning (MSADL), which jointly learns multiple discriminant dictionary pairs with each corresponding to one view and containing a structured synthesis dictionary and a structured analysis dictionary. MSADL utilizes synthesis dictionaries to achieve class-specific reconstruction and uses analysis dictionaries to generate discriminative code coefficients by linear projection. Furthermore, we design an uncorrelation term for multi-view dictionary learning, such that the redundancy among synthesis dictionaries learned from different views can be reduced. Two widely used datasets are employed as test data. Experimental results demonstrate the efficiency and effectiveness of the proposed approach.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2018EDL8107/_p
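The abstract above explains the core mechanism: analysis dictionaries code a sample by a plain linear projection (avoiding costly l0/l1 sparse coding), and synthesis dictionaries reconstruct it class by class, so a sample can be assigned to the class whose dictionary pair reconstructs it best. The following is a minimal single-view sketch of that idea; the dictionary matrices here are random stand-ins for learned ones, and all names and shapes are illustrative assumptions, not the paper's actual MSADL implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim, atoms = 3, 20, 5

# Hypothetical pre-trained dictionary pairs, one per class:
# D[k] is a class-specific synthesis sub-dictionary (dim x atoms),
# P[k] is the matching analysis sub-dictionary (atoms x dim).
# In MSADL these would be learned jointly; random stand-ins here.
D = [rng.standard_normal((dim, atoms)) for _ in range(n_classes)]
P = [rng.standard_normal((atoms, dim)) for _ in range(n_classes)]

def classify(x):
    """Assign x to the class whose dictionary pair reconstructs it best.

    The analysis dictionary codes x by linear projection (a = P[k] @ x,
    no iterative sparse optimization), the synthesis dictionary
    reconstructs it (D[k] @ a), and the class with the smallest
    reconstruction residual wins.
    """
    residuals = [np.linalg.norm(x - D[k] @ (P[k] @ x))
                 for k in range(n_classes)]
    return int(np.argmin(residuals))

x = rng.standard_normal(dim)
print(classify(x))  # index of the best-reconstructing class
```

In the multi-view setting described in the abstract, one such dictionary pair would be learned per view, and per-class residuals could be accumulated across views before taking the argmin.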
@ARTICLE{e102-d_3_659,
author={Fei WU and Xiwei DONG and Lu HAN and Xiao-Yuan JING and Yi-mu JI},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Multi-View Synthesis and Analysis Dictionaries Learning for Classification},
year={2019},
volume={E102-D},
number={3},
pages={659-662},
abstract={Recently, multi-view dictionary learning technique has attracted lots of research interest. Although several multi-view dictionary learning methods have been addressed, they can be further improved. Most of existing multi-view dictionary learning methods adopt the l0 or l1-norm sparsity constraint on the representation coefficients, which makes the training and testing phases time-consuming. In this paper, we propose a novel multi-view dictionary learning approach named multi-view synthesis and analysis dictionaries learning (MSADL), which jointly learns multiple discriminant dictionary pairs with each corresponding to one view and containing a structured synthesis dictionary and a structured analysis dictionary. MSADL utilizes synthesis dictionaries to achieve class-specific reconstruction and uses analysis dictionaries to generate discriminative code coefficients by linear projection. Furthermore, we design an uncorrelation term for multi-view dictionary learning, such that the redundancy among synthesis dictionaries learned from different views can be reduced. Two widely used datasets are employed as test data. Experimental results demonstrate the efficiency and effectiveness of the proposed approach.},
keywords={},
doi={10.1587/transinf.2018EDL8107},
ISSN={1745-1361},
month={March},}
TY - JOUR
TI - Multi-View Synthesis and Analysis Dictionaries Learning for Classification
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 659
EP - 662
AU - Fei WU
AU - Xiwei DONG
AU - Lu HAN
AU - Xiao-Yuan JING
AU - Yi-mu JI
PY - 2019
DO - 10.1587/transinf.2018EDL8107
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E102-D
IS - 3
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - March 2019
AB - Recently, multi-view dictionary learning technique has attracted lots of research interest. Although several multi-view dictionary learning methods have been addressed, they can be further improved. Most of existing multi-view dictionary learning methods adopt the l0 or l1-norm sparsity constraint on the representation coefficients, which makes the training and testing phases time-consuming. In this paper, we propose a novel multi-view dictionary learning approach named multi-view synthesis and analysis dictionaries learning (MSADL), which jointly learns multiple discriminant dictionary pairs with each corresponding to one view and containing a structured synthesis dictionary and a structured analysis dictionary. MSADL utilizes synthesis dictionaries to achieve class-specific reconstruction and uses analysis dictionaries to generate discriminative code coefficients by linear projection. Furthermore, we design an uncorrelation term for multi-view dictionary learning, such that the redundancy among synthesis dictionaries learned from different views can be reduced. Two widely used datasets are employed as test data. Experimental results demonstrate the efficiency and effectiveness of the proposed approach.
ER -