Copyrights notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Naotake KAMIURA, Yasuyuki TANIGUCHI, Yutaka HATA, Nobuyuki MATSUI, "A Learning Algorithm with Activation Function Manipulation for Fault Tolerant Neural Networks" in IEICE TRANSACTIONS on Information, vol. E84-D, no. 7, pp. 899-905, July 2001.
Abstract: In this paper we propose a learning algorithm to enhance the fault tolerance of feedforward neural networks (NNs for short) by manipulating the gradient of the sigmoid activation function of the neuron. We assume stuck-at-0 and stuck-at-1 faults of the connection link. For the output layer, we employ the function with a relatively gentle gradient to enhance its fault tolerance. To enhance the fault tolerance of the hidden layer, we steepen the gradient of the function after convergence. The experimental results for a character recognition problem show that our NN is superior in fault tolerance, learning cycles and learning time to other NNs trained with the algorithms employing fault injection, forcible weight limit and the calculation of relevance of each weight to the output error. Besides, the gradient manipulation incorporated in our algorithm never spoils the generalization ability.
URL: https://global.ieice.org/en_transactions/information/10.1587/e84-d_7_899/_p
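The gradient manipulation described in the abstract can be pictured with a small sketch. The following Python snippet is a minimal illustration, assuming a standard gain-parameterized sigmoid f(x) = 1/(1 + exp(-g*x)) and a simplified fault model in which a faulty connection link's weight is pinned to 0 (stuck-at-0) or 1 (stuck-at-1); the gain values, layer sizes and fault model are illustrative assumptions, not the exact settings used in the paper.

```python
import numpy as np

def sigmoid(x, gain=1.0):
    # Gain-parameterized sigmoid: a smaller gain gives a gentler gradient,
    # a larger gain gives a steeper one.
    return 1.0 / (1.0 + np.exp(-gain * x))

def forward(x, w_hidden, w_output, hidden_gain, output_gain):
    # Two-layer feedforward pass with a separate gain per layer.
    # Per the abstract: a relatively gentle gradient for the output layer,
    # and a hidden-layer gradient that is steepened after convergence.
    h = sigmoid(w_hidden @ x, gain=hidden_gain)
    return sigmoid(w_output @ h, gain=output_gain)

def inject_stuck_at(w, row, col, value):
    # Simplified stuck-at fault on one connection link: the weight is
    # pinned to 0 (stuck-at-0) or 1 (stuck-at-1).
    w_faulty = w.copy()
    w_faulty[row, col] = value
    return w_faulty

# Illustrative usage (sizes and gain values are hypothetical):
rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w_hidden = rng.standard_normal((4, 8))
w_output = rng.standard_normal((2, 4))

y_ok = forward(x, w_hidden, w_output, hidden_gain=4.0, output_gain=0.5)
y_faulty = forward(x, inject_stuck_at(w_hidden, 0, 0, 0.0), w_output,
                   hidden_gain=4.0, output_gain=0.5)
print(np.abs(y_ok - y_faulty))  # output deviation caused by the injected fault
```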
@ARTICLE{e84-d_7_899,
author={Naotake KAMIURA and Yasuyuki TANIGUCHI and Yutaka HATA and Nobuyuki MATSUI},
journal={IEICE TRANSACTIONS on Information},
title={A Learning Algorithm with Activation Function Manipulation for Fault Tolerant Neural Networks},
year={2001},
volume={E84-D},
number={7},
pages={899-905},
abstract={In this paper we propose a learning algorithm to enhance the fault tolerance of feedforward neural networks (NNs for short) by manipulating the gradient of the sigmoid activation function of the neuron. We assume stuck-at-0 and stuck-at-1 faults of the connection link. For the output layer, we employ the function with a relatively gentle gradient to enhance its fault tolerance. To enhance the fault tolerance of the hidden layer, we steepen the gradient of the function after convergence. The experimental results for a character recognition problem show that our NN is superior in fault tolerance, learning cycles and learning time to other NNs trained with the algorithms employing fault injection, forcible weight limit and the calculation of relevance of each weight to the output error. Besides, the gradient manipulation incorporated in our algorithm never spoils the generalization ability.},
keywords={},
doi={},
ISSN={},
month={July},}
TY - JOUR
TI - A Learning Algorithm with Activation Function Manipulation for Fault Tolerant Neural Networks
T2 - IEICE TRANSACTIONS on Information
SP - 899
EP - 905
AU - Naotake KAMIURA
AU - Yasuyuki TANIGUCHI
AU - Yutaka HATA
AU - Nobuyuki MATSUI
PY - 2001
DO -
JO - IEICE TRANSACTIONS on Information
SN -
VL - E84-D
IS - 7
JA - IEICE TRANSACTIONS on Information
Y1 - July 2001
AB - In this paper we propose a learning algorithm to enhance the fault tolerance of feedforward neural networks (NNs for short) by manipulating the gradient of the sigmoid activation function of the neuron. We assume stuck-at-0 and stuck-at-1 faults of the connection link. For the output layer, we employ the function with a relatively gentle gradient to enhance its fault tolerance. To enhance the fault tolerance of the hidden layer, we steepen the gradient of the function after convergence. The experimental results for a character recognition problem show that our NN is superior in fault tolerance, learning cycles and learning time to other NNs trained with the algorithms employing fault injection, forcible weight limit and the calculation of relevance of each weight to the output error. Besides, the gradient manipulation incorporated in our algorithm never spoils the generalization ability.
ER -