Copyright notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Hongbing ZHU, Kei EGUCHI, Toru TABATA, "A Training Algorithm for Multilayer Neural Networks of Hard-Limiting Units with Random Bias," IEICE TRANSACTIONS on Fundamentals, vol. E83-A, no. 6, pp. 1040-1048, June 2000, doi: 10.1587/e83-a_6_1040.
Abstract: The conventional back-propagation algorithm cannot be applied to networks of units with hard-limiting output functions, because these functions are not differentiable. In this paper, a gradient descent algorithm suitable for training multilayer feedforward networks of hard-limiting units is presented. To obtain a differentiable output function for a hard-limiting unit, we exploit the fact that if the bias of a unit in such a network is a random variable with a smooth distribution function, the probability of the unit's output being in a particular state is a continuously differentiable function of the unit's inputs. Three simulation results are given, showing that the performance of this algorithm is similar to that of conventional back-propagation.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/e83-a_6_1040/_p
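To make the mechanism described in the abstract concrete, here is a minimal numerical sketch (our illustration, not code from the paper): a hard-limiting unit fires when w·x + b exceeds zero, and when the bias b is a random variable with a smooth cumulative distribution function F, the firing probability P(output = 1) = 1 - F(-w·x) is continuously differentiable in the inputs, so it can drive gradient descent. The logistic bias distribution below is an assumed example, not the paper's specific choice.

```python
import numpy as np

def p_fire(w, x, bias_cdf):
    """Probability that a hard-limiting unit with random bias outputs 1.

    The unit outputs 1 when w.x + b > 0. With bias b drawn from a smooth
    CDF F, P(output = 1) = P(b > -w.x) = 1 - F(-w.x), which is a
    continuously differentiable function of the unit's inputs.
    """
    return 1.0 - bias_cdf(-np.dot(w, x))

def logistic_cdf(t, s=1.0):
    # Illustrative choice (not specified by the paper): a logistic-
    # distributed bias with scale s, F(t) = 1 / (1 + exp(-t / s)).
    return 1.0 / (1.0 + np.exp(-t / s))

w = np.array([0.5, -1.2])
x = np.array([1.0, 0.3])
p = p_fire(w, x, logistic_cdf)

# For the logistic case, p = sigmoid(w.x / s), so the gradient of the
# firing probability with respect to w is p * (1 - p) * x / s -- the same
# derivative form conventional back-propagation uses for sigmoid units.
grad_w = p * (1.0 - p) * x  # s = 1.0
print(f"P(output=1) = {p:.4f}, grad = {grad_w}")
```

Note that with a logistic-distributed bias, the expected output of the stochastic hard limiter reduces exactly to a sigmoid unit, which is consistent with the abstract's observation that the algorithm performs similarly to conventional back-propagation.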
@ARTICLE{e83-a_6_1040,
author={Hongbing ZHU and Kei EGUCHI and Toru TABATA},
journal={IEICE TRANSACTIONS on Fundamentals},
title={A Training Algorithm for Multilayer Neural Networks of Hard-Limiting Units with Random Bias},
year={2000},
volume={E83-A},
number={6},
pages={1040-1048},
abstract={The conventional back-propagation algorithm cannot be applied to networks of units with hard-limiting output functions, because these functions are not differentiable. In this paper, a gradient descent algorithm suitable for training multilayer feedforward networks of hard-limiting units is presented. To obtain a differentiable output function for a hard-limiting unit, we exploit the fact that if the bias of a unit in such a network is a random variable with a smooth distribution function, the probability of the unit's output being in a particular state is a continuously differentiable function of the unit's inputs. Three simulation results are given, showing that the performance of this algorithm is similar to that of conventional back-propagation.},
doi={10.1587/e83-a_6_1040},
month={June},
}
TY - JOUR
TI - A Training Algorithm for Multilayer Neural Networks of Hard-Limiting Units with Random Bias
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 1040
EP - 1048
AU - Hongbing ZHU
AU - Kei EGUCHI
AU - Toru TABATA
PY - 2000
DO - 10.1587/e83-a_6_1040
JO - IEICE TRANSACTIONS on Fundamentals
VL - E83-A
IS - 6
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - June 2000
AB - The conventional back-propagation algorithm cannot be applied to networks of units with hard-limiting output functions, because these functions are not differentiable. In this paper, a gradient descent algorithm suitable for training multilayer feedforward networks of hard-limiting units is presented. To obtain a differentiable output function for a hard-limiting unit, we exploit the fact that if the bias of a unit in such a network is a random variable with a smooth distribution function, the probability of the unit's output being in a particular state is a continuously differentiable function of the unit's inputs. Three simulation results are given, showing that the performance of this algorithm is similar to that of conventional back-propagation.
ER -