The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Han-Wook LEE, Chan-Ik PARK, "An Efficient Parallel Block Backpropagation Learning Algorithm in Transputer-Based Mesh-Connected Parallel Computers" in IEICE TRANSACTIONS on Information,
vol. E83-D, no. 8, pp. 1622-1630, August 2000.
Abstract: The learning process is essential for good performance when a neural network is applied to a practical application. The backpropagation algorithm is a well-known learning method widely used in most neural networks. However, since the backpropagation algorithm is time-consuming, much research has been done to speed up the process. The block backpropagation algorithm, which appears to be more efficient than backpropagation, was recently proposed by Coetzee in [2]. In this paper, we propose an efficient parallel algorithm for the block backpropagation method and its performance model in mesh-connected parallel computer systems. The proposed algorithm adopts a master-slave model for weight broadcasting and data parallelism for the computation of weights. In order to validate our performance model, a neural network for a printed character recognition application is implemented on TiME, a prototype parallel machine consisting of 32 transputers connected in a mesh topology. It is shown that the speedup predicted by our performance model is very close to that obtained by experiments.
URL: https://global.ieice.org/en_transactions/information/10.1587/e83-d_8_1622/_p
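The abstract describes the parallel scheme only at a high level: a master broadcasts the current weights, each slave computes partial weight changes over its own block of training patterns, and the master accumulates the results. The following is a minimal sketch of that data-parallel pattern for a single-hidden-layer network, with the P slave processors simulated sequentially; all names (block_gradients, train_block_bp, etc.) are illustrative assumptions and do not reproduce the authors' implementation or Coetzee's exact block update rule.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def block_gradients(W1, W2, X, T):
    # Forward pass over one block of patterns, then squared-error deltas.
    H = sigmoid(X @ W1)               # hidden activations
    Y = sigmoid(H @ W2)               # network outputs
    dY = (Y - T) * Y * (1.0 - Y)      # output-layer delta
    dH = (dY @ W2.T) * H * (1.0 - H)  # hidden-layer delta
    return X.T @ dH, H.T @ dY         # gradients w.r.t. W1 and W2

def train_block_bp(X, T, n_hidden=8, P=4, eta=0.1, epochs=100, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.1, size=(X.shape[1], n_hidden))
    W2 = rng.normal(scale=0.1, size=(n_hidden, T.shape[1]))
    # The training patterns are partitioned once across the P slave processors.
    blocks = np.array_split(np.arange(X.shape[0]), P)
    for _ in range(epochs):
        # Master "broadcasts" the current weights; each slave computes partial
        # gradients over its own block of patterns (simulated sequentially here).
        partials = [block_gradients(W1, W2, X[b], T[b]) for b in blocks]
        # Master accumulates the partial gradients and updates the weights.
        G1 = sum(g for g, _ in partials)
        G2 = sum(g for _, g in partials)
        W1 -= eta * G1
        W2 -= eta * G2
    return W1, W2

On the actual TiME machine described in the abstract, the per-block gradient computations would run concurrently on the mesh of 32 transputers, with the master handling the weight broadcast and accumulation steps.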
@ARTICLE{e83-d_8_1622,
author={Han-Wook LEE and Chan-Ik PARK},
journal={IEICE TRANSACTIONS on Information},
title={An Efficient Parallel Block Backpropagation Learning Algorithm in Transputer-Based Mesh-Connected Parallel Computers},
year={2000},
volume={E83-D},
number={8},
pages={1622-1630},
abstract={The learning process is essential for good performance when a neural network is applied to a practical application. The backpropagation algorithm is a well-known learning method widely used in most neural networks. However, since the backpropagation algorithm is time-consuming, much research has been done to speed up the process. The block backpropagation algorithm, which appears to be more efficient than backpropagation, was recently proposed by Coetzee in [2]. In this paper, we propose an efficient parallel algorithm for the block backpropagation method and its performance model in mesh-connected parallel computer systems. The proposed algorithm adopts a master-slave model for weight broadcasting and data parallelism for the computation of weights. In order to validate our performance model, a neural network for a printed character recognition application is implemented on TiME, a prototype parallel machine consisting of 32 transputers connected in a mesh topology. It is shown that the speedup predicted by our performance model is very close to that obtained by experiments.},
keywords={},
doi={},
ISSN={},
month={August},
}
TY - JOUR
TI - An Efficient Parallel Block Backpropagation Learning Algorithm in Transputer-Based Mesh-Connected Parallel Computers
T2 - IEICE TRANSACTIONS on Information
SP - 1622
EP - 1630
AU - Han-Wook LEE
AU - Chan-Ik PARK
PY - 2000
DO -
JO - IEICE TRANSACTIONS on Information
SN -
VL - E83-D
IS - 8
JA - IEICE TRANSACTIONS on Information
Y1 - August 2000
AB - The learning process is essential for good performance when a neural network is applied to a practical application. The backpropagation algorithm is a well-known learning method widely used in most neural networks. However, since the backpropagation algorithm is time-consuming, much research has been done to speed up the process. The block backpropagation algorithm, which appears to be more efficient than backpropagation, was recently proposed by Coetzee in [2]. In this paper, we propose an efficient parallel algorithm for the block backpropagation method and its performance model in mesh-connected parallel computer systems. The proposed algorithm adopts a master-slave model for weight broadcasting and data parallelism for the computation of weights. In order to validate our performance model, a neural network for a printed character recognition application is implemented on TiME, a prototype parallel machine consisting of 32 transputers connected in a mesh topology. It is shown that the speedup predicted by our performance model is very close to that obtained by experiments.
ER -