Copyrights notice
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Massimo CONTI, Paolo CRIPPA, Giovanni GUAITINI, Simone ORCIONI, Claudio TURCHETTI, "An Analog CMOS Approximate Identity Neural Network with Stochastic Learning and Multilevel Weight Storage" in IEICE TRANSACTIONS on Fundamentals, vol. E82-A, no. 7, pp. 1344-1357, July 1999.
Abstract: In this paper, CMOS VLSI circuit solutions are suggested for on-chip learning and weight storage that are simple and silicon-area efficient. In particular, a stochastic learning scheme, named Random Weight Change, and a multistable weight storage approach have been implemented. Additionally, the problem of the influence of technological variations on learning accuracy is discussed. Even though both the learning scheme and the weight storage are quite general, in this paper we refer to a class of networks, named Approximate Identity Neural Networks, which are particularly suitable for implementation with analog CMOS circuits. As a test vehicle, a small network with four neurons, 16 weights, on-chip learning, and weight storage has been fabricated in a 1.2 µm double-metal CMOS process.
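The Random Weight Change scheme named in the abstract admits a compact software sketch. The following is a minimal Python illustration of the general RWC idea (perturb every weight by a fixed ±δ, keep the perturbation directions while the error decreases, re-randomize them otherwise); the function name, parameters, and toy loss are illustrative assumptions, and this does not reproduce the paper's analog circuit implementation.

```python
import random

def random_weight_change(loss, weights, delta=0.05, steps=5000):
    """Sketch of Random Weight Change (RWC) stochastic learning.

    Each weight carries a perturbation of fixed magnitude `delta` with a
    random sign. After every update the scalar loss is re-evaluated: if it
    decreased, the same perturbation directions are reused; otherwise all
    signs are re-randomized. No gradients are required, which is what makes
    the scheme attractive for simple analog hardware.
    """
    w = list(weights)
    d = [delta * random.choice((-1.0, 1.0)) for _ in w]
    prev = loss(w)
    for _ in range(steps):
        w = [wi + di for wi, di in zip(w, d)]
        current = loss(w)
        if current >= prev:
            # No improvement: draw fresh random directions for all weights.
            d = [delta * random.choice((-1.0, 1.0)) for _ in w]
        prev = current
    return w, prev

# Toy usage (hypothetical): fit one weight so that 3 * w approximates 6.
random.seed(0)
target_loss = lambda w: (3.0 * w[0] - 6.0) ** 2
w_final, err = random_weight_change(target_loss, [0.0])
```

Because only the scalar error is observed, the weights settle into a small random-walk neighborhood of the minimum rather than converging exactly, with the residual jitter set by `delta`.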
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/e82-a_7_1344/_p
@ARTICLE{e82-a_7_1344,
author={Massimo CONTI and Paolo CRIPPA and Giovanni GUAITINI and Simone ORCIONI and Claudio TURCHETTI},
journal={IEICE TRANSACTIONS on Fundamentals},
title={An Analog CMOS Approximate Identity Neural Network with Stochastic Learning and Multilevel Weight Storage},
year={1999},
volume={E82-A},
number={7},
pages={1344-1357},
abstract={In this paper, CMOS VLSI circuit solutions are suggested for on-chip learning and weight storage that are simple and silicon-area efficient. In particular, a stochastic learning scheme, named Random Weight Change, and a multistable weight storage approach have been implemented. Additionally, the problem of the influence of technological variations on learning accuracy is discussed. Even though both the learning scheme and the weight storage are quite general, in this paper we refer to a class of networks, named Approximate Identity Neural Networks, which are particularly suitable for implementation with analog CMOS circuits. As a test vehicle, a small network with four neurons, 16 weights, on-chip learning, and weight storage has been fabricated in a 1.2 µm double-metal CMOS process.},
keywords={},
doi={},
ISSN={},
month={July},}
TY - JOUR
TI - An Analog CMOS Approximate Identity Neural Network with Stochastic Learning and Multilevel Weight Storage
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 1344
EP - 1357
AU - Massimo CONTI
AU - Paolo CRIPPA
AU - Giovanni GUAITINI
AU - Simone ORCIONI
AU - Claudio TURCHETTI
PY - 1999
DO -
JO - IEICE TRANSACTIONS on Fundamentals
SN -
VL - E82-A
IS - 7
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - July 1999
AB - In this paper, CMOS VLSI circuit solutions are suggested for on-chip learning and weight storage that are simple and silicon-area efficient. In particular, a stochastic learning scheme, named Random Weight Change, and a multistable weight storage approach have been implemented. Additionally, the problem of the influence of technological variations on learning accuracy is discussed. Even though both the learning scheme and the weight storage are quite general, in this paper we refer to a class of networks, named Approximate Identity Neural Networks, which are particularly suitable for implementation with analog CMOS circuits. As a test vehicle, a small network with four neurons, 16 weights, on-chip learning, and weight storage has been fabricated in a 1.2 µm double-metal CMOS process.
ER -