Rapid advances in neural sciences and VLSI design technologies have provided an excellent means to boost the computational capability and efficiency of data- and signal-processing tasks by several orders of magnitude. With massively parallel processing capabilities, artificial neural networks can be used to solve many engineering and scientific problems. Because its data-communication structure is optimized for artificial-intelligence applications, the neurocomputer is considered the most promising sixth-generation computing machine. Typical applications of artificial neural networks include associative memory, pattern classification, early vision processing, speech recognition, image data compression, and intelligent robot control. VLSI neural circuits play an important role in exploring and exploiting the rich properties of artificial neural networks through programmable synapses and gain-adjustable neurons. The basic building blocks of analog VLSI neural networks are operational amplifiers serving as electronic neurons and synthesized resistors serving as electronic synapses.
Synapse weight information can be stored in dynamically refreshed capacitors for medium-term storage, or in the floating gate of an EEPROM cell for long-term storage. The feedback path in the amplifier can continuously change the output-neuron operation from a unity-gain configuration to a high-gain configuration. This adjustability of the voltage gain in the output neurons allows hardware annealing to be implemented in analog VLSI neural chips, finding optimal solutions very efficiently. Both supervised learning and unsupervised learning can be implemented with the programmable neural chips.
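The gain-sweeping idea above can be illustrated in software. The sketch below is a minimal simulation, not the book's circuit: it integrates the standard continuous Hopfield dynamics du/dt = -u + Wv + I with neuron outputs v = tanh(g·u), and ramps the gain g from a low starting value (the analog counterpart of a high annealing temperature) to a high final value. All parameter names and the geometric gain schedule are illustrative assumptions.

```python
import numpy as np

def anneal_hopfield(W, I, g_start=0.1, g_final=50.0, steps=2000, dt=0.01):
    """Software sketch of hardware annealing on a continuous Hopfield net.

    Neuron outputs are v_i = tanh(g * u_i); the voltage gain g is swept
    from a low starting value to a high final value, mimicking the
    cooling schedule realized on-chip by gain-adjustable neurons.
    W is a symmetric synapse matrix with zero diagonal; I is the bias.
    """
    n = len(I)
    u = 0.01 * np.random.default_rng(0).standard_normal(n)  # small random start
    gains = np.geomspace(g_start, g_final, steps)           # illustrative gain schedule
    for g in gains:
        v = np.tanh(g * u)          # gain-adjustable neuron transfer function
        du = -u + W @ v + I         # continuous Hopfield circuit dynamics
        u = u + dt * du             # Euler integration step
    return np.sign(np.tanh(g_final * u))  # saturated outputs in {-1, +1}

# Toy example: two positively coupled neurons with a small positive bias
# settle into a mutually consistent state.
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])
I = np.array([0.1, 0.1])
v = anneal_hopfield(W, I)
```

Starting at low gain keeps the energy surface smooth so the state is not trapped early; raising the gain gradually hardens the neurons toward binary decisions, which is the essence of the hardware annealing discussed in Chapter 3.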
1. Introduction
  1.1 Overview of Neural Architectures
  1.2 VLSI Neural Network Design Methodology
2. VLSI Hopfield Networks
  2.1 Circuit Dynamics of Hopfield Networks
  2.2 Existence of Local Minima
  2.3 Elimination of Local Minima
  2.4 Neural-Based A/D Converter Without Local Minima
    2.4.1 The Step Function Approach
    2.4.2 The Correction Logic Approach
  2.5 Traveling Salesman Problem
    2.5.1 Competitive-Hopfield Network Approach
    2.5.2 Search for Optimal Solution
3. Hardware Annealing Theory
  3.1 Simulated Annealing in Software Computation
  3.2 Hardware Annealing
    3.2.1 Starting Voltage Gain of the Cooling Schedule
    3.2.2 Final Voltage Gain of the Cooling Schedule
  3.3 Application to the Neural-Based A/D Converter
    3.3.1 Neuron Gain Requirement
    3.3.2 Relaxed Gain Requirement Using Modified Synapse Weightings
4. Programmable Synapses and Gain-Adjustable Neurons
  4.1 Compact and Programmable Neural Chips
  4.2 Medium-Term and Long-Term Storage of Synapse Weight
    4.2.1 DRAM-Style Weight Storage
    4.2.2 EEPROM-Style Weight Storage
5. System Integration for VLSI Neurocomputing
  5.1 System Module Using Programmable Neural Chip
  5.2 Application Examples
    5.2.1 Hopfield Neural-Based A/D Converter
    5.2.2 Modified Hopfield Network for Image Restoration
6. Alternative VLSI Neural Chips
  6.1 Neural Sensory Chips
  6.2 Various Analog Neural Chips
    6.2.1 Analog Neurons
    6.2.2 Synapses with Fixed Weights
    6.2.3 Programmable Synapses
  6.3 Various Digital Neural Chips
7. Conclusions and Future Work
Appendixes
Series: Kluwer International Series in Engineering and Computer Science
Number Of Pages: 234
Published: December 2009
Country of Publication: NL
Dimensions (cm): 23.5 x 15.5
Weight (kg): 1.2