Figures
Tables
Preface
Natural Computation
Introduction
The Brain
Subsystems
Maps
Neurons
Computational Theory
Elements of Natural Computation
Minimum Description Length
Example 1: A Program That Prints 10,000 Ones
Example 2: A Neuron's Receptive Field
Learning
Architectures
Constraints of Time and Space
Cognitive Hierarchies
Overview
Core Concepts
Learning to React: Memories
Learning During a Lifetime: Programs
Learning Across Generations: Architectures
The Grand Challenge
Notes
Exercises
Core Concepts
Fitness
Introduction
Bayes' Rule
Example: Vision Test
Probability Distributions
Discrete Distributions
Binomial Distribution
Poisson Distribution
Continuous Distributions
Normal Distribution
Gaussian Approximation to a Binomial Distribution
Example
Information Theory
Information Content and Channel Capacity
Entropy
Reversible Codes
Irreversible Codes
Classification
Minimum Description Length
Example: Image Coding
Appendix: Laws of Probability
Example
Notes
Exercises
Programs
Introduction
Heuristic Search
The Eight-Puzzle
Two-Person Games
Minimax
Alpha and Beta Cutoffs
Biological State Spaces
Notes
Exercises
Data
Data Compression
Coordinate Systems
Eigenvalues and Eigenvectors
Eigenvalues of Positive Matrices
Random Vectors
Normal Distribution
Eigenvalues and Eigenvectors of the Covariance Matrix
High-Dimensional Spaces
Clustering
Appendix: Linear Algebra Review
Notes
Exercises
Dynamics
Overview
Linear Systems
The General Case
Intuitive Meaning of Eigenvalues and Eigenvectors
Nonlinear Systems
Linearizing a Nonlinear System
Lyapunov Stability
Appendix: Taylor Series
Notes
Exercises
Optimization
Introduction
Minimization Algorithms
The Method of Lagrange Multipliers
Optimal Control
The Euler-Lagrange Method
Dynamic Programming
Notes
Exercises
Memories
The Cortex as a Hierarchical Memory
Neural Network Models
Content-Addressable Memory
Supervised Learning
Unsupervised Learning
Notes
Content-Addressable Memory
Introduction
Hopfield Memories
Stability
Lyapunov Stability
Kanerva Memories
Implementation
Performance of Kanerva Memories
Implementations of Kanerva Memories
Radial Basis Functions
Kalman Filtering
Notes
Exercises
Supervised Learning
Introduction
Perceptrons
Continuous Activation Functions
Unpacking the Notation
Generating the Solution
Recurrent Networks
Minimum Description Length
The Activation Function
Maximum Likelihood with Gaussian Errors
Error Functions
Notes
Exercises
Unsupervised Learning
Introduction
Principal Components
Competitive Learning
Topological Constraints
The Traveling Salesman Example
Natural Topologies
Supervised Competitive Learning
Multimodal Data
Initial Labeling Algorithm
Minimizing Disagreement
Independent Components
Notes
Exercises
Programs
Brain Subsystems That Use Chemical Rewards
The Role of Rewards
System Integration
Learning Models
Markov Systems
Reinforcement Learning
Notes
Markov Models
Introduction
Markov Models
Regular Chains
Nonregular Chains
Hidden Markov Models
Formal Definitions
Three Principal Problems
The Probability of an Observation Sequence
Most Probable States
Improving the Model
Note
Exercises
Reinforcement Learning
Introduction
Markov Decision Process
The Core Idea: Policy Improvement
Q-Learning
Temporal-Difference Learning
Learning with a Teacher
Partially Observable MDPs
Avoiding Bad States
Learning State Information from Temporal Sequences
Distinguishing the Value of States
Summary
Notes
Exercises
Systems
Gene Primer
Learning Across Generations: Systems
Standard Genetic Algorithms
Genetic Programming
Notes
Genetic Algorithms
Introduction
Genetic Operators
An Example
Schemata
Schemata Theorem
The Bandit Problem
Determining Fitness
Racing for Fitness
Coevolution of Parasites
Notes
Exercises
Genetic Programming
Introduction
Genetic Operators for Programs
Genetic Programming
Analysis
Modules
Testing for a Module Function
When to Diversify
Summary
Notes
Exercises
Summary
Learning to React: Memories
Learning During a Lifetime: Programs
Learning Across Generations: Systems
The Grand Challenge Revisited
Note
Index