
Strength or Accuracy
Credit Assignment in Learning Classifier Systems
By: Tim Kovacs
Hardcover | 20 January 2004
At a Glance
328 Pages
24.13 x 15.24 x 2.54 cm
Hardcover
$249.00
Ships in 5 to 7 business days
Table of Contents
| Section | Page |
|---|---|
| Introduction | p. 1 |
| Two Example Machine Learning Tasks | p. 2 |
| Types of Task | p. 3 |
| Supervised and Reinforcement Learning | p. 3 |
| Sequential and Non-sequential Decision Tasks | p. 4 |
| Two Challenges for Classifier Systems | p. 4 |
| Problem 1: Learning a Policy from Reinforcement | p. 5 |
| Problem 2: Generalisation | p. 5 |
| Solution Methods | p. 6 |
| Method 1: Reinforcement Learning Algorithms | p. 6 |
| Method 2: Evolutionary Algorithms | p. 6 |
| Learning Classifier Systems | p. 6 |
| The Tripartite LCS Structure | p. 7 |
| LCS = Policy Learning + Generalisation | p. 7 |
| Credit Assignment in Classifier Systems | p. 8 |
| Strength and Accuracy-based Classifier Systems | p. 8 |
| About the Book | p. 9 |
| Why Compare Strength and Accuracy? | p. 10 |
| Are LCS EC- or RL-based? | p. 11 |
| Moving in Design Space | p. 14 |
| Structure of the Book | p. 16 |
| Learning Classifier Systems | p. 19 |
| Types of Classifier Systems | p. 21 |
| Michigan and Pittsburgh LCS | p. 21 |
| XCS and Traditional LCS? | p. 21 |
| Representing Rules | p. 22 |
| The Standard Ternary Language | p. 22 |
| Other Representations | p. 24 |
| Summary of Rule Representation | p. 25 |
| Notation for Rules | p. 25 |
| XCS | p. 25 |
| Wilson's Motivation for XCS | p. 26 |
| Overview of XCS | p. 27 |
| Wilson's Explore/Exploit Framework | p. 30 |
| The Performance System | p. 32 |
| The XCS Performance System Algorithm | p. 32 |
| The Match Set and Prediction Array | p. 32 |
| Action Selection | p. 34 |
| Experience-weighting of System Prediction | p. 34 |
| The Credit Assignment System | p. 35 |
| The MAM Technique | p. 35 |
| The Credit Assignment Algorithm | p. 36 |
| Sequential and Non-sequential Updates | p. 36 |
| Parameter Update Order | p. 37 |
| XCS Parameter Updates | p. 38 |
| The Rule Discovery System | p. 41 |
| Random Initial Populations | p. 41 |
| Covering | p. 41 |
| The Niche Genetic Algorithm | p. 43 |
| Alternative Mutation Schemes | p. 44 |
| Triggering the Niche GA | p. 45 |
| Deletion of Rules | p. 45 |
| Classifier Parameter Initialisation | p. 46 |
| Subsumption Deletion | p. 47 |
| SB-XCS | p. 47 |
| Specification of SB-XCS | p. 48 |
| Comparison of SB-XCS and Other Strength LCS | p. 51 |
| Initial Tests of XCS and SB-XCS | p. 52 |
| The 6 Multiplexer | p. 52 |
| Woods2 | p. 55 |
| Summary | p. 60 |
| How Strength and Accuracy Differ | p. 63 |
| Thinking about Complex Systems | p. 63 |
| Holland's Rationale for CS-1 and his Later LCS | p. 65 |
| Schema Theory | p. 65 |
| The Bucket Brigade | p. 65 |
| Schema Theory + Bucket Brigade = Adaptation | p. 66 |
| Wilson's Rationale for XCS | p. 66 |
| A Bias towards Accurate Rules | p. 67 |
| A Bias towards General Rules | p. 67 |
| Complete Maps | p. 70 |
| Summary | p. 71 |
| A Rationale for SB-XCS | p. 71 |
| Analysis of Populations Evolved by XCS and SB-XCS | p. 71 |
| SB-XCS | p. 72 |
| XCS | p. 73 |
| Learning Rate | p. 76 |
| Different Goals, Different Representations | p. 76 |
| Default Hierarchies | p. 77 |
| Partial and Best Action Maps | p. 77 |
| Complete Maps | p. 79 |
| What do XCS and SB-XCS Really Learn? | p. 79 |
| Complete and Partial Maps Compared | p. 81 |
| Advantages of Partial Maps | p. 82 |
| Disadvantages of Partial Maps | p. 85 |
| Complete Maps and Strength | p. 91 |
| Contrasting Complete and Partial Maps in RL Terminology | p. 92 |
| Summary of Comparison | p. 92 |
| Ability to Express Generalisations | p. 93 |
| Mapping Policies and Mapping Value Functions | p. 93 |
| Adapting the Accuracy Criterion | p. 94 |
| XCS-hard and SB-XCS-easy Functions | p. 95 |
| Summary of Generalisation and Efficiency | p. 95 |
| Summary | p. 96 |
| What Should a Classifier System Learn? | p. 97 |
| Representing Boolean Functions | p. 99 |
| Truth Tables | p. 99 |
| On-sets and Off-sets | p. 99 |
| Sigma Notation | p. 100 |
| Disjunctive Normal Form | p. 100 |
| Representing Functions with Sets of Rules | p. 100 |
| How Should a Classifier System Represent a Solution? | p. 101 |
| The Value of a Single Rule | p. 102 |
| The Value of a Set of Rules | p. 103 |
| Complete and Correct Representations | p. 103 |
| Minimal Representations | p. 105 |
| Non-overlapping Representations | p. 107 |
| Why XCS Prefers Non-overlapping Populations | p. 108 |
| Should we Prefer Non-overlapping Populations? | p. 109 |
| Optimal Rule Sets: [O]s | p. 110 |
| Conflicting Rules | p. 111 |
| Representation in XCS | p. 111 |
| How Should We Measure Performance? | p. 112 |
| Measures of Performance | p. 112 |
| Measures of Population State | p. 113 |
| Measuring Performance and Measuring State | p. 114 |
| New Population State Metrics | p. 117 |
| Testing XCS with %[PI] | p. 118 |
| Testing XCS with %[m-DNF] | p. 120 |
| Summary of Metrics and Properties | p. 121 |
| Summary | p. 122 |
| Prospects for Adaptation | p. 125 |
| Known Problems with Strength LCS | p. 127 |
| Methodology for Rule Type Analysis | p. 128 |
| Analysis of Rule Types | p. 130 |
| Correct and Incorrect Actions | p. 130 |
| Overgeneral Rules | p. 131 |
| Strong Overgeneral Rules | p. 136 |
| Fit Overgeneral Rules | p. 137 |
| Parallel Definitions of Strength and Fitness | p. 138 |
| When are Strong and Fit Overgenerals Possible? | p. 139 |
| Biases in the Reward Function are Relevant | p. 140 |
| Competition for Action Selection | p. 140 |
| Competition for Reproduction | p. 142 |
| Strong Overgenerals in XCS | p. 142 |
| Biases between Actions do not Produce Strong Overgenerals | p. 144 |
| Some Properties of Accuracy-based Fitness | p. 144 |
| Strong Overgenerals in SB-XCS | p. 146 |
| When are Strong Overgenerals Impossible in SB-XCS? | p. 148 |
| What Makes Strong Overgenerals Possible in SB-XCS? | p. 148 |
| Fit Overgenerals and the Survival of Rules under the GA | p. 150 |
| Comparison on an Unbiased Reward Function | p. 150 |
| Comparison on a Biased Reward Function | p. 150 |
| Discussion | p. 151 |
| Designing Strong and Fit Overgenerals for XCS | p. 152 |
| Biased Variance Functions | p. 153 |
| Empirical Results | p. 153 |
| Avoiding Fit Overgenerals | p. 155 |
| SB-XCS and Biased Variance Functions | p. 156 |
| Strong and Fit Undergeneral Rules | p. 156 |
| Why Bias the Reward Function? | p. 157 |
| Some State-actions are more Important than Others | p. 158 |
| A Rule Allocation Bias can Focus Resources | p. 158 |
| Rule Allocation Reconsidered | p. 159 |
| Knowing What Not to Do | p. 159 |
| Managing Exploration | p. 160 |
| Complete and Partial Maps Revisited | p. 161 |
| Alternatives to Biasing the Reward Function | p. 161 |
| Can SB-XCS Avoid Strong and Fit Overgenerals? | p. 162 |
| Sequential Tasks | p. 162 |
| The Need to Pass Values Back | p. 163 |
| The Need for Discounting | p. 164 |
| How Q-functions become Biased | p. 165 |
| Examples | p. 165 |
| Woods2 Revisited | p. 166 |
| When Will the Value Function be Unbiased? | p. 173 |
| What Tasks can we Solve with SB-XCS? | p. 174 |
| Extensions | p. 175 |
| Fitness Sharing | p. 175 |
| Other Factors Contributing to Strong Overgenerals | p. 176 |
| Qualitative and Quantitative Approaches | p. 177 |
| Summary | p. 178 |
| Classifier Systems and Q-learning | p. 179 |
| Classifier Systems and Q-learning | p. 180 |
| Q-learning in Classifier Systems | p. 180 |
| Is it Really Q-learning? | p. 181 |
| XCS is a Proper Generalisation of Tabular Q-learning | p. 182 |
| Summary | p. 182 |
| The GA-view and RL-view Revisited | p. 183 |
| How SB-XCS Determines Policies | p. 183 |
| How XCS Determines Policies | p. 184 |
| Three Approaches to Determining a Policy | p. 185 |
| The GA-view and the RL-view | p. 185 |
| Combining Evolution and Q-learning | p. 186 |
| XCS is Closer to Tabular Q-learning than to SB-XCS | p. 188 |
| Summary | p. 189 |
| Conclusion | p. 191 |
| The Capacities of Various Types of LCS | p. 191 |
| Contributions | p. 192 |
| The Take-home Message | p. 195 |
| Open Problems and Future Work | p. 197 |
| Fitness Sharing and Strength-based Fitness | p. 197 |
| Further Study of Accuracy-based Fitness | p. 197 |
| Concluding Remarks | p. 198 |
| The Moral of the Story: The Need for a Complex Systems Design Methodology | p. 198 |
| Classifier Systems and Reinforcement Learning | p. 199 |
| The Future | p. 200 |
| Appendices | |
| Evaluation of Macroclassifiers | p. 201 |
| Example XCS Cycle | p. 203 |
| The Performance System Algorithm | p. 204 |
| The Credit Assignment Algorithm | p. 206 |
| The Rule Discovery Algorithm | p. 209 |
| Learning from Reinforcement | p. 213 |
| Three Learning Paradigms | p. 214 |
| Supervised Learning | p. 214 |
| Reinforcement Learning | p. 215 |
| Unsupervised Learning | p. 216 |
| The Explore/Exploit Dilemma: a Feature of RL | p. 216 |
| Sequential and Non-sequential Tasks | p. 218 |
| Immediate Reward and Long-term Value | p. 219 |
| Sequential Decisions Imply RL | p. 219 |
| Episodic and Continuing Tasks | p. 220 |
| The Agent's Goal: Maximising Return | p. 220 |
| Return and Reward | p. 220 |
| Sequential Formulations of Return | p. 221 |
| Formalising RL Tasks | p. 222 |
| Environment | p. 222 |
| Learning Agent | p. 223 |
| Agent-environment Interaction | p. 224 |
| Summary | p. 225 |
| Generalisation Problems | p. 227 |
| Why Generalise? | p. 228 |
| The Curse of Dimensionality | p. 228 |
| The Need for Generalisation | p. 228 |
| Generalisation in RL | p. 229 |
| Generalising Over Policies and Value Functions | p. 229 |
| State Aggregation | p. 230 |
| State Space and Generalisation Space | p. 230 |
| Summary | p. 230 |
| Value Estimation Algorithms | p. 233 |
| The Value of State-actions | p. 234 |
| Non-sequential RL: Estimating Reward Functions | p. 235 |
| The Value of State-actions in Non-sequential Tasks | p. 235 |
| Estimating Expectations with Sample Averages | p. 235 |
| Incremental Updates | p. 236 |
| A General Form of Incremental Update | p. 237 |
| Setting StepSize in Incremental Updates | p. 237 |
| A Prediction Algorithm for Non-sequential RL | p. 238 |
| Sequential RL: Estimating Long-term Value Functions | p. 238 |
| Long-term Value Functions | p. 239 |
| The Value of State-actions in Sequential Tasks | p. 241 |
| The Value of a Policy | p. 241 |
| Estimating Values with Monte Carlo Methods | p. 242 |
| Estimating Values with Temporal Difference Methods | p. 243 |
| Russell and Norvig's Maze: A Sequential RL Task | p. 245 |
| Summary of Sequential RL | p. 246 |
| State Aggregation | p. 246 |
| Fixed and Adaptive Aggregation Schemes | p. 246 |
| The Value of Aggregations I: Return | p. 247 |
| The Value of Aggregations II: Predictive Utility | p. 248 |
| Storing Value Estimates | p. 249 |
| Storing Estimates of Aggregations | p. 249 |
| Sparse Estimators, Models and Search | p. 250 |
| Function Approximators | p. 250 |
| Summary | p. 250 |
| Generalised Policy Iteration Algorithms | p. 251 |
| Policy Improvement | p. 252 |
| Optimal Policies | p. 253 |
| Generalised Policy Iteration | p. 253 |
| How Well must we Evaluate a Policy? | p. 254 |
| Convergence Properties of GPI Control Algorithms | p. 255 |
| Initialising Value Functions | p. 255 |
| What Characterises GPI Algorithms? | p. 255 |
| State-value Functions | p. 255 |
| Summary | p. 256 |
| Evolutionary Algorithms | p. 257 |
| Evolution | p. 258 |
| Elements of EAs | p. 260 |
| A Generic EA | p. 260 |
| Population-based Search | p. 261 |
| Fitness Functions | p. 261 |
| Probabilistic Selection of Parents | p. 261 |
| Genetic Operators | p. 262 |
| Replacement | p. 263 |
| EAs as Search | p. 263 |
| Local and Global Optima | p. 264 |
| The Generalisation Problem | p. 264 |
| Niching and Mating Restriction | p. 265 |
| Fitness Sharing | p. 266 |
| Crowding | p. 267 |
| Mating Restriction | p. 267 |
| RL with EAs | p. 267 |
| Non-associative RL with an EA | p. 268 |
| Associative RL with an EA | p. 269 |
| Sequential RL with an EA | p. 271 |
| Comparing GPI and EA Methods for RL | p. 272 |
| Similarities between GPI and EA Methods | p. 272 |
| Summary | p. 273 |
| The Origins of Sarsa | p. 275 |
| Modified Connectionist Q-learning | p. 276 |
| ZCS's Implicit Bucket Brigade | p. 276 |
| Who Invented Sarsa? | p. 277 |
| Notation | p. 279 |
| References | p. 283 |
| Index | p. 303 |
Table of Contents provided by Publisher. All Rights Reserved.
ISBN: 9781852337704
ISBN-10: 1852337702
Series: Distinguished Dissertations
Published: 20th January 2004
Format: Hardcover
Language: English
Number of Pages: 328
Audience: Professional and Scholarly
Publisher: Springer Nature B.V.
Country of Publication: GB
Dimensions (cm): 24.13 x 15.24 x 2.54
Weight (kg): 0.64
Shipping
| Postcode type | Standard Shipping | Express Shipping |
|---|---|---|
| Metro postcodes | $9.99 | $14.95 |
| Regional postcodes | $9.99 | $14.95 |
| Rural postcodes | $9.99 | $14.95 |
Orders over $79.00 qualify for free shipping.
You Can Find This Book In
- Non-Fiction > Computing & I.T. > Computer Science > Mathematical Theory of Computation
- Non-Fiction > Computing & I.T. > Databases > Data Capture & Analysis
- Non-Fiction > Computing & I.T. > Computer Science > Artificial Intelligence > Machine Learning
- Non-Fiction > Computing & I.T. > Computer Programming & Software Development > Algorithms & Data Structures