Change of Representation and Inductive Bias

One of the most important emerging concerns of machine learning researchers is the dependence of their learning programs on the underlying representations, especially on the languages used to describe hypotheses. The effectiveness of learning algorithms is very sensitive to this choice of language: choosing too large a language permits too many possible hypotheses for a program to consider, precluding effective learning, while choosing too small a language can prevent a program from finding acceptable hypotheses at all. This dependence is not just a pitfall, however; it is also an opportunity. The work of Saul Amarel over the past two decades has demonstrated the effectiveness of representational shift as a problem-solving technique. An increasing number of machine learning researchers are building programs that learn to alter their language to improve their effectiveness.
At the Fourth Machine Learning Workshop, held in June 1987 at the University of California at Irvine, it became clear that both the machine learning community and the number of topics it addresses had grown so large that the representation issue could not be discussed in sufficient depth. A number of attendees were particularly interested in the related topics of constructive induction, problem reformulation, representation selection, and multiple levels of abstraction. Rob Holte, Larry Rendell, and I decided to hold a workshop in 1988 to discuss these topics. To keep this workshop small, we decided that participation would be by invitation only.
Contents:
- Decompiling Problem-Solving Experience to Elucidate Representational Distinctions
- Improving Problem Solving Performance by Example Guided Reformulation of Knowledge
- STRATA: Problem Reformulation and Abstract Data Types
- Abstracting First-Order Theories
- A Theory of Abstraction for Hierarchical Planning
- Automating Problem Reformulation
- An Introduction to the Decomposition of Task Representations in Autonomous Systems
- A Theory of Justified Reformulations
- Representation Engineering and Category Theory
- Similarities in Problem Solving Strategies
- Constraint Incorporation and the Structure Mismatch Problem
- Knowledge as Bias
- Efficient Candidate Elimination Through Test Incorporation
- Comparing Instance-Averaging with Instance-Saving Learning Algorithms
- A Logical Model of Machine Learning: A Study of Vague Predicates
- Declarative Bias: An Overview
- Semantic Equivalence in Concept Discovery
- Feature Construction for Concept Learning
Series: The Kluwer International Series in Engineering and Computer Science
Number Of Pages: 356
Published: 31st December 1989
Publisher: SPRINGER VERLAG GMBH
Country of Publication: NL
Dimensions (cm): 23.39 x 15.6
Weight (kg): 0.69