Multiple Classifier Systems

First International Workshop, MCS 2000 Cagliari, Italy, June 21-23, 2000 Proceedings

By: Josef Kittler (Editor), Fabio Roli (Editor)

Paperback
Published: 14th June 2000
ISBN: 9783540677048
Number Of Pages: 408

Paperback

$158.39
or 4 easy payments of $39.60
Ships in 10 to 15 business days

Many theoretical and experimental studies have shown that a multiple classifier system is an effective technique for reducing prediction errors [9,10,11,20,19]. These studies identify mainly three elements that characterize a set of classifiers:

- The representation of the input (what each individual classifier receives by way of input).
- The architecture of the individual classifiers (algorithms and parametrization).
- The way to cause these classifiers to take a decision together.

It can be assumed that a combination method is efficient if each individual classifier makes errors 'in a different way', so that it can be expected that most of the classifiers can correct the mistakes that an individual one makes [1,19]. The term 'weak classifiers' refers to classifiers whose capacity has been reduced in some way so as to increase their prediction diversity. Either their internal architecture is simple (e.g., they use mono-layer perceptrons instead of more sophisticated neural networks), or they are prevented from using all the information available. Since each classifier sees different sections of the learning set, the error correlation among them is reduced. It has been shown that the majority vote is the best strategy if the errors among the classifiers are not correlated. Moreover, in real applications, the majority vote also appears to be as efficient as more sophisticated decision rules [2,13]. One method of generating a diverse set of classifiers is to perturb some aspect of the training input with respect to which the classifier is rather unstable. In the present paper, we study two distinct ways to create such weakened classifiers, i.e. learning set resampling (using the 'Bagging' approach [5]) and random feature subset selection (using 'MFS', a Multiple Feature Subsets approach [3]). Other recent and similar techniques are not discussed here but are also based on modifications to the training and/or the feature set [7,8,12,21].
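The two weakening strategies named in the abstract are straightforward to prototype. The sketch below is a hypothetical illustration, not code from the workshop papers: it assumes scikit-learn and NumPy are available, uses decision trees as the unstable base learner, builds one ensemble from bootstrap resamples of the learning set (Bagging [5]) and one from random halves of the feature set (in the spirit of MFS [3]), and combines each ensemble by the plain majority vote discussed above.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy data standing in for a real learning set (an assumption of this sketch).
rng = np.random.default_rng(0)
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
n, d = X_tr.shape

def majority_vote(preds):
    # preds: (n_classifiers, n_samples) integer labels.
    # Each test sample receives the label most of the classifiers voted for.
    return np.apply_along_axis(lambda votes: np.bincount(votes).argmax(), 0, preds)

# Bagging: every tree trains on a bootstrap resample of the learning set,
# so each classifier sees a different section of the training data.
bag_preds = []
for i in range(25):
    idx = rng.integers(0, n, size=n)  # draw n examples with replacement
    tree = DecisionTreeClassifier(random_state=i).fit(X_tr[idx], y_tr[idx])
    bag_preds.append(tree.predict(X_te))

# MFS-style random feature subsets: every tree sees all training examples
# but only a random half of the features.
mfs_preds = []
for i in range(25):
    feats = rng.choice(d, size=d // 2, replace=False)
    tree = DecisionTreeClassifier(random_state=i).fit(X_tr[:, feats], y_tr)
    mfs_preds.append(tree.predict(X_te[:, feats]))

for name, preds in (("bagging", bag_preds), ("feature subsets", mfs_preds)):
    acc = (majority_vote(np.array(preds)) == y_te).mean()
    print(f"{name}: majority-vote test accuracy = {acc:.3f}")
```

Decision trees suit this role because small perturbations of the training sample or the feature set typically change the learned tree, which is exactly the kind of instability the combination methods exploit.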

Invited Papers
Ensemble Methods in Machine Learning, p. 1
Experiments with Classifier Combining Rules, p. 16
The 'Test and Select' Approach to Ensemble Combination, p. 30
A Survey of Sequential Combination of Word Recognizers in Handwritten Phrase Recognition at CEDAR, p. 45
Multiple Classifier Combination Methodologies for Different Output Levels, p. 52
Theoretical Issues
A Mathematically Rigorous Foundation for Supervised Learning, p. 67
Classifier Combinations: Implementations and Theoretical Issues, p. 77
Some Results on Weakly Accurate Base Learners for Boosting Regression and Classification, p. 87
Multiple Classifier Fusion
Complexity of Classification Problems and Comparative Advantages of Combined Classifiers, p. 97
Effectiveness of Error Correcting Output Codes in Multiclass Learning Problems, p. 107
Combining Fisher Linear Discriminants for Dissimilarity Representation, p. 117
A Learning Method of Feature Selection for Rough Classification, p. 127
Analysis of a Fusion Method for Combining Marginal Classifiers, p. 137
A Hybrid Projection Based and Radial Basis Function Architecture, p. 147
Combining Multiple Classifiers in Probabilistic Neural Networks, p. 157
Supervised Classifier Combination Through Generalized Additive Multi-model, p. 167
Dynamic Classifier Selection, p. 177
Bagging and Boosting
Boosting in Linear Discriminant Analysis, p. 190
Different Ways of Weakening Decision Trees and Their Impact on Classification Accuracy of DT Combination, p. 200
Applying Boosting to Similarity Literals for Time Series Classification, p. 210
Boosting of Tree-Based Classifiers for Predictive Risk Modeling in GIS, p. 220
Design of Multiple Classifier Systems
A New Evaluation Method for Expert Combination in Multi-expert System Designing, p. 230
Diversity Between Neural Networks and Decision Trees for Building Multiple Classifier Systems, p. 240
Self-Organizing Decomposition of Functions in the Context of a Unified Framework for Multiple Classifier Systems, p. 250
Classifier Instability and Partitioning, p. 260
Applications of Multiple Classifier Systems
Remote-Sensing Data Analysis
A Hierarchical Multiclassifier System for Hyperspectral Data Analysis, p. 270
Consensus Based Classification of Multisource Remote Sensing Data, p. 280
Combining Parametric and Nonparametric Classifiers for an Unsupervised Updating of Land-Cover Maps, p. 290
A Multiple Self-Organizing Map Scheme for Remote Sensing Classification, p. 300
Document Analysis
Use of Lexicon Density in Evaluating Word Recognizers, p. 310
A Multi-expert System for Dynamic Signature Verification, p. 320
A Cascaded Multiple Expert System for Verification, p. 330
Architecture for Classifier Combination Using Entropy Measures, p. 340
Miscellaneous Applications
Combining Fingerprint Classifiers, p. 351
Statistical Sensor Calibration for Fusion of Different Classifiers in a Biometric Person Recognition Framework, p. 362
A Modular Neuro-fuzzy Network for Musical Instruments Classification, p. 372
Classifier Combination for Grammar-Guided Sentence Recognition, p. 383
Shape Matching and Extraction by an Array of Figure-and-Ground Classifiers, p. 393
Author Index, p. 403
Table of Contents provided by Publisher. All Rights Reserved.

ISBN: 9783540677048
ISBN-10: 3540677046
Series: Lecture Notes in Computer Science
Audience: General
Format: Paperback
Language: English
Number Of Pages: 408
Published: 14th June 2000
Publisher: Springer-Verlag Berlin and Heidelberg GmbH & Co. KG
Country of Publication: DE
Dimensions (cm): 23.39 x 15.6 x 2.18
Weight (kg): 0.59
