| An introduction to modelling and learning algorithms | p. 1 |
| Introduction to modelling | p. 1 |
| Modelling, control and learning algorithms | p. 7 |
| The learning problem | p. 9 |
| Book philosophy and contents overview | p. 13 |
| Book overview | p. 13 |
| A historical perspective of adaptive modelling and control | p. 21 |
| Basic concepts of data-based modelling | p. 25 |
| Introduction | p. 25 |
| State-space models versus input-output models | p. 26 |
| Conversion of state-space models to input-output models | p. 26 |
| Conversion of input-output models to state-space models | p. 28 |
| Nonlinear modelling by basis function expansion | p. 29 |
| Model parameter estimation | p. 31 |
| Model quality | p. 33 |
| The bias-variance dilemma | p. 33 |
| Bias-variance balance by model structure regularisation | p. 34 |
| Reproducing kernels and regularisation networks | p. 39 |
| Model selection methods | p. 42 |
| Model selection criteria | p. 43 |
| Model selection criteria sensitivity | p. 44 |
| Correlation tests | p. 45 |
| An example: time series modelling | p. 49 |
| Learning laws for linear-in-the-parameters networks | p. 53 |
| Introduction to learning | p. 53 |
| Error or performance surfaces | p. 55 |
| Batch learning laws | p. 58 |
| General learning laws | p. 58 |
| Gradient descent algorithms | p. 59 |
| Instantaneous learning laws | p. 61 |
| Least mean squares learning | p. 61 |
| Normalised least mean squares learning | p. 62 |
| NLMS weight convergence | p. 63 |
| Recursive least squares estimation | p. 67 |
| Gradient noise and normalised condition numbers | p. 68 |
| Adaptive learning rates | p. 70 |
| Fuzzy and neurofuzzy modelling | p. 71 |
| Introduction to fuzzy and neurofuzzy systems | p. 71 |
| Fuzzy systems | p. 74 |
| Fuzzy sets | p. 75 |
| Fuzzy operators | p. 83 |
| Fuzzy relation surfaces | p. 87 |
| Inferencing | p. 88 |
| Fuzzification and defuzzification | p. 89 |
| Functional mapping and neurofuzzy models | p. 91 |
| Takagi-Sugeno local neurofuzzy model | p. 95 |
| Neurofuzzy modelling examples | p. 97 |
| Thermistor modelling | p. 97 |
| Time series modelling | p. 99 |
| Parsimonious neurofuzzy modelling | p. 103 |
| Iterative construction modelling | p. 103 |
| Additive neurofuzzy modelling algorithms | p. 106 |
| Adaptive spline modelling algorithm (ASMOD) | p. 107 |
| ASMOD refinements | p. 107 |
| Illustrative examples of ASMOD | p. 111 |
| Extended additive neurofuzzy models | p. 119 |
| Weight identification | p. 122 |
| Extended additive model structure identification | p. 124 |
| Hierarchical neurofuzzy models | p. 125 |
| Regularised neurofuzzy models | p. 129 |
| Bayesian regularisation | p. 129 |
| Error bars | p. 132 |
| Priors for neurofuzzy models | p. 133 |
| Local regularised neurofuzzy models | p. 136 |
| Complexity reduction through orthogonal least squares | p. 143 |
| A-optimality neurofuzzy model construction (Neudec) | p. 144 |
| Local neurofuzzy modelling | p. 153 |
| Introduction | p. 153 |
| Local orthogonal partitioning algorithms | p. 157 |
| k-d Trees | p. 157 |
| Quad-trees | p. 161 |
| Operating point dependent neurofuzzy models | p. 164 |
| State space representations of operating point dependent neurofuzzy models | p. 168 |
| Mixture of experts modelling | p. 173 |
| Multi-input-Multi-output (MIMO) modelling via input variable selection | p. 187 |
| MIMO NARX neurofuzzy model decomposition | p. 187 |
| Feedforward Gram-Schmidt OLS procedure for linear systems | p. 191 |
| Input variable selection via the modified Gram-Schmidt OLS for piecewise linear submodels | p. 193 |
| Delaunay input space partitioning modelling | p. 201 |
| Introduction | p. 201 |
| Delaunay triangulation of the input space | p. 202 |
| Delaunay input space partitioning for locally linear models | p. 204 |
| The Bézier-Bernstein modelling network | p. 209 |
| Neurofuzzy modelling using Bézier-Bernstein functions for univariate terms f_i(x_i) and bivariate terms f_{i1,j1}(x_{i1}, x_{j1}) | p. 209 |
| The complete Bézier-Bernstein model construction algorithm | p. 219 |
| Numerical examples | p. 220 |
| Neurofuzzy linearisation modelling for nonlinear state estimation | p. 225 |
| Introduction to linearisation modelling | p. 225 |
| Neurofuzzy local linearisation and the MASMOD algorithm | p. 228 |
| A hybrid learning scheme combining MASMOD and EM algorithms for neurofuzzy local linearisation | p. 236 |
| Neurofuzzy feedback linearisation (NFFL) | p. 239 |
| Formulation of neurofuzzy state estimators | p. 245 |
| An example of nonlinear trajectory estimation | p. 249 |
| Multisensor data fusion using Kalman filters based on neurofuzzy linearisation | p. 255 |
| Introduction | p. 255 |
| Measurement fusion | p. 258 |
| Outputs augmented fusion (OAF) | p. 259 |
| Optimal weighting measurement fusion (OWMF) | p. 259 |
| On functional equivalence of OAF and OWMF | p. 260 |
| On the decentralised architecture | p. 262 |
| State-vector fusion | p. 263 |
| State-vector assimilation fusion (SVAF) | p. 263 |
| Track-to-track fusion (TTF) | p. 264 |
| On the decentralised architecture | p. 265 |
| Hierarchical multisensor data fusion - trade-off between centralised and decentralised architectures | p. 266 |
| Simulation examples | p. 267 |
| On functional equivalence of two measurement fusion methods | p. 267 |
| On hierarchical multisensor data fusion | p. 271 |
| Support vector neurofuzzy models | p. 281 |
| Introduction | p. 281 |
| Support vector machines | p. 282 |
| Loss functions | p. 284 |
| Feature space and kernel functions | p. 284 |
| Support vector regression | p. 286 |
| Support vector neurofuzzy networks | p. 289 |
| SUPANOVA | p. 297 |
| A comparison among neural network models | p. 303 |
| Conclusions | p. 304 |
| References | p. 307 |
| Index | p. 319 |