| Foreword | p. ix |
| Preface | p. x |
| The framework of learning | p. 1 |
| Introduction | p. 1 |
| A formal setting | p. 5 |
| Hypothesis spaces and target functions | p. 9 |
| Sample, approximation, and generalization errors | p. 11 |
| The bias-variance problem | p. 13 |
| The remainder of this book | p. 14 |
| References and additional remarks | p. 15 |
| Basic hypothesis spaces | p. 17 |
| First examples of hypothesis space | p. 17 |
| Reminders I | p. 18 |
| Hypothesis spaces associated with Sobolev spaces | p. 21 |
| Reproducing Kernel Hilbert Spaces | p. 22 |
| Some Mercer kernels | p. 24 |
| Hypothesis spaces associated with an RKHS | p. 31 |
| Reminders II | p. 33 |
| On the computation of empirical target functions | p. 34 |
| References and additional remarks | p. 35 |
| Estimating the sample error | p. 37 |
| Exponential inequalities in probability | p. 37 |
| Uniform estimates on the defect | p. 43 |
| Estimating the sample error | p. 44 |
| Convex hypothesis spaces | p. 46 |
| References and additional remarks | p. 49 |
| Polynomial decay of the approximation error | p. 54 |
| Reminders III | p. 55 |
| Operators defined by a kernel | p. 56 |
| Mercer's theorem | p. 59 |
| RKHSs revisited | p. 61 |
| Characterizing the approximation error in RKHSs | p. 63 |
| An example | p. 68 |
| References and additional remarks | p. 69 |
| Estimating covering numbers | p. 72 |
| Reminders IV | p. 73 |
| Covering numbers for Sobolev smooth kernels | p. 76 |
| Covering numbers for analytic kernels | p. 83 |
| Lower bounds for covering numbers | p. 101 |
| On the smoothness of box spline kernels | p. 106 |
| References and additional remarks | p. 108 |
| Logarithmic decay of the approximation error | p. 109 |
| Polynomial decay of the approximation error for C∞ kernels | p. 110 |
| Measuring the regularity of the kernel | p. 112 |
| Estimating the approximation error in RKHSs | p. 117 |
| Proof of Theorem 6.1 | p. 125 |
| References and additional remarks | p. 125 |
| On the bias-variance problem | p. 127 |
| A useful lemma | p. 128 |
| Proof of Theorem 7.1 | p. 129 |
| A concrete example of bias-variance | p. 132 |
| References and additional remarks | p. 133 |
| Least squares regularization | p. 134 |
| Bounds for the regularized error | p. 135 |
| On the existence of target functions | p. 139 |
| A first estimate for the excess generalization error | p. 140 |
| Proof of Theorem 8.1 | p. 148 |
| Reminders V | p. 151 |
| Compactness and regularization | p. 151 |
| References and additional remarks | p. 155 |
| Support vector machines for classification | p. 157 |
| Binary classifiers | p. 159 |
| Regularized classifiers | p. 161 |
| Optimal hyperplanes: the separable case | p. 166 |
| Support vector machines | p. 169 |
| Optimal hyperplanes: the nonseparable case | p. 171 |
| Error analysis for separable measures | p. 173 |
| Weakly separable measures | p. 182 |
| References and additional remarks | p. 185 |
| General regularized classifiers | p. 187 |
| Bounding the misclassification error in terms of the generalization error | p. 189 |
| Projection and error decomposition | p. 194 |
| Bounds for the regularized error D(γ, π) of f_γ | p. 196 |
| Bounds for the sample error term involving f_γ | p. 198 |
| Bounds for the sample error term involving f^π_{z,γ} | p. 201 |
| Stronger error bounds | p. 204 |
| Improving learning rates by imposing noise conditions | p. 210 |
| References and additional remarks | p. 211 |
| References | p. 214 |
| Index | |