
Neural Networks for Conditional Probability Estimation

Forecasting Beyond Point Predictions

By: Dirk Husmeier

eText | 6 December 2012

At a Glance

eText | $84.99

Conventional applications of neural networks usually predict a single value as a function of given inputs. In forecasting, for example, a standard objective is to predict the future value of some entity of interest on the basis of a time series of past measurements or observations. Typical training schemes aim to minimise the sum of squared deviations between predicted and actual values (the 'targets'), by which, ideally, the network learns the conditional mean of the target given the input. If the underlying conditional distribution is Gaussian, or at least unimodal, this may be a satisfactory approach. However, for a multimodal distribution the conditional mean does not capture the relevant features of the system, and the prediction performance will, in general, be very poor. This calls for a more powerful and sophisticated model which can learn the whole conditional probability distribution.

Chapter 1 demonstrates that even for a deterministic system and 'benign' Gaussian observational noise, the conditional distribution of a future observation, conditional on a set of past observations, can become strongly skewed and multimodal. In Chapter 2, a general neural network structure for modelling conditional probability densities is derived, and it is shown that a universal approximator for this extended task requires at least two hidden layers. A training scheme is developed from a maximum likelihood approach in Chapter 3, and the performance of this method is demonstrated on three stochastic time series in Chapters 4 and 5.
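To make the idea in the blurb concrete: one standard way to realise a network that outputs a whole conditional density, trained by maximum likelihood, is a mixture density network. The sketch below illustrates that general idea only; it is not the architecture derived in the book. It assumes PyTorch, and every name in it (MixtureDensityNetwork, neg_log_likelihood, the layer sizes) is a hypothetical choice for illustration.

```python
import torch
import torch.nn as nn


class MixtureDensityNetwork(nn.Module):
    """Maps an input x to the parameters of a Gaussian mixture over the
    target y, so the network models the whole conditional density p(y|x)
    rather than just its conditional mean. Illustrative sketch only."""

    def __init__(self, in_dim: int, hidden: int, n_components: int):
        super().__init__()
        # Two hidden layers, echoing the result quoted above that a
        # universal approximator for this task needs at least two.
        self.body = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.logits = nn.Linear(hidden, n_components)     # mixing coefficients
        self.means = nn.Linear(hidden, n_components)      # component means
        self.log_sigma = nn.Linear(hidden, n_components)  # log std deviations

    def neg_log_likelihood(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        """Maximum-likelihood training criterion: -mean log p(y|x)."""
        h = self.body(x)
        log_pi = torch.log_softmax(self.logits(h), dim=-1)
        mu = self.means(h)
        sigma = torch.exp(self.log_sigma(h))  # keeps std deviations positive
        comp = torch.distributions.Normal(mu, sigma)
        # log p(y|x) = logsumexp_k [ log pi_k(x) + log N(y; mu_k(x), sigma_k(x)) ]
        log_prob = torch.logsumexp(log_pi + comp.log_prob(y.unsqueeze(-1)), dim=-1)
        return -log_prob.mean()


# Hypothetical usage with random data: x has shape (batch, in_dim),
# y has shape (batch,).
model = MixtureDensityNetwork(in_dim=5, hidden=32, n_components=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(64, 5), torch.randn(64)
loss = model.neg_log_likelihood(x, y)
opt.zero_grad()
loss.backward()
opt.step()
```

Note that with a single mixture component and a fixed variance, minimising this negative log-likelihood reduces to the sum-of-squares training described above, which is precisely why that scheme learns only the conditional mean.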

More in Computer Vision

Mastering Invideo AI - Robert Keyser (eBook)
Mastering Chat GPT - Robert Keyser (eBook)
Everyday AI - Guenter Schamel (eBook, $7.99)
A I(M) Here to Stay - Jonathon Wetzel (eBook)