
Model Predictive Control

Fundamentals and Practice

By: Jay H. Lee, Niket S. Kaisare, Carlos E. Garcia

Hardcover | 7 July 2026 | Edition Number 1


Contents

Acknowledgments

Preface

1 Introduction
1.1 What’s MPC?
1.2 Why MPC?
1.2.1 Economic Drivers of APC/MPC
1.2.2 Economic Advantages of MPC vs. Other Tools
1.3 Historical Overview
1.3.1 Early Computer Control
1.3.2 The Pioneers
1.3.3 Adoption Growth
1.4 Impact of MPC on Control Research
1.4.1 Early Theoretical Developments
1.4.2 State-Space Model Formulation and Stability Results
1.4.3 Other Theoretical Developments
1.4.4 Lessons Learned Along the MPC Journey
1.5 A Typical Industrial Control Problem
1.6 Organization of This Book
Exercises

2 Step Response Modeling and Identification
2.1 Linear Time Invariant Systems
2.2 Impulse / Step Response Models
2.2.1 Impulse Response Models
2.2.2 Step Response Models
2.3 Multi-Step Prediction
2.3.1 Recursive Multi-Step Prediction Based on FIR Model
2.3.2 Recursive Multi-Step Prediction Based on Step-Response Model
2.3.3 Multivariable Generalization
2.4 Examples
2.5 Identification
2.5.1 Settling Time
2.5.2 Sampling Time
2.5.3 Choice of the Input Signal for Experimental Identification
2.5.4 The Linear Least Squares Problem
2.5.5 Linear Least Squares Identification
Exercises

3 Dynamic Matrix Control – The Basic Algorithm
3.1 The Concept of Moving Horizon Control
3.2 Multi-Step Prediction
3.3 Objective Function
3.4 Constraints
3.4.1 Manipulated Variable Constraints
3.4.2 Manipulated Variable Rate Constraints
3.4.3 Output Variable Constraints
3.4.4 Combined Constraints
3.5 Quadratic Programming Solution of the Control Problem
3.5.1 Quadratic Programs
3.5.2 Formulation as a Quadratic Program
3.6 Implementation
3.6.1 Moving Horizon Algorithm
3.6.2 DMC Examples
3.6.3 Efficient Solutions to the QP
3.6.4 Proper Constraint Formulation
3.6.5 Choice of Horizon Length
3.6.6 Input Blocking
3.6.7 Filtering of the Feedback Signal
3.7 Examples: Analysis and Guidelines
3.7.1 Unconstrained SISO Systems
3.7.2 Constrained SISO Systems
3.7.3 MIMO System with Strong Gain Directionality
3.7.4 Constrained MIMO Systems
3.7.5 Conclusions and General Tuning Guidelines
3.8 Case Study: Control of “Shell Heavy Oil Fractionator” using Dynamic Matrix Control
3.8.1 Heavy Oil Fractionator: Background
3.8.2 Control Structure Description
Exercises

4 Dynamic Matrix Control – Extensions and Variations

4.1 Features Found in Other Industrial Algorithms
4.1.1 Reference Trajectories
4.1.2 Coincidence Points
4.1.3 The Funnel Approach
4.1.4 Use of Other Norms
4.1.5 Input Parameterization
4.1.6 Model Conditioning
4.1.7 Prioritization of CVs and MVs

4.2 Connection with Internal Model Control

4.3 Some Possible Enhancements to DMC
4.3.1 Closed-Loop Update of Model State
4.3.2 Integrating Dynamics
4.3.3 Noise Filter
4.3.4 Bi-Level Optimization
4.3.5 Product Property Estimation

Exercises

5 Linear Time Invariant System Models

5.1 Sampling and Reconstruction
5.1.1 Introduction to Digital Control
5.1.2 Sampling
5.1.3 Aliasing
5.1.4 Reconstruction

5.2 Introduction to z-transform

5.3 Transfer Function Models
5.3.1 Continuous Time
5.3.2 Discrete Time
5.3.3 Transfer Matrix
5.3.4 Converting Continuous Transfer Function to Discrete Transfer Function
5.3.5 Stability and Implications of Poles
5.3.6 Gain, Frequency Response

5.4 State-Space Model
5.4.1 Continuous Time
5.4.2 Discrete Time
5.4.3 Converting Continuous- to Discrete-Time System

5.5 Conversion Between Discrete-Time Models
5.5.1 Representing State-Space System as Transfer Function
5.5.2 Realization of Transfer Function as State-Space System
5.5.3 Impulse and Step Responses of State-Space System
5.5.4 Derivation of Transfer Matrix from Impulse Response
5.5.5 From Impulse / Step Response to State-Space Model

Exercises

6 Discrete-Time State Space Models

6.1 State-Coordinate Transformation

6.2 Stability
6.2.1 System Poles and Characteristic Equation
6.2.2 Stability
6.2.3 Lyapunov Equation

6.3 Controllability, Reachability, and Stabilizability
6.3.1 Definitions
6.3.2 Conditions for Reachability
6.3.3 Coordinate Transformation

6.4 Observability, Reconstructability, and Detectability
6.4.1 Definitions
6.4.2 Conditions for Observability
6.4.3 Coordinate Transformation

6.5 Kalman’s Decomposition and Minimal Realization
6.5.1 Kalman’s Decomposition
6.5.2 Minimal Realization

6.6 Disturbance Modeling
6.6.1 Linear Stochastic System Model for Stationary Processes
6.6.2 Stochastic System Models for Processes with Nonstationary Behavior
6.6.3 Models for Estimation and Control

Exercises

7 State Estimation

7.1 Linear Estimator Structure

7.2 Observer Pole Placement

7.3 Kalman Filter
7.3.1 Derivation of the Optimal Filter Gain Matrix
7.3.2 Correlated Noise Case
7.3.3 Stability of Kalman Filter

7.4 Extensions
7.4.1 Inferential Estimation
7.4.2 Non-stationary (Integrating) Noise
7.4.3 Time-Varying System
7.4.4 Periodically Time-Varying System
7.4.5 Measurement Delays

7.5 Least Squares Formulation of State Estimation
7.5.1 Batch Least Squares Formulation
7.5.2 Recursive Solution and Equivalence with Kalman Filter
7.5.3 Use of Moving Estimation Window

Exercises

8 Unconstrained Quadratic Optimal Control

8.1 Linear State Feedback Controller Design

8.2 Finite Horizon Quadratic Optimal Control
8.2.1 Open-Loop Optimal Solution via Least Squares
8.2.2 State Feedback Solution via Dynamic Programming
8.2.3 Comparison of the Two Approaches

8.3 Infinite Horizon Quadratic Optimal Control
8.3.1 Optimal State Feedback Law: Asymptotic Solution of the Finite Horizon Problem
8.3.2 Receding Horizon Implementation of the Finite Horizon Solution
8.3.3 Equivalence Between Finite and Infinite Horizon Problems

8.4 Analysis
8.4.1 State Feedback Case
8.4.2 Output Feedback Case
8.4.3 Setpoint Tracking and Disturbance Rejection

8.5 Stochastic LQ Control
8.5.1 Finite Horizon Problem
8.5.2 Output Feedback LQ Control

Exercises

9 Constrained Quadratic Optimal Control

9.1 Finite Horizon Problem

9.2 Infinite Horizon Problem
9.2.1 Options for Re-formulation as an Equivalent Finite-Horizon Problem
9.2.2 Comparison of Various Options

9.3 Constraint Softening

9.4 Derivation of an Explicit Form of the Optimal Control Law via Multi-Parametric Programming

9.5 Analysis
9.5.1 Stability Concepts and Lyapunov’s Direct Method
9.5.2 State Feedback Case
9.5.3 Output Feedback Case

9.6 Stochastic Case (*)

Exercises

10 System Identification

10.1 Problem Overview

10.2 Model Structures
10.2.1 Finite Impulse Response Model
10.2.2 Structures for Parametric Identification
10.2.3 Key Issues in Parametric Models

10.3 Parametric Identification Methods
10.3.1 Prediction Error Method
10.3.2 Properties of Linear Least Squares Identification
10.3.3 Persistency of Excitation
10.3.4 Frequency-Domain Bias Distribution Under PEM
10.3.5 Parameter Estimation via Statistical Methods (*)
10.3.6 Other Methods (*)

10.4 Nonparametric Identification
10.4.1 Impulse Response Identification
10.4.2 Frequency Response Identification (*)

10.5 Subspace Identification
10.5.1 The Basic Method
10.5.2 Analysis and Discussion

10.6 Practice of System Identification: A User’s Perspective
10.6.1 Experiment Design
10.6.2 PRBS Signals
10.6.3 Data Pre-Processing
10.6.4 Model Fitting and Validation
10.6.5 Model Quality Assessment and an Integrated Framework

Exercises

11 Linear MPC: State Space Formulation

11.1 Motivation

11.2 Model Construction
11.2.1 Model Structure for State-Space MPC
11.2.2 Stochastic System Model with Output Disturbance Only
11.2.3 Stochastic System Model with State and Output Disturbances
11.2.4 Summary

11.3 Deterministic State Space MPC
11.3.1 State Regulation Problem
11.3.2 Constraints
11.3.3 Offset-Free Output Tracking and Regulation Problem

11.4 MPC with State Estimation
11.4.1 State Estimation Using Kalman Filter
11.4.2 Control Calculation Using State Estimate
11.4.3 MPC with Output Disturbance Only
11.4.4 MPC with State Disturbance Model
11.4.5 MPC with Full Disturbance Model
11.4.6 Tracking a Setpoint Trajectory
11.4.7 Constraint Softening

11.5 Inferential Control
11.5.1 Problem Formulation
11.5.2 Infrequent Primary Measurements
11.5.3 Handling Measurement Delays in Primary Measurements

11.6 Sequential Linearization-Based MPC (for Nonlinear Systems)
11.6.1 Model Construction
11.6.2 Extended Kalman Filter
11.6.3 Multi-Step Prediction
11.6.4 Objective Function and Constraints
11.6.5 Implementation of Sequential Linearization-Based MPC

Exercises

12 Nonlinear MPC

12.1 Introduction

12.2 NMPC Formulation

12.3 Solution via Nonlinear Programming
12.3.1 Elements of NLP Formulations
12.3.2 Nonlinear Programming Solvers

12.4 Stability and Other Properties
12.4.1 Invariant Set and Output Admissible Set
12.4.2 Cost-To-Go and Terminal Penalty
12.4.3 Establishing Closed-Loop Stability
12.4.4 Implementation: Quasi-Infinite Horizon MPC

12.5 Nonlinear State Estimation
12.5.1 Extended Kalman Filter
12.5.2 Moving Horizon Estimation for Nonlinear State Estimation

12.6 Case Study

12.7 Conclusions and Future Directions

Exercises

13 Repetitive MPC for Batch and Periodic Systems

13.1 Introduction
13.1.1 Historical Background

13.2 General Framework
13.2.1 Problem Formulation
13.2.2 Limitations of Conventional Feedback Control for Periodic Processes

13.3 Iterative Learning Model Predictive Control for Batch Systems
13.3.1 Notations
13.3.2 “Run-To-Run” IL-MPC Method for an Unconstrained System
13.3.3 “Run-To-Run” IL-MPC Method for a Constrained System
13.3.4 Real-Time-Feedback IL-MPC Method for an Unconstrained System
13.3.5 Real-Time-Feedback IL-MPC Method for the Constrained System

13.4 Repetitive Model Predictive Control for Continuous Systems with Periodic Operations
13.4.1 Notations
13.4.2 “Run-To-Run” R-MPC Method for an Unconstrained System
13.4.3 “Run-To-Run” R-MPC Methods for the Constrained System
13.4.4 Real-Time-Feedback-Based R-MPC Methods for the Unconstrained System
13.4.5 Real-Time-Feedback-Based R-MPC Methods for the Constrained System

13.5 Future Outlook

Exercises

Appendix A: Review of Linear Transformation

A.1 Vector Space
A.1.1 Definition
A.1.2 Dimension of a Vector Space
A.1.3 Linear Independence of Vectors
A.1.4 Basis
A.1.5 Subspace
A.1.6 Union, Intersection, Independence, and Internal Sum
A.1.7 Change of Basis

A.2 Linear Operator
A.2.1 Definition
A.2.2 Matrix Representation
A.2.3 Change of Basis for Linear Operators
A.2.4 Null Space and Image Space
A.2.5 Inverse Operator
A.2.6 Injection, Surjection, Bijection
A.2.7 Inner Product Space
A.2.8 Orthogonal Vectors and Orthonormal Basis
A.2.9 Change of Basis to Orthonormal Basis
A.2.10 Orthogonal Matrix
A.2.11 Projection, Orthogonal Projection

A.3 Matrix Algebra
A.3.1 Eigenvalues, Eigenvectors
A.3.2 Computing Eigenvalues and Eigenvectors
A.3.3 Jordan Decomposition and Its Applications
A.3.4 Singular Value Decomposition
A.3.5 Cayley-Hamilton Theorem
A.3.6 Matrix Function
A.3.7 Vector Norms
A.3.8 Matrix Norm
A.3.9 Positive (Negative) Definiteness and Semi-Definiteness

A.4 Exercises

B.1 Random Variables

B.1.1 Introduction
B.1.2 Basic Probability Concepts
B.1.3 Statistics

B.2 Stochastic Process

B.2.1 Basic Probability Concepts

C Model Reduction

C.1 Model Reduction Problem
C.2 Hankel Matrix and Hankel Singular Values
C.3 Balanced Realization and Truncation
C.4 Application to FIR Models

D.1 Kalman Filter as the Bayesian Estimator for Gaussian Systems

D.2 Moving Horizon Estimation: Recursive Solution to the Unconstrained Linear Problem
D.2.1 Dynamic Programming and Arrival Cost
D.2.2 Recursive Calculation of the Arrival Cost and One-Step-Ahead Prediction
D.2.3 Equivalence with the Kalman Filter

D.3 Stochastic State Feedback Problems
D.3.1 Open-Loop Optimal Solution via Least Squares
D.3.2 Optimal Feedback Policy via Dynamic Programming
D.3.3 Open-Loop Optimal Feedback Control vs. Optimal Feedback Control

D.4 Stochastic Output Feedback Problems
D.4.1 Optimal Output Feedback Controller
D.4.2 Derivation via Dynamic Programming
D.4.3 Extension to the Infinite Horizon Case: LQG Controller and Separation Principle
D.4.4 Analysis

E.1 Discrete Time Systems

E.2 The IMC Loop Structure and Properties
E.2.1 Stability
E.2.2 The Perfect Controller
E.2.3 Zero Offset
E.2.4 Robustness
E.2.5 IMC Feedforward Compensator Design
E.2.6 Saturation Constraints
E.2.7 IMC Tuning Method for PID Controllers
E.2.8 Analysis Tools

Exercises

F MPC Toolbox Tutorial: Shell Oil Fractionator

F.1 Problem Description
F.1.1 Background
F.1.2 Model Definition
F.1.3 Overview of Our Approach

F.2 Tutorial: Using the MPC GUI
F.2.1 Simplified Problem Definition
F.2.2 Using the MPC GUI

F.3 Solving the Shell Oil Control Problem Using the MPC Toolbox
F.3.1 Comparison of Control Structures
F.3.2 Specifying Target for MV

G A Brief Tutorial on Simulink

G.1 A MIMO System Example
G.2 Simulink Model for a CSTR

