12 Week 11 — Bayesian Time Series and State-Space Models
This week introduces Bayesian approaches to time series analysis and state-space modeling, which unify filtering, forecasting, and dynamic parameter estimation under a probabilistic framework.
We study both classical dynamic linear models (DLMs) and modern Bayesian filtering methods.
12.1 Learning Goals
By the end of this week, you should be able to:
Formulate Bayesian dynamic models for time series data.
Apply Bayesian updating and filtering for sequential data.
Implement simple state-space and autoregressive models in R.
Interpret uncertainty propagation over time.
12.2 Lecture 1 — Dynamic Linear Models (DLMs)
12.2.1 1.1 Motivation
Time series exhibit temporal dependence.
Dynamic models describe how latent states evolve over time and how observations depend on those states: \[
\text{State equation: } \theta_t = G_t \theta_{t-1} + \omega_t, \quad \omega_t \sim N(0,W_t),
\]\[
\text{Observation equation: } y_t = F_t^\top \theta_t + \nu_t, \quad \nu_t \sim N(0,V_t).
\]
Here,
- \( \theta_t \): latent state vector,
- \( y_t \): observed data,
- \( G_t, F_t \): known system matrices,
- \( W_t, V_t \): process and observation noise covariances.
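As a concrete special case, the equations above can be sketched in R by simulating one path of a local level model (\( G_t = F_t = 1 \), scalar noise variances); the specific values of `n`, `W`, and `V` here are illustrative assumptions:

```r
set.seed(42)
n <- 100
W <- 0.2^2   # process noise variance (assumed)
V <- 0.5^2   # observation noise variance (assumed)

theta <- numeric(n)   # latent state
y     <- numeric(n)   # observations
theta[1] <- 0
y[1]     <- theta[1] + rnorm(1, 0, sqrt(V))
for (t in 2:n) {
  theta[t] <- theta[t-1] + rnorm(1, 0, sqrt(W))  # state equation
  y[t]     <- theta[t]   + rnorm(1, 0, sqrt(V))  # observation equation
}
```

The latent state is a random walk, and each observation is the current state plus measurement noise.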
12.2.2 1.2 Bayesian Updating
Given data up to time \( t-1 \), the prior for \( \theta_t \) is: \[
p(\theta_t \mid y_{1:t-1}) = N(a_t, R_t),
\] where \( a_t = G_t m_{t-1} \), \( R_t = G_t C_{t-1} G_t^\top + W_t \).
After observing \( y_t \): \[
p(\theta_t \mid y_{1:t}) = N(m_t, C_t),
\] where \( m_t = a_t + A_t (y_t - F_t^\top a_t) \), \( C_t = R_t - A_t F_t^\top R_t \),
and \( A_t = R_t F_t (F_t^\top R_t F_t + V_t)^{-1} \) is the Kalman gain.
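These prior and posterior recursions can be collected into a single matrix-form update step. The function below is a minimal sketch (the function name and argument layout are my own, not from a library):

```r
# One step of the Kalman filter in matrix form, following the recursions above.
# m_prev, C_prev: posterior moments at t-1; G, F_, W, V: system matrices.
kalman_step <- function(y_t, m_prev, C_prev, G, F_, W, V) {
  a <- G %*% m_prev                   # prior mean a_t
  R <- G %*% C_prev %*% t(G) + W      # prior covariance R_t
  Q <- t(F_) %*% R %*% F_ + V         # forecast variance F_t' R_t F_t + V_t
  A <- R %*% F_ %*% solve(Q)          # Kalman gain A_t
  m <- a + A %*% (y_t - t(F_) %*% a)  # posterior mean m_t
  C <- R - A %*% t(F_) %*% R          # posterior covariance C_t
  list(m = m, C = C)
}
```

For the scalar local level model, all arguments are 1×1 matrices, e.g. `kalman_step(1.2, matrix(0), matrix(1), matrix(1), matrix(1), matrix(0.04), matrix(0.25))`.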
We estimate the evolving state mean \( m_t \) recursively:
```r
# Scalar Kalman filter for the local level model (G = F = 1).
# y: observed series of length n, simulated earlier.
m <- numeric(n); C <- numeric(n)
m[1] <- 0; C[1] <- 1
V <- 0.5^2; W <- 0.2^2
for (t in 2:n) {
  a <- m[t-1]                 # prior mean a_t
  R <- C[t-1] + W             # prior variance R_t
  A <- R / (R + V)            # Kalman gain A_t
  m[t] <- a + A * (y[t] - a)  # posterior mean m_t
  C[t] <- (1 - A) * R         # posterior variance C_t
}
plot.ts(cbind(y, m), plot.type = "single", col = c("black", "red"), lwd = 2,
        main = "Kalman Filter Estimate of Latent State", ylab = "")
legend("topleft", legend = c("Observed y", "Filtered mean m_t"),
       col = c("black", "red"), lwd = 2, bty = "n")
```
The red line tracks the filtered latent process inferred from the noisy observations.
12.2.5 1.5 Forecasting and Uncertainty
Predictive distribution for the next observation: \[
y_{t+1} \mid y_{1:t} \sim N(F_{t+1}^\top a_{t+1}, F_{t+1}^\top R_{t+1} F_{t+1} + V_{t+1}).
\]
Forecast variance increases as the state uncertainty grows over time.
This framework provides:
- Probabilistic forecasting with credible intervals.
- Online updating suitable for real-time applications.
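For the local level model, the predictive distribution above reduces to a few lines of R. A minimal sketch (the filtered moments `m_t`, `C_t` and variances `W`, `V` below are illustrative assumed values):

```r
# One-step-ahead forecast for a local level model (G = F = 1).
m_t <- 0.9; C_t <- 0.2   # current filtered moments (assumed)
W <- 0.2^2; V <- 0.5^2   # process and observation variances (assumed)

a_next <- m_t             # forecast mean: G m_t = m_t
R_next <- C_t + W         # prior state variance R_{t+1}
Q_next <- R_next + V      # predictive variance F' R F + V
ci <- a_next + c(-1, 1) * 1.96 * sqrt(Q_next)  # 95% credible interval
```

Repeating the first two lines of the recursion without new data (k-step-ahead) adds `W` each step, which is why forecast intervals widen with the horizon.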
12.3 Lecture 2 — State-Space Models and Bayesian Filtering
12.3.1 2.1 General State-Space Models
General form: \[
x_t = f(x_{t-1}) + \omega_t, \qquad y_t = g(x_t) + \nu_t,
\] where \( f \) and \( g \) may be nonlinear and the noise terms non-Gaussian.
Examples: stochastic volatility, epidemic dynamics, tracking models.
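When \( f \) or \( g \) is nonlinear, the closed-form Kalman recursions no longer apply, and simulation-based filters are used instead. Below is a minimal bootstrap particle filter sketch; the model, particle count, and noise scales are all illustrative assumptions, not a prescribed method from the course:

```r
set.seed(1)
n <- 50; N <- 1000                # time steps and particle count (assumed)
f <- function(x) 0.9 * x          # state transition map (illustrative)
g <- function(x) x                # observation map (illustrative)
sw <- 0.3; sv <- 0.5              # process and observation noise sds (assumed)

# Simulate a trajectory and observations from the model
x <- numeric(n); y <- numeric(n)
x[1] <- rnorm(1); y[1] <- g(x[1]) + rnorm(1, 0, sv)
for (t in 2:n) {
  x[t] <- f(x[t-1]) + rnorm(1, 0, sw)
  y[t] <- g(x[t]) + rnorm(1, 0, sv)
}

# Bootstrap filter: propagate particles, weight by likelihood, resample
particles <- rnorm(N)
xhat <- numeric(n)                # filtered mean E[x_t | y_{1:t}]
for (t in 1:n) {
  if (t > 1) particles <- f(particles) + rnorm(N, 0, sw)
  w <- dnorm(y[t], mean = g(particles), sd = sv)   # likelihood weights
  w <- w / sum(w)
  xhat[t] <- sum(w * particles)
  particles <- sample(particles, N, replace = TRUE, prob = w)
}
```

Each iteration approximates the filtering distribution \( p(x_t \mid y_{1:t}) \) by a weighted cloud of particles; resampling keeps the cloud concentrated where the likelihood is high.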