Bayesian inference and data assimilation
Professor Yoonsang Lee at Dartmouth College came to KAIST and gave two lectures on Bayesian inference and data assimilation. This post is a brief summary.
Unfortunately, I missed the first lecture because I had to attend a class as a teaching assistant.
The second lecture was a two-hour overview of Kalman filters. If we have a prediction model
\[ u_k = f(u_{k-1}) + \xi_k \]
and an observation model
\[ v_k = g(u_k) + \eta_k, \]
where \(\xi_k\) and \(\eta_k\) are noise terms, then Kalman filters correct the prediction \(u_k\) using the observation \(v_k\) via the Kalman update rule (essentially variance minimization, i.e., a weighted average of prediction and observation with weights given by their precisions).
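As a minimal sketch of one predict/update cycle in the linear Gaussian case (written in JAX, since it comes up again below; the matrices \(M\), \(H\) and noise covariances \(Q\), \(R\) are my notation, not the lecture's):

```python
import jax.numpy as jnp

def kf_predict(u, P, M, Q):
    """Propagate the state estimate u and its covariance P through the linear model."""
    u_pred = M @ u
    P_pred = M @ P @ M.T + Q
    return u_pred, P_pred

def kf_update(u_pred, P_pred, v, H, R):
    """Correct the prediction with the observation v via the Kalman gain
    (the precision-weighted average mentioned above)."""
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ jnp.linalg.inv(S)   # Kalman gain
    u_new = u_pred + K @ (v - H @ u_pred)
    P_new = (jnp.eye(P_pred.shape[0]) - K @ H) @ P_pred
    return u_new, P_new
```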
The vanilla Kalman filter has two critical assumptions.
- The prediction and observation models are linear.
- The prior distribution of \(u_k\) is Gaussian.
The extended Kalman filter (EKF) addresses the first issue by linearizing the nonlinear models and then applying the vanilla Kalman filter. Ensemble and particle filters address the second.
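In a framework like JAX, the EKF's linearization step can simply be automatic differentiation. A hedged sketch, assuming nonlinear prediction and observation maps f and g and noise covariances Q and R in the notation above (these names are my assumptions):

```python
import jax
import jax.numpy as jnp

def ekf_step(u, P, f, g, v, Q, R):
    """One EKF cycle: linearize the prediction model f and the observation
    model g at the current estimate, then apply the vanilla Kalman update."""
    F = jax.jacfwd(f)(u)                   # Jacobian of the prediction model
    u_pred = f(u)
    P_pred = F @ P @ F.T + Q
    H = jax.jacfwd(g)(u_pred)              # Jacobian of the observation model
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ jnp.linalg.inv(S)
    u_new = u_pred + K @ (v - g(u_pred))
    P_new = (jnp.eye(u.shape[0]) - K @ H) @ P_pred
    return u_new, P_new
```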
I had taken Professor Seung-Hyun Kong's course on Kalman filters (MO560 at KAIST), which covered almost everything mentioned above.
These were "classics." Nowadays, there has been research to incorporate neural networks into this framework. One of the most straightforward example is considering the following prediction model,
where \(f_\theta\) is a neural network. Of course, two things are necessary to find \(\theta\):
- a carefully designed loss function
- the entire model should be written in an (automatically) differentiable framework such as JAX.
This is called a "differentiable Kalman filter."
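A rough, non-authoritative sketch of the idea, assuming a toy single-hidden-layer network for \(f_\theta\), a linear observation matrix H, and a simple squared-error loss on the filtered states (the actual loss and architecture would be application-specific):

```python
import jax
import jax.numpy as jnp

def f_theta(theta, u):
    """A toy neural-network prediction model (one hidden layer)."""
    W1, b1, W2, b2 = theta
    h = jnp.tanh(W1 @ u + b1)
    return W2 @ h + b2

def filter_loss(theta, observations, u0, P0, H, Q, R):
    """Run an EKF with the learned prediction model and score how well the
    filtered states explain the observations."""
    def step(carry, v):
        u, P = carry
        F = jax.jacfwd(lambda x: f_theta(theta, x))(u)  # linearize f_theta
        u_pred = f_theta(theta, u)
        P_pred = F @ P @ F.T + Q
        S = H @ P_pred @ H.T + R
        K = P_pred @ H.T @ jnp.linalg.inv(S)
        u_new = u_pred + K @ (v - H @ u_pred)
        P_new = (jnp.eye(u.shape[0]) - K @ H) @ P_pred
        return (u_new, P_new), jnp.sum((v - H @ u_new) ** 2)
    _, errs = jax.lax.scan(step, (u0, P0), observations)
    return jnp.mean(errs)

# Because the whole filter is written in JAX, the gradient with respect to
# theta is a single call, which is what makes the filter "differentiable."
grad_loss = jax.grad(filter_loss)
```

Training \(\theta\) then amounts to running gradient descent with `grad_loss` over observed trajectories.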