I created a simulation that uses a Kalman filter to track the position of a car driving around a race track. I wrote the code in Python 3.7 and created the animation using PyGame and public domain art assets (see Acknowledgements section). A requirements.txt file is included. The Python files of interest are:
- KalmanFilter is a general-purpose implementation of a Kalman filter.
- RaceTrack contains three classes that create the race track, move the car, and draw the Kalman filter estimates and estimate uncertainty.
- KalmanCar contains the main program loop that instantiates the KalmanFilter and RaceTrack classes and runs the animation.
The following sections give some brief background on the Kalman filter itself, as well as discuss how I implemented the filter and how tuning the simulation parameters impacts performance.
Note: GitHub Flavored Markdown does not currently support LaTeX equations; please refer instead to readme.pdf contained in this repo.
The discrete Kalman filter is composed of five equations split into two groups: the time update or "prediction" equations and the measurement update or "correction" equations.
$$
\text{\underline{Time update equations (predict):}} \\[0.3em]
\begin{aligned}
\bold{\hat{x}_k^-} &= \bold{A \hat{x}_{k-1} + B u_{k-1}} \hspace{2em}&(1) \\
\bold{P_k^-} &= \bold{A P_{k-1} A^T + Q} \hspace{2em}&(2)
\end{aligned} \\[-1.5em]
$$
$$
\text{\underline{Measurement update equations (correct):}} \\[0.3em]
\begin{aligned}
\bold{K_k} &= \bold{P_k^- H^T (H P_k^- H^T + R)^{-1}} \hspace{2em}&(3) \\
\bold{\hat{x}_k} &= \bold{\hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)} \hspace{2em}&(4) \\
\bold{P_k} &= \bold{(I - K_k H) P_k^-} \hspace{2em}&(5)
\end{aligned}
$$
The notation shown here is from professors Greg Welch and Gary Bishop's An Introduction to the Kalman Filter. The Kalman filter seeks to estimate the state $\bold{x_k}$ of a discrete-time process at each time step $k$, given a series of noisy measurements $\bold{z_k}$.
The operation of the Kalman filter is split into two parts. First, the time update or "prediction" equations provide a priori estimates of the state and estimate uncertainty. The minus superscript seen in equations (1) and (2) denotes an a priori estimate, i.e. one made before the measurement at time step $k$ has been incorporated.
After a measurement is made, the second part of the filter operation is the computation of the measurement update or "correction" equations. The Kalman gain $\bold{K_k}$ is computed first (equation 3), then used to blend the measurement $\bold{z_k}$ into the state estimate (equation 4) and to reduce the estimate uncertainty (equation 5).
```mermaid
graph TD
    A((Start)) --> B[Initial Guesses]
    B -- x0, P0 --> C[Predict]
    D[Measurements] -- z_k, R_k --> E[Correct]
    C -- x_k^minus, P_k^minus --> E
    E -- x_k, P_k --> F[Time Step k=k+1]
    F -- x_k-1, P_k-1 --> C
```
The Kalman gain $\bold{K_k}$ weights the measurement against the a priori estimate: when the measurement uncertainty $\bold{R}$ is small relative to the a priori estimate uncertainty $\bold{P_k^-}$, the gain is large and the filter trusts the measurement; when $\bold{R}$ is large, the gain is small and the filter leans on its prediction instead.
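The five equations map almost line-for-line onto a few lines of NumPy. The sketch below is a minimal, generic implementation for illustration only; the repo's KalmanFilter class may differ in naming and structure.

```python
import numpy as np

def predict(x, P, A, B, u, Q):
    """Time update: a priori state and uncertainty estimates."""
    x_prior = A @ x + B @ u                # equation (1)
    P_prior = A @ P @ A.T + Q              # equation (2)
    return x_prior, P_prior

def correct(x_prior, P_prior, z, H, R):
    """Measurement update: blend the measurement into the estimate."""
    S = H @ P_prior @ H.T + R                           # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)                # equation (3): Kalman gain
    x = x_prior + K @ (z - H @ x_prior)                 # equation (4)
    P = (np.eye(P_prior.shape[0]) - K @ H) @ P_prior    # equation (5)
    return x, P
```

Each filter iteration is then one `predict` call followed by one `correct` call, with the corrected `x` and `P` fed back in as the previous estimates.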
The standard Kalman filter makes the following assumptions:
- The process and measurement errors are zero-mean and normally distributed (Gaussian distribution).
- The process and measurement errors are independent of each other.
- The measurement errors are mutually independent.
- The process being estimated is a linear dynamical system that can be modeled with a state-space representation.
Other variations of the Kalman filter such as the Extended or Unscented Kalman filter may be more appropriate if these assumptions are violated.
The time update prediction portion of the Kalman filter encompasses equations (1) and (2).
The KalmanCar simulation attempts to estimate the position of the car along the race track. The states to be estimated by the filter are the car's x-coordinate, y-coordinate, velocity in the x-direction, and velocity in the y-direction. The car presumably has a driver who manipulates the controls of the vehicle by stepping on the accelerator or the brake pedal and by turning the steering wheel. However, for this simulation the car is treated as a black box whose control inputs are not known to us. Because of this, the control-input model matrix $\bold{B}$ and control vector $\bold{u_{k-1}}$ are omitted from equation (1).

The KalmanCar process model is simple: the new x-position estimate is the previous x-position estimate plus the previous x-velocity estimate, and the new x-velocity estimate is simply the previous x-velocity estimate. The same holds for the y-position and y-velocity. Note that this means we are assuming zero acceleration of the car, i.e. constant velocity. This assumption is in fact wrong: the car decelerates coming into bends in the track and then accelerates back to top speed on the straightaways. However, the model is close enough to get good results, as will be shown in a following section.

Here $\hat{x}_k^-$ and $\hat{y}_k^-$ are the a priori estimates of the KalmanCar's x and y position at time step $k$, and $\hat{\dot{x}}_k^-$ and $\hat{\dot{y}}_k^-$ are the a priori estimates of the KalmanCar's x and y velocity at time step $k$. Notice that the x-position estimate scalar is written as $\hat{x}_k^-$, while the state estimate vector is represented by $\bold{\hat{x}_k^-}$ in bold. It is unfortunate that the state estimate vector also uses $x$ for its notation. This is potentially confusing, but all vectors and matrices are displayed in bold to distinguish them from scalars. Multiplying the state transition matrix by the previous state estimate results in four equations:
$$
\begin{aligned}
\hat{x}_k^- &= \hat{x}_{k-1} + \hat{\dot{x}}_{k-1} \\
\hat{y}_k^- &= \hat{y}_{k-1} + \hat{\dot{y}}_{k-1} \\
\hat{\dot{x}}_k^- &= \hat{\dot{x}}_{k-1} \\
\hat{\dot{y}}_k^- &= \hat{\dot{y}}_{k-1}
\end{aligned}
$$
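This constant-velocity model corresponds to a state transition matrix with ones on the diagonal and ones coupling each position to its velocity. A quick sketch (variable names here are illustrative, not necessarily those used in KalmanFilter.py):

```python
import numpy as np

# Constant-velocity state transition matrix for the state [x, y, x_dot, y_dot].
A = np.array([[1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)

x_prev = np.array([10.0, 20.0, 2.0, -1.0])  # [x, y, x_dot, y_dot]
x_prior = A @ x_prev  # positions advance by one step of velocity: [12, 19, 2, -1]
```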
These four equations represent the state transition process of the KalmanCar's position and velocity exactly as described above. For the Kalman filter's first iteration an initial guess is supplied for $\bold{\hat{x}_{k-1}}$, after which the state estimate is recursively calculated as illustrated by the flow chart. Note that every subsequent iteration reuses the corrected estimate from the previous time step rather than a new guess.
The estimate uncertainty $\bold{P_{k-1}}$ also requires an initial guess. The KalmanCar begins at the finish line, and so its initial position and velocity are known with a high degree of certainty.
The process noise covariance matrix $\bold{Q}$ represents the uncertainty in the process model itself. In the KalmanCar simulation it is time invariant and so does not carry a $k$ subscript. It accounts for the mismatch between the state transition model and the true dynamics of the KalmanCar's state. As discussed previously, we know that the state model incorrectly assumes constant velocity, and so the process noise must be greater than zero. Its optimal value can be "tuned" through a process of trial and error.
A common alternative is to define $\bold{Q}$ by modeling the process noise as a random acceleration, which assumes the acceleration is normally distributed with zero mean. It is worth asking whether that assumption holds in the KalmanCar simulation. The Kalman car decelerates coming into bends and accelerates coming out of bends, such that the average acceleration might indeed be close to zero. However, the acceleration of the Kalman car is a constant value specified when initializing the Car() class, and so it is not normally distributed. I still thought it would be an interesting exercise to compare this method of defining $\bold{Q}$ in the KalmanCar simulation, and both options are present in the KalmanCar code.
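For reference, the simple diagonal form of $\bold{Q}$ can be sketched as follows. The variances are illustrative placeholders, not the tuned values used in the repo:

```python
import numpy as np

# Hypothetical tuning: small position process noise, larger velocity process
# noise to absorb the unmodeled acceleration of the car. Illustrative values only.
q_pos, q_vel = 0.01, 0.1
Q = np.diag([q_pos, q_pos, q_vel, q_vel])  # state order [x, y, x_dot, y_dot]
```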
The measurement update occurs after the measurement(s) take place, and encompasses equations (3) through (5).
As mentioned previously, the Kalman gain equation (3) determines how heavily the measurement is weighted relative to the a priori estimate.
In equation (3) the measurement noise covariance $\bold{R}$ is written without a subscript, but in the KalmanCar simulation the measurement certainty does vary over time and should properly be written as $\bold{R_k}$.
In the previous section the observation model matrix $\bold{H}$ mapped the state estimate into the measurement domain. In the KalmanCar simulation there are four measurements corresponding directly to the four states.
The measurements contained in $\bold{z_k}$ come from two sources. The x and y position of the KalmanCar are measured by a GPS satellite, and the quality of the measurement depends on whether the satellite has an unobstructed view of the KalmanCar. The x and y velocity of the KalmanCar are derived from an inertial navigation system (INS) that measures the speed and rotation of the KalmanCar. The quality of this dead reckoning approach to navigation deteriorates as errors accumulate over time. The Kalman filter thus acts as a sensor fusion algorithm, combining the measurements of both the GPS satellite and the INS to provide lower measurement uncertainty than either source alone.
The satellite's position measurement errors and the INS's velocity measurement errors have separate standard deviations. In addition, the KalmanCar simulation has a second satellite measurement error standard deviation that is used whenever the satellite's line of sight to the KalmanCar is obstructed by trees. These standard deviations are squared to form the variances along the diagonal of $\bold{R_k}$.
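A sketch of how such a time-varying $\bold{R_k}$ might be assembled. The function and its default sigmas are my own illustration, using values quoted later in this README rather than the repo's exact code:

```python
import numpy as np

def measurement_covariance(obstructed,
                           sigma_gps=20.0,        # GPS position sigma, pixels
                           sigma_gps_trees=1000.0,  # GPS sigma when under trees
                           sigma_ins=2.0):        # INS velocity sigma, px/update
    """Build R_k for the measurement order [x, y, x_dot, y_dot].

    Illustrative defaults, not the repo's exact code.
    """
    s = sigma_gps_trees if obstructed else sigma_gps
    return np.diag([s**2, s**2, sigma_ins**2, sigma_ins**2])
```

At each time step the simulation would select the obstructed or unobstructed sigma based on whether the car is under trees, then hand the resulting matrix to the correction equations.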
Once the measurements have been made and the Kalman gain has been calculated, the state estimate can be updated per equation (4).
The final Kalman filter equation, (5), updates the estimate uncertainty $\bold{P_k}$, which becomes $\bold{P_{k-1}}$ on the next iteration.
In equation (3), notice what happens to the Kalman gain when the measurement uncertainty becomes very large: the gain approaches zero, and the measurements are effectively ignored.
Assume that the initial estimate uncertainty is small:
$$\bold{P_{k-1}} =
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0.1 & 0 \\
0 & 0 & 0 & 0.1
\end{bmatrix}
$$
And assume that the measurements are highly unreliable, i.e. that the variances in $\bold{R_k}$ are very large.
In the absence of reliable measurements the Kalman filter will rely on its internal process model. The uncertainty will then grow over time without a measurement to serve as a correction; the correction step is effectively doing nothing. However, the estimate uncertainties will not increase indefinitely. There is an upper bound on estimate uncertainty in the form of equation (3): as the a priori uncertainty $\bold{P_k^-}$ grows, the Kalman gain grows with it, so even highly uncertain measurements are eventually weighted enough to check the growth.
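A one-dimensional numerical sketch makes this bound concrete (illustrative values, not the simulation's):

```python
# 1-D illustration: predict-only steps grow P by Q each step; a single
# correction with a usable measurement then pulls P back down.
A, H = 1.0, 1.0   # scalar state transition and observation models
Q, R = 0.5, 1.0   # process and measurement variances
P = 1.0

for _ in range(10):          # no measurements: uncertainty grows linearly
    P = A * P * A + Q        # equation (2): P becomes 6.0 after ten steps

K = P * H / (H * P * H + R)  # equation (3): gain grows toward 1 as P grows
P = (1 - K * H) * P          # equation (5): correction shrinks P (to ~0.857)
```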
The first sanity check is to run the KalmanCar simulation with noiseless velocity measurements. To do this, I set the GPS satellite measurement error standard deviation to a very high value (1000) and the process noise variances along the diagonal of $\bold{Q}$ to a very small value.
Even with a flawed process model that assumes constant velocity, the Kalman filter estimates (blue line) track the KalmanCar's path (red line) almost perfectly. There is no green estimate uncertainty bubble shown near the KalmanCar because the uncertainty of the INS measurements is zero, hence the estimate's uncertainty is also nearly zero (apart from the small process noise $\bold{Q}$ added at each prediction step).
The plot of the Kalman filter's x and y position estimate uncertainties shows that they increase linearly over time, but the magnitude of the uncertainty is tiny. This is because the satellite measurements are being ignored due to their high uncertainty, and so the position estimate uncertainty is simply the repeated addition of the process uncertainty $\bold{Q}$ at each time step.
In the real world the KalmanCar's velocity measurements will not be noiseless. Leaving all other parameters the same, I changed the standard deviation of the velocity measurement error to 2 pixels/update:
The Kalman filter estimates initially match up well with the KalmanCar's path, but eventually the measurement errors accumulate and the Kalman estimates go completely off-course. Notice also the large size of the green estimate uncertainty bubble. The bubble started small, but grew progressively larger as the simulation continued and the velocity measurement errors accumulated. This can be seen explicitly in the plot of the position estimate uncertainties:
However, this is not a fair evaluation of the dead reckoning performance. The process noise is still very small relative to the uncertainty of the velocity measurements, so the velocity measurements are being almost entirely ignored in favor of the a priori velocity estimates. This can be seen in the uncertainty plot above, where the velocity uncertainty is nearly zero (the filter relies almost completely on the a priori estimates). I found that tuning the velocity variances in $\bold{Q}$ to a larger value struck a better balance between the process model and the measurements:
There is still a fair amount of drift in the estimates, but they now follow the path of the race track much more closely.
The other extreme would be to rely solely on GPS navigation to estimate the position of the KalmanCar. The position measurement error standard deviation is set to 20 pixels, but the position sigma when obstructed by trees is left at 1000 pixels. The velocity measurement error standard deviation is also set to a very high value (1000) so that the INS is effectively ignored.
The Kalman estimate is fairly good at the beginning of the race track. However, once the KalmanCar reaches the first group of trees the trouble begins. The trees obstruct the GPS satellite's line of sight, and now both the position and velocity measurements are highly uncertain. The position uncertainty of the KalmanCar is visually represented by the diameter of the green circle expanding as the KalmanCar enters the trees. Luckily, this first group of trees is along a straight-away and the KalmanCar zooms by them fairly quickly. The next two groups of trees are along bends in the road, and the KalmanCar must slow down and spend more time in these areas. The Kalman estimates in these areas do not follow the shape of the race track at all, and the blue path "snaps back" to the roadway when the KalmanCar exits the trees. The estimate uncertainty in these obstructed areas can be seen in the three sharp peaks in the estimate uncertainty plot:
The best performance is achieved when using all the information available to the Kalman filter. The measurement standard deviation of the satellite is set to 20, with the standard deviation when obstructed by trees left at 1000. The velocity measurement standard deviation is set to 2, and the diagonal variances of $\bold{Q}$ are kept at their tuned values.
The simulation can also be run with the GPS measurements drawn on the screen as magenta crosses. The crosses closely cluster around the KalmanCar as it drives down the track until it enters the trees, at which point the measurements around the KalmanCar are very sparse. However, the green estimate uncertainty circle does not expand nearly as much as it did in the GPS-only case. The INS helps keep the estimates fairly close to the race track even in the absence of GPS measurements.
This can be seen in the uncertainty plot above. The three peaks corresponding to the three groups of trees still exist, but the maximum variance of the position uncertainties is now about 1,000, while in the GPS-only case the peaks spiked up to around 25,000. That is a standard deviation of about 32 pixels with sensor fusion versus about 158 pixels in the GPS-only case - a significant improvement!
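The standard deviations quoted above are just the square roots of the peak variances:

```python
import math

# Peak position variances read off the uncertainty plots.
sigma_fusion = math.sqrt(1000)    # ~31.6 px with sensor fusion
sigma_gps_only = math.sqrt(25000)  # ~158.1 px with GPS only
```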
So far the Kalman filter's assumed measurement variances have been set equal to the true/actual variance of the measurement sources. In practice, this may not be true. These assumed measurement variances can be adjusted independently from the true measurement variances in the KalmanCar code to see the impact on filter performance.
The process noise covariance $\bold{Q}$ can be adjusted in the same way: raising it shifts the filter's trust toward the measurements, while lowering it shifts trust toward the process model.
The Kalman filter is computationally simple, but there is a lot going on under the hood. Working on this KalmanCar project has greatly improved my understanding of how the Kalman filter works, and it's gratifying to see it all come together in an easy-to-comprehend visual form.
Thanks to professors Greg Welch and Gary Bishop at the University of North Carolina at Chapel Hill for making their lecture material on the Kalman filter available on their university website. I referenced their An Introduction to the Kalman Filter for the notation of the Kalman filter equations and for their one-dimensional example of estimating a constant voltage. I also benefited from their course pack, which has a few introductory sections on statistics as well as sections on more advanced topics in addition to the Kalman filter material. The material in the Kalman introduction appears to be a more recently updated version of the Kalman filter section in the course pack, and corrects a subscript in one of the equations.
Thanks also to Alex Becker's great Kalman filter tutorial. The terminology used in this document is heavily based on that used in Becker's tutorial and in the Wikipedia Kalman filter page.
Finally, thanks to Kenney Vleugels for his race track game assets that he has made available under the CC0 1.0 universal public domain license. See the license.txt contained in the /img directory.