Parametric Dynamic Mode Decomposition¶
In this tutorial we explore the usage of the class pydmd.ParametricDMD, presented in A Dynamic Mode Decomposition Extension for the Forecasting of Parametric Dynamical Systems by Andreuzzi et al. (arXiv:2110.09155). The approach extends Dynamic Mode Decomposition to parametric problems, in order to obtain predictions for future time instants at untested parameters.
We'll examine a simple parametric time-dependent problem, the sum of two complex periodic functions: $$\begin{cases} f_1(x,t) &:= e^{2.3it} \cosh(x+3)^{-1}\\ f_2(x,t) &:= 2\, e^{2.8it} \tanh(x) \cosh(x)^{-1}\\ f^{\mu}(x,t) &:= \mu f_1(x,t) + (1-\mu) f_2(x,t), \qquad \mu \in [0,1] \end{cases}$$
Modules¶
First of all we import the modules which we'll use throughout the tutorial:

- pydmd.ParametricDMD, along with the class pydmd.DMD; we'll present later how the latter is used;
- The classes POD and RBF from ezyrb, which are used respectively to reduce the dimensionality before the interpolation and to perform the interpolation;
- NumPy and Matplotlib.
import warnings

warnings.filterwarnings("ignore")

import numpy as np

from pydmd import ParametricDMD, DMD, HankelDMD
from ezyrb import POD, RBF

import matplotlib.pyplot as plt
import matplotlib.colors as colors
Functions¶
We now define several functions to construct our system and to generate the data needed to train the algorithm:
def f1(x, t):
    return 1.0 / np.cosh(x + 3) * np.exp(2.3j * t)


def f2(x, t):
    return 2.0 / np.cosh(x) * np.tanh(x) * np.exp(2.8j * t)


def f(mu, x, t):
    return mu * f1(x, t) + (1 - mu) * f2(x, t)
Training dataset¶
We prepare a discrete space-time grid with an acceptable number of sample points in both dimensions, which we'll use later on to generate our training dataset:
n_space = 500
n_time = 160

x = np.linspace(-5, 5, n_space)
t = np.linspace(0, 2 * np.pi, n_time)

xgrid, tgrid = np.meshgrid(x, t)
The training dataset results from applying the function f defined above for several known parameters. We select 10 equispaced parameters in the interval [0,1]. Our parameter is 1-dimensional, but Parametric DMD works also with parameters living in multi-dimensional spaces.
training_params = np.round(np.linspace(0, 1, 10), 1)
plt.figure(figsize=(8, 2))
plt.scatter(training_params, np.zeros(len(training_params)), label="training")
plt.title("Training parameters")
plt.grid()
plt.xlabel("$\mu$")
plt.yticks([], []);
It's critical to provide a sufficient number of training parameters, otherwise the algorithm won't be able to explore the solution manifold in an acceptable way.
The training dataset results from the application of f to the combination of xgrid, tgrid and the parameters in training_params:
training_snapshots = np.stack(
    [f(x=xgrid, t=tgrid, mu=p).T for p in training_params]
)
print(training_snapshots.shape)
(10, 500, 160)
As you can see the shape of the training dataset follows the convention: $$n_{train} \times n_{space} \times n_{time-instants}$$
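As a quick sanity check of this convention, here is a minimal standalone sketch with the same sizes as above (10 parameters, 500 space points, 160 time instants), mirroring the np.stack call:

```python
import numpy as np

# Hypothetical sizes mirroring the tutorial's dataset
n_train, n_space, n_time = 10, 500, 160

# Each snapshot matrix f(...).T has shape (n_space, n_time);
# stacking over the parameters prepends the n_train axis.
snapshots = [np.zeros((n_time, n_space)).T for _ in range(n_train)]
training = np.stack(snapshots)
print(training.shape)  # (10, 500, 160)
```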
Utility functions¶
We define a few utility functions to ease the explanation in the following paragraphs; you can safely ignore the following code if you'd like.
def title(param):
    return "$\mu$={}".format(param)


def visualize(X, param, ax, log=False, labels_func=None):
    # Plot the real part of the snapshots for a single parameter
    if log:
        im = ax.pcolormesh(X.real.T, norm=colors.LogNorm())
    else:
        im = ax.pcolormesh(X.real.T)
    ax.set_title(title(param))

    def labels_func_default(ax):
        ax.set_yticks([0, n_time // 2, n_time])
        ax.set_yticklabels(["0", "$\pi$", "2$\pi$"])
        ax.set_xticks([0, n_space // 2, n_space])
        ax.set_xticklabels(["-5", "0", "5"])

    if labels_func is None:
        labels_func = labels_func_default
    labels_func(ax)
    return im


def visualize_multiple(Xs, params, figsize=(20, 3), log=False, labels_func=None):
    # Visualize the snapshots for several parameters side by side
    fig, axes = plt.subplots(nrows=1, ncols=len(params), figsize=figsize)
    im = None
    for X, param, ax in zip(Xs, params, axes):
        im = visualize(X, param, ax, log=log, labels_func=labels_func)
    fig.colorbar(im, ax=axes)
    plt.show()
We can use the functions defined in the last code block to visualize our data for some training parameters:
idxes = [0, 2, 4, 6, 8]
visualize_multiple(training_snapshots[idxes], training_params[idxes])
Monolithic or partitioned¶
Parametric DMD comes in two different "flavors", namely the monolithic and the partitioned approach. Refer to the paper linked above for the theoretical details. We showcase how to use both of them to tackle our toy problem.
Monolithic variant¶
You get a monolithic instance of the class ParametricDMD by using the following constructor:
ParametricDMD(dmd, rom, interpolator)
where dmd is an instance of some DMD variant provided by PyDMD, rom is the object used to compute the reduced order model of the dataset (usually we use ezyrb.POD, but different ROMs are under experimentation), and interpolator is a multi-dimensional interpolator whose interface provides the methods fit() and predict(). You're generally good to go if you use interpolators from EZyRB, since they expose the appropriate interface.
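In other words, any object exposing fit() and predict() can play the interpolator role. As an illustration, here is a minimal nearest-neighbor interpolator written from scratch; the class name and its internals are our own invention for this sketch, not part of EZyRB or PyDMD:

```python
import numpy as np


class NearestNeighborInterpolator:
    """Minimal interpolator exposing the fit()/predict() interface
    expected by ParametricDMD (illustrative sketch, not part of EZyRB)."""

    def fit(self, points, values):
        # points: (n_params, n_features); values: one reduced state per parameter
        self._points = np.atleast_2d(points)
        self._values = np.asarray(values)
        return self

    def predict(self, new_points):
        # For each query point, return the value of the closest training point
        new_points = np.atleast_2d(new_points)
        dists = np.linalg.norm(
            self._points[None, :, :] - new_points[:, None, :], axis=-1
        )
        return self._values[np.argmin(dists, axis=1)]


interp = NearestNeighborInterpolator()
interp.fit(np.array([[0.0], [1.0]]), np.array([10.0, 20.0]))
print(interp.predict(np.array([[0.2]])))  # closest to mu=0 -> 10.0
```

An RBF interpolator such as ezyrb.RBF does the same job, but produces smooth interpolants instead of piecewise-constant ones.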
dmd = DMD(svd_rank=-1)
rom = POD(rank=20)
interpolator = RBF()

pdmd_monolithic = ParametricDMD(dmd, rom, interpolator)
Partitioned variant¶
You get a partitioned instance instead by using the following constructor:
ParametricDMD([dmds], rom, interpolator)
which is very similar to the one shown above, except for [dmds]: in the partitioned approach you pass in a list of DMDs, one for each training parameter. This gives a little more flexibility (you can use special variants for noisy/turbulent parameters, for instance), at the expense of increased model complexity.
Notice that the partitioned variant is not a generalization of the monolithic variant, since there's no way to get a monolithic training on a partitioned instance. Refer to the paper for the theoretical details.
dmds = [DMD(svd_rank=-1) for _ in range(len(training_params))]
pdmd_partitioned = ParametricDMD(dmds, rom, interpolator)
ROM rank¶
The ROM rank parameter represents in this case the dimensionality of the reduced space where our parametric time-dependent snapshots are mapped. The larger the dimensionality, the less lossy the direct and inverse application of the ROM will be. However, the larger the dimensionality of the ROM, the larger the interpolation error will be. You should find the appropriate balance for your use case.
pdmd_monolithic.fit(
    training_snapshots, training_params
)  # same for pdmd_partitioned
Unseen parameters¶
Choosing testing parameters¶
We select some unknown (or testing) parameters in order to assess the results obtained using the parametric approach. We take testing parameters at varying distances from our training parameters, which results in varying degrees of accuracy. This is pretty much what the following snippet does; you can just jump to the plot below to see the arrangement of the testing parameters on the real line:
similar_testing_params = [1, 3, 5, 7, 9]
testing_params = training_params[similar_testing_params] + np.array(
    [5 * pow(10, -i) for i in range(2, 7)]
)
testing_params_labels = [
    str(training_params[similar_testing_params][i - 2])
    + "+$5*10^{{-{}}}$".format(i)
    for i in range(2, 7)
]
testing_snapshots = np.stack(
    [f(mu=p, x=xgrid2, t=tgrid2).T for p in testing_params]
)
We now visualize the training parameters with respect to the testing parameters which we've just selected:
plt.figure(figsize=(8, 2))
plt.scatter(training_params, np.zeros(len(training_params)), label="Training")
plt.scatter(testing_params, np.zeros(len(testing_params)), label="Testing")
plt.legend()
plt.grid()
plt.title("Training vs testing parameters")
plt.xlabel("$\mu$")
plt.yticks([], []);
Notice that in our case we had the freedom to take whatever parameter we wanted to showcase our method. In practical (or less theoretical) applications you will probably have fixed unknown parameters in which you're interested.
Instructing ParametricDMD on which parameters it should interpolate¶
We can now set the testing parameters by setting the property parameters of our instance of ParametricDMD:
pdmd_monolithic.parameters = testing_params  # same for pdmd_partitioned
We also show that we can predict future values outside the time window provided during training:
pdmd_monolithic.dmd_time["t0"] = (
    pdmd_monolithic.original_time["tend"] - N_nonpredict + 1
)
pdmd_monolithic.dmd_time["tend"] = (
    pdmd_monolithic.original_time["tend"] + N_nonpredict
)
print(
    f"ParametricDMD will compute {len(pdmd_monolithic.dmd_timesteps)} timesteps:",
    pdmd_monolithic.dmd_timesteps * time_step,
)
ParametricDMD will compute 80 timesteps: [ 9.48405329  9.56308707  9.64212085  9.72115463  9.8001884   9.87922218  9.95825596 10.03728974 10.11632351 10.19535729 10.27439107 10.35342485 10.43245862 10.5114924  10.59052618 10.66955996 10.74859373 10.82762751 ... ]
result = pdmd_monolithic.reconstructed_data
result.shape
(5, 500, 80)
# this is needed to visualize the time/space in the appropriate way
def labels_func(ax):
    l = len(pdmd_monolithic.dmd_timesteps)
    ax.set_yticks([0, l // 2, l])
    ax.set_yticklabels(["3$\pi$", "4$\pi$", "5$\pi$"])
    ax.set_xticks([0, n_space // 2, n_space])
    ax.set_xticklabels(["-5", "0", "5"])
Results analysis¶
print("Approximation")
visualize_multiple(
    result,
    testing_params_labels,
    figsize=(20, 2.5),
    labels_func=labels_func,
)
print("Truth")
visualize_multiple(
    testing_snapshots,
    testing_params_labels,
    figsize=(20, 2.5),
    labels_func=labels_func,
)
print("Absolute error")
visualize_multiple(
    np.abs(testing_snapshots.real - result.real),
    testing_params_labels,
    figsize=(20, 2.5),
    labels_func=labels_func,
)
Approximation
Truth
Absolute error
Special cases¶
In this section we briefly consider how to deal with special cases in ParametricDMD. Do not hesitate to open a discussion on our GitHub Discussions page in case you need further indications.
Multi-dimensional parameters¶
In case your parameters are multi-dimensional, the programming interface does not change. Here we provide the full simplified training workflow:
training_params_2d = np.hstack(
    (training_params[:, None], np.random.rand(len(training_params))[:, None])
)

plt.grid()
plt.title("Multidimensional training parameters")
plt.scatter(training_params_2d[:, 0], training_params_2d[:, 1]);
pdmd = ParametricDMD(dmd, rom, interpolator)
pdmd.fit(training_snapshots, training_params_2d)
pdmd.parameters = training_params_2d + np.random.rand(*training_params_2d.shape)
result = pdmd.reconstructed_data
Multi-dimensional snapshots¶
Dealing with multi-dimensional snapshots requires some pre/post-processing on the user's side. Basically you need to flatten your snapshots before passing them to ParametricDMD.fit(), such that the spatial dimensions collapse to a single one. The goal shape is:
$$n_{train} \times (n^1_{space} \times n^2_{space} \times \dots \times n^k_{space}) \times n_{time-instants}$$
Be careful not to mix spatial and time dependency of your snapshots. After you get your results, you should revert the flattening to obtain the original spatial shape of your snapshots.
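A minimal sketch of this flatten/unflatten round trip, assuming hypothetical sizes (10 training parameters, a 64x32 spatial grid, 160 time instants):

```python
import numpy as np

n_train, nx, ny, n_t = 10, 64, 32, 160
snapshots = np.random.rand(n_train, nx, ny, n_t)

# Flatten only the spatial axes, keeping the parameter and time axes separate
flat = snapshots.reshape(n_train, nx * ny, n_t)
# ... here `flat` would be passed to ParametricDMD.fit(flat, params) ...

# After predicting, revert the flattening on the results
# (we reuse `flat` as a stand-in for pdmd.reconstructed_data)
result = flat
restored = result.reshape(n_train, nx, ny, result.shape[-1])
print(flat.shape, restored.shape)
```

Because reshape only reorders the view of the data, the round trip is exact as long as you never transpose the time axis into the flattened block.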
Tutorial 14: Demonstrating the optDMD and BOP-DMD¶
In this tutorial we go over the Bagging-Optimized Dynamic Mode Decomposition (BOP-DMD) and Optimized Dynamic Mode Decomposition (optDMD) methods. The tutorial is set up exactly as in Tutorial 2 to provide a direct comparison between methods.
Note that there is a namespace conflict with "optimized DMD". The optDMD we refer to in this tutorial is the one by Askham and Kutz (2018). We differentiate the optDMD of Askham and Kutz from the exact DMD with the optimal closed-form solution from Heas and Herzet (2016).
The optDMD and BOP-DMD are effectively the same method, but with the BOP-DMD implementing a statistical bagging of optDMD fits and aggregating the results. Compared to the optDMD, the BOP-DMD offers:

- The additional ability to provide uncertainty estimates in the solutions, including uncertainty in the spatial modes, eigenvalues, and amplitudes
- More robust fitting of noisy data.

Methods compared¶
Three methods are compared:

- The total least-squares DMD with optimal amplitudes.
- The optimized DMD from Askham and Kutz (2018; note the namespace conflict with Heas and Herzet's (2016) optimal solution!)
- The BOP-DMD

Method (1) comes directly from Tutorial 2.
Citations¶
optimized Dynamic Mode Decomposition (optDMD): Askham, T., & Kutz, J. N. (2018). Variable projection methods for an optimized dynamic mode decomposition. SIAM Journal on Applied Dynamical Systems, 17(1), 380–416. https://doi.org/10.1137/M1124176
Bagging, Optimized Dynamic Mode Decomposition (BOP-DMD): Sashidhar, D., & Kutz, J. N. (2022). Bagging, optimized dynamic mode decomposition for robust, stable forecasting with spatial and temporal uncertainty quantification. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 380(2229). https://doi.org/10.1098/rsta.2021.0199
optimal closed-form solution to the exact DMD: Héas, P., & Herzet, C. (2022). Low-Rank Dynamic Mode Decomposition: An Exact and Tractable Solution. Journal of Nonlinear Science, 32(1). https://doi.org/10.1007/s00332-021-09770-w

import copy

import numpy as np
import matplotlib.pyplot as plt

from pydmd import DMD
from pydmd.bopdmd import BOPDMD
def relative_error(x_est, x_true):
    """Helper function for calculating the relative error."""
    return np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true)
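As a quick usage check (repeating the helper so the snippet is self-contained): an estimate that is 10% off in every component has a relative error of 0.1.

```python
import numpy as np


def relative_error(x_est, x_true):
    """Helper function for calculating the relative error."""
    return np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true)


x_true = np.array([1.0, 2.0, 2.0])
x_est = 1.1 * x_true  # 10% overestimate everywhere
print(relative_error(x_est, x_true))  # ~0.1
```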
Setting up the toy data¶
The toy data is exactly the same as in Tutorial 2 to provide a direct comparison. The example is a single two-dimensional spatial mode modified by a temporal oscillation with a period of 4 that decays towards zero amplitude with time.
Two versions of the data are created as a convenience: a 3d array of 2d spatial snapshots along a time dimension and a 2d array where the spatial dimensions have been flattened down. This is necessary because PyDMD classes expect data of the form len(samples) x len(time).
The data are subdivided into training data and forecast data, as this represents real world use cases for fitting and forecasting.
x1 = np.linspace(-3, 3, 80)
x2 = np.linspace(-3, 3, 80)
x1grid, x2grid = np.meshgrid(x1, x2)

time_forecast = np.arange(6 + 0.4, 18 + 0.4, 0.4)
time = np.arange(0, 6 + 0.4, 0.4)
num_time_samples = len(time)

true_eigenvalues = (-0.115 - 1j) * np.pi / 2
spatial_modes = 2 / np.cosh(x1grid) / np.cosh(x2grid)
data_clean = spatial_modes[:, :, np.newaxis] * np.exp(true_eigenvalues * time)
data_forecast = spatial_modes[:, :, np.newaxis] * np.exp(
    true_eigenvalues * time_forecast
)

rng = np.random.default_rng(seed)
noise = rng.normal(
    0, sigma, (x1grid.shape[0], x1grid.shape[1], num_time_samples)
)
snapshots_2d = data_clean + noise

# PyDMD expects the data to be 2-dimensional, with shape len(space) x len(time)
snapshots_1d = snapshots_2d.reshape(
    x1grid.shape[0] * x1grid.shape[1], num_time_samples
)

fig, axes = plt.subplots(4, 4, figsize=(16, 10), sharex=True, sharey=True)
for ntime, _ in enumerate(time):
    # plot the real part of each noisy snapshot
    axes.flatten()[ntime].pcolormesh(snapshots_2d[:, :, ntime].real)
plt.show()
Fitting and forecasting¶
All methods are fit through the same syntax, but vary in how the forecasted period is represented.
To reiterate, PyDMD objects expect data of the shape len(space) x len(time), so we have to use the data in which the spatial dimensions are collapsed down to a single dimension. After fitting the training data, the data are then forecasted over the forecast period.
optdmd = BOPDMD(svd_rank=1, num_trials=0)
bopdmd = BOPDMD(svd_rank=1, num_trials=100, varpro_opts_dict={"tol": 0.0115})
dmd = DMD(svd_rank=1, tlsq_rank=2, exact=True, opt=True)

optdmd.fit(snapshots_1d, time)
bopdmd.fit(snapshots_1d, time)
dmd.fit(snapshots_1d)

# It is necessary to deepcopy the dmd object when generating
# the dmd forecast to avoid overwriting the reconstruction.
dmd_forecast = copy.deepcopy(dmd)
dmd_forecast.dmd_time["t0"] = dmd.dmd_time["tend"] + dmd.dmd_time["dt"]
dmd_forecast.dmd_time["tend"] *= 3
/home/runner/work/PyDMD/PyDMD/pydmd/bopdmd.py:751: UserWarning: Initial trial of Optimized DMD failed to converge. Consider re-adjusting your variable projection parameters with the varpro_opts_dict and consider setting verbose=True.
  warnings.warn(msg)
The optDMD and BOP-DMD are rooted in the same class. For this generalized class:

- The number of trials, num_trials, specifies the number of ensemble members to use in the bagging step of the BOP-DMD; setting num_trials=0 performs the optDMD instead.

For the DMD we follow Tutorial 2 to provide a direct comparison between methods.
Finally, the optDMD/BOP-DMD methods do not need evenly spaced data: e.g., randomly chosen snapshots could be dropped (for example representing a case where observational instruments fail) and the data could still be fit.
optdmd_bad_data_demo = BOPDMD(svd_rank=1, num_trials=0)

# Drop time steps 5 and 7 to demonstrate not needing uniformly spaced data.
index_to_keep = np.array([0, 1, 2, 3, 4, 6, 8, 9, 10, 11, 12, 13, 14, 15])
optdmd_bad_data_demo.fit(snapshots_1d[:, index_to_keep], time[index_to_keep])
# Demonstrate the robustness of the fit using the error in eigenvalues
re = relative_error(optdmd_bad_data_demo.eigs, true_eigenvalues)
print("Eigenvalue error in bad data demonstration = {:.4f}".format(re))
Eigenvalue error in bad data demonstration = 0.0004
The optDMD is quite robust with regard to unevenly spaced data.
fig, ax = plt.subplots(1, 1)
ax.scatter(
    true_eigenvalues.real,
    true_eigenvalues.imag,
    100,
    label="True Eigenvalues",
    color="k",
)
ax.scatter(
    np.real(optdmd.eigs), np.imag(optdmd.eigs), label="optDMD", marker="s"
)
ax.scatter(
    np.real(bopdmd.eigs), np.imag(bopdmd.eigs), label="BOP-DMD", marker=">"
)
ax.scatter(np.real(dmd.eigs), np.imag(dmd.eigs), label="exact DMD")
ax.set_xlabel("Real part")
ax.set_ylabel("Imag part")
ax.legend()
ax.set_ylim(-2, 2)
ax.set_xlim(-2, 2)
ax.grid()
ax.axhline(0, color="0.6")
ax.axvline(0, color="0.6")
ax.set_title("Eigenvalues of DMD, optDMD, BOPDMD")
plt.show()
The optDMD and BOP-DMD recover the true eigenvalues. It is unclear what is driving the difference in eigenvalues between the DMD methods.
One of the main advantages of the BOP-DMD is the ability to provide uncertainties on the fit. Here, the eigenvalue (and thus time dynamic) uncertainties can be recovered.
print(
    "Uncertainty in the BOPDMD eigenvalues={:.02f}".format(
        bopdmd.eigenvalues_std[0]
    )
)

# Compare to the eigenvalue error
re = relative_error(bopdmd.eigs, true_eigenvalues)
print("Error in the BOPDMD eigenvalues={:.03f}".format(re))
Uncertainty in the BOPDMD eigenvalues=0.00
Error in the BOPDMD eigenvalues=0.001
All DMD classes recover the time dynamics, but the DMD appears to have a phase offset. In contrast, the BOP-DMD and optDMD recover these dynamics almost exactly. The time dynamics can be recovered directly by normalizing the dynamics using the amplitudes.
plt.plot(time, dmd.dynamics.real.T / dmd.amplitudes.real, label="Exact DMD")
# We have to carefully format the markers since the BOP-DMD, optDMD,
# and true data are hard to distinguish otherwise.
plt.plot(
    time,
    bopdmd.dynamics.real.T / bopdmd.amplitudes.real,
    marker="o",
    markersize=10,
    markerfacecolor="none",
    label="bopdmd",
)
plt.plot(
    time,
    optdmd.dynamics.real.T / optdmd.amplitudes.real,
    marker="o",
    markerfacecolor="none",
    label="optdmd",
)
plt.plot(
    time, np.exp(true_eigenvalues * time).real, "k-.", label="True dynamics"
)
plt.legend()
plt.gca().set_xlabel("Time (-)")
plt.gca().set_ylabel("Normalized Dynamics (-)")
plt.show()
The average time dynamics of the entire system can also be recovered by averaging the spatial modes for each time step. This roughly corresponds to the two-dimensional integral from Tutorial 2. The forecasted period is also examined.
The optDMD is neglected here for visual clarity.
Both the BOP-DMD and DMD recover the training and forecasted data with high fidelity, which can be quantified:
re = relative_error(
    dmd_forecast.reconstructed_data.reshape(
        x1grid.shape[0], x1grid.shape[1], len(time_forecast)
    ),
    data_forecast,
)
print("Relative Error of the DMD forecast: {:0.2f}".format(re))
Relative Error of the DMD forecast: 0.25
re = relative_error(
    forecast_mean.reshape(x1grid.shape[0], x1grid.shape[1], len(time_forecast)),
    data_forecast,
)
print("Relative Error of the BOP-DMD forecast: {:0.2f}".format(re))
Relative Error of the BOP-DMD forecast: 0.23
The BOP-DMD just barely outperforms the DMD when evaluating a forecasted period. Note: the relative improvement depends on the level of noise, and the DMD can even outperform the BOP-DMD in some cases.
Spatial Modes¶
In most real-world cases we don't have the actual underlying modes to compare to. However, with the toy data we can see how well the spatial mode was recovered.
dmd_spatial_modes = dmd.modes.reshape(x1grid.shape).real * dmd.amplitudes.real

fig, ax = plt.subplots(1, 1)
c = ax.pcolormesh(
    dmd_spatial_modes - spatial_modes, vmin=-sigma, vmax=sigma, cmap="RdBu_r"
)
cbar = fig.colorbar(c)
cbar.set_label("Error")
ax.set_title(
    "Error in Exact DMD w/ opt Spatial Modes\nRelative Error={:.2f}".format(
        relative_error(dmd_spatial_modes, spatial_modes)
    )
)
ax.set_ylabel("Dim 2 (-)")
ax.set_xlabel("Dim 1 (-)")
plt.show()
The DMD has some issues recovering the spatial modes, as the errors exhibit a spatial bias.
bopdmd_spatial_modes = (
    bopdmd.modes.reshape(x1grid.shape).real * bopdmd.amplitudes.real
)

fig, ax = plt.subplots(1, 1)
c = ax.pcolormesh(
    bopdmd_spatial_modes - spatial_modes, vmin=-sigma, vmax=sigma, cmap="RdBu_r"
)
cbar = fig.colorbar(c)
cbar.set_label("Error")
ax.set_title(
    "Error in BOP-DMD Spatial Modes\nRelative Error={:.2f}".format(
        relative_error(bopdmd_spatial_modes, spatial_modes)
    )
)
ax.set_ylabel("Dim 2 (-)")
ax.set_xlabel("Dim 1 (-)")
plt.show()
The BOP-DMD spatial mode error is effectively flat, with just random white noise corrupting the spatial modes. The relative error in the spatial modes is slightly smaller than for the DMD.
Spatial Mode uncertainty¶
One of the major features of BOP-DMD is the ability to quantify uncertainty in the temporal and spatial modes. The plot below shows the uncertainty in the spatial modes.
fig, ax = plt.subplots(1, 1)
c = ax.pcolormesh(
reconstruction_variance.mean(axis=1).reshape(
(x1grid.shape[0], x1grid.shape[1])
)
)
cbar = fig.colorbar(c)
cbar.set_label("Uncertainty (-)")
ax.set_title("BOP-DMD Spatial Modes Uncertainty")
ax.set_ylabel("Dim 2 (-)")
ax.set_xlabel("Dim 1 (-)")
fig.tight_layout()
plt.show()
The uncertainty is largest in the center of the spatial domain (although the magnitude of the uncertainty is perhaps a bit unbelievable given how noisy the system is). However, it is worth noting that the uncertainty is much smaller than the error, potentially revealing an underestimate of the true uncertainty in the system.
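The `reconstruction_variance` array plotted above is produced by the BOP-DMD model; conceptually it is a per-point variance taken across the ensemble of bagged models. A generic sketch of that idea (the array shapes and noise level here are invented for illustration, not pydmd internals):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical ensemble of reconstructions from bagged models:
# shape (n_models, n_space, n_time).
ensemble = 1.0 + 0.1 * rng.standard_normal((200, 32, 100))

# Per-point variance across the model axis quantifies reconstruction uncertainty.
reconstruction_variance = ensemble.var(axis=0)
print(reconstruction_variance.shape)   # (32, 100)
print(reconstruction_variance.mean())  # close to 0.1**2 = 0.01
```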
Parting comments¶
The BOP-DMD and optDMD were designed for fitting DMDs to noisy data or data with unevenly spaced data points. However, there are a number of other methods for handling noisy data with DMDs. In this tutorial we examined the exact DMD using the tlsq_rank keyword. Some questions to explore:
- How do other methods designed to handle noisy data (e.g., the forward-backward DMD) compare to the BOP-DMD and the exact DMD with the tlsq_rank keyword?
- What happens if you change the noise? Consider types of noise commonly present in real data sets, such as noise that is not uniform in time or space, or non-white noise.
Conservative systems
Out[3]:
<pydmd.pidmd.PiDMD at 0x7f3d94541e50>
Part 2: Application to a Toy System
Exact DMD¶
DMD Reconstruction Error: 0.017055067185693867
DMD Training Time: 0.015372037887573242
Compressed DMD
CDMD (Average) Reconstruction Error: 0.028828671536387802
CDMD (Average) Training Time: 0.011527563333511357
Randomized DMD: Varying Oversampli
Randomized DMD: Varying Power
Runtime Comparison
DMD cannot capture
/home/runner/work/PyDMD/PyDMD/pydmd/lando.py:315: RuntimeWarning: invalid value encountered in sqrt
  max(0, np.abs(np.sqrt(kxx - np.sum(s**2)))),
/home/runner/work/PyDMD/PyDMD/pydmd/lando.py:237: UserWarning: The Cholesky factor is ill-conditioned. Consider increasing dict_tol or changing the kernel function.
  warnings.warn(msg)
LANDO Fitting Time: 6.663822174072266
LANDO Training Error: 0.00038%
LANDO Dictionary Size: 22
Original Number of Samples: 10000
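The ill-conditioning warning above concerns the Cholesky factor of the kernel matrix over the learned dictionary. The effect is easy to reproduce generically: Gram matrices of nearly duplicated samples under a smooth kernel become almost singular. The sketch below uses a plain Gaussian kernel and invented sample sets, not pydmd's internals:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two sample sets: well-separated points vs. the same points nearly duplicated.
spread = rng.normal(size=(30, 3))
clustered = spread + 1e-4 * rng.normal(size=(30, 3))


def gram(a, b, gamma=1.0):
    # Gaussian (RBF) kernel Gram matrix between sample sets a and b.
    sq = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)


cond_spread = np.linalg.cond(gram(spread, spread))
joint = np.vstack([spread, clustered])
cond_joint = np.linalg.cond(gram(joint, joint))
print(f"separated: {cond_spread:.2e}  vs  near-duplicates: {cond_joint:.2e}")
```

This is why the dictionary-sparsification step matters: dropping near-duplicate samples keeps the kernel matrix well conditioned.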
Example 2: Coupled Kuramoto Osc
Frequency Error: 1.7864600000000002%
LANDO Training Error: 0.00259%
LANDO Dictionary Size: 219
Original Number of Samples: 1000
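For context, coupled Kuramoto oscillators follow the standard model $\dot\theta_i = \omega_i + (K/N)\sum_j \sin(\theta_j - \theta_i)$; the exact parameters used in this example are not shown here, so the values below are illustrative. A minimal forward-Euler sketch for generating such trajectory data:

```python
import numpy as np

# Illustrative Kuramoto parameters (not the tutorial's actual settings).
rng = np.random.default_rng(0)
N, K, dt, steps = 10, 2.0, 0.01, 1000
omega = rng.normal(0.0, 1.0, N)            # natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, N)     # initial phases

history = np.empty((steps, N))
for k in range(steps):
    # sum_j sin(theta_j - theta_i) for every oscillator i
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta = theta + dt * (omega + (K / N) * coupling)
    history[k] = theta
```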
Basic HAVOK application
Out[4]:
<pydmd.havok.HAVOK at 0x7f92dc56e220>
Tutorial 1: Dyn
DMD with perfect dat
Frequencies (imaginary component): [-0.003+2.801j -0.003-2.801j -0.003+2.304j -0.003-2.304j]
Steps 1 and 2: Using PyDMD on r
Computed amplitudes: [2.954 2.954 2.558 2.558]
[Optional] Step 3: DMD with
Frequencies (imaginary component): [ 0.+2.801j -0.-2.801j  0.+2.303j -0.-2.303j]
This is
Frequencies (imaginary component): [-0.79 +2.776j -0.79 -2.776j -1.328+0.j -5.962+0.j ]