MODES Module

Contents

modespy.modes.SolverInstance

Container class used to hold all runtime data for a single instance of a specific ODE solver type operating on a single order.

modespy.modes.AnalyticDenseOutput

AnalyticDenseOutput provides a DenseOutput representation of the user-supplied analytic function.

modespy.modes.MODESDenseOutput

MODESDenseOutput provides interpolation for intermediate values between MODES timesteps.

modespy.modes.MODES

This class implements the generic multi-step, multi-order ODE solver.

Members

This module implements the MODES class, which is drop-in compatible with the SciPy function scipy.integrate.solve_ivp(). This is the main mechanism by which the parameterised multi-step solver is called; see the examples below demonstrating the calling methods.

Differences to MATLAB Implementation

  • (Version 0.9.0) Dense output added by Eric J. Whitney.

  • (Version 0.9.0) Fallback implicit solver added by Eric J. Whitney.

  • (Version 0.9.0) The requirement for counting a certain number of previous steps (orderControlStarts) before allowing order control has been removed from the main loop. The purpose of this was to ensure that a higher order would have enough previous steps to be able to compute a new point. In this implementation the requirement is checked on the fly, so it is satisfied by design.

  • (Version 0.9.0) There is a subtle change in how the polynomial values are unpacked for Implicit + 1 methods when k = 1. Refer to _coeffs_implicit_plus() for details.

  • (Version 0.9.0) The code implementing the original MATLAB method of setting up the predictor polynomial is quarantined, as it appears to be already covered by other code. Refer to predict() for details.

Examples

As a simple first example we solve the Lotka-Volterra problem and plot the solution. This is a non-stiff problem originating from biology. We will use the Adams-Bashforth method, which does not require a Jacobian.

>>> from scipy.integrate import solve_ivp
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from modespy import MODES
>>> from modespy.std_problems import lotka_rhs
>>> sol = solve_ivp(fun=lotka_rhs, t_span=[0, 2], y0=[1, 1], method=MODES,
...                 modes_method='Adams-Bashforth')
>>> plt.figure()
>>> plt.plot(sol.t, sol.y[0,:], '-or', sol.t, sol.y[1,:], '-ob',
... markerfacecolor='none')
>>> plt.xlabel('$t$')
>>> plt.ylabel('$y_0$, $y_1$')
>>> plt.title('LOTKA-VOLTERRA')
>>> plt.grid()

This next example shows how the solution can be started with a known analytic solution, which can then also be used to compute real errors. For interest we will also plot the dynamic solver order used during the solution.

>>> from math import exp, sqrt, cos, sin
>>> w_n, z = 3.0, 0.15  # Underdamped.
>>> x_0, u_0 = 1, 0
>>> w_d = w_n * sqrt(1 - z ** 2)
>>> def pendulum_eqns(t, y):
...     x = y[1]  # Free pendulum split into two 1st order ODEs.
...     x_dot = -2 * z * w_n * y[1] - w_n ** 2 * y[0]
...     return x, x_dot
>>> def pendulum_solution(t):  # Analytic solution.
...     x = exp(-z * w_n * t) * (
...             x_0 * cos(w_d * t) + (u_0 + z * w_n * x_0) / w_d * sin(w_d *
...                                                                    t))
...     x_dot = (exp(-t * w_n * z) *
...              (cos(t * w_d) * (u_0 + w_n * x_0 * z) - w_d * x_0 *
...               sin(t * w_d)) - w_n * z * exp(-t * w_n * z) *
...               (sin(t * w_d) * (u_0 + w_n * x_0 * z) / w_d + x_0 *
...                cos(t * w_d)))
...     return x, x_dot
>>> modes_stats = {}
>>> sol = solve_ivp(fun=pendulum_eqns, t_span=[0, 10], y0=[x_0, u_0],
...                 method=MODES,
...                 modes_stats=modes_stats,  # Gather statistics.
...                 modes_p=(3, 7),  # Force higher order for analytic startup.
...                 modes_config={'analytic_fn': pendulum_solution,
...                               'analytic_start': True,  # Known startup pts.
...                               'verbose': True})
>>> x_analytic = [y[0] for y in modes_stats['y_analytic']]  # Extract just `x`.
>>> plt.figure()
>>> plt.plot(sol.t, sol.y[0, :], '-ob', markerfacecolor='none',
...          label='COMPUTED')
>>> plt.plot(modes_stats['t'], x_analytic, '--sr', markerfacecolor='none',
...          label='ANALYTIC')
>>> plt.legend(loc='upper right')
>>> plt.xlabel('$t$')
>>> plt.ylabel('$y$')
>>> plt.title('PENDULUM')
>>> plt.grid()
>>> plt.figure()
>>> plt.plot(modes_stats['t'], modes_stats['err_norm_analytic'], '-ob',
...          markerfacecolor='none')
>>> plt.ticklabel_format(style='sci', scilimits=(-3, 4), axis='y')
>>> plt.xlabel('$t$')
>>> plt.ylabel('$||y - y_{EXACT}||$')
>>> plt.title('ERROR 2-NORM')
>>> plt.grid()
>>> plt.figure()
>>> plt.plot(modes_stats['t'], modes_stats['p'], 'k.-')
>>> plt.xlabel('$t$')
>>> plt.ylabel('$p$')
>>> plt.yticks(np.arange(1, max(modes_stats['p']) + 1, 1))
>>> plt.title('SOLVER ORDER')
>>> plt.grid()

As a final, more complex example we solve the High Irradiance RESponse (HIRES) problem, which is stiff and originates from plant physiology. A number of the tunable parameters are adjusted to show how this can be done.

>>> from modespy.std_problems import hires_jac, hires_rhs
>>> modes_stats = {}
>>> sol = solve_ivp(fun=hires_rhs, 
...                 t_span=[0, 321.8122],
...                 y0=[1, 0, 0, 0, 0, 0, 0, 0.0057],
...                 t_eval=np.linspace(0.0, 321.8122, 200),  # Dense `t` vals.
...                 tol=1e-3,  # Nominated tolerance.
...                 method=MODES,
...                 jac=hires_jac,  # Jacobian function supplied.
...                 modes_method='dcBDF',
...                 modes_p=(2, 6),  # Specified order `p` range.
...                 modes_start_idx=2,  # Starting solver (p = 4 in this case).
...                 modes_filter=('H312b', 10),  # Non-default filter `b`.
...                 modes_stats=modes_stats,  # Gather stats on errors, `p`, etc.
...                 modes_config={'impl_solver': 'hybr',
...                               'impl_backup': None,
...                               'impl_its': 10,
...                               'verbose': True}
...                 )
MODES Solution -> p = [2, 3, 4, 5, 6], Method = dcBDF, Filter = ('H312b', 10)
... 18 RHS evaluations used during startup.
...
... 1207 RHS Evaluations: t = 2.4585, h = 0.010501, p = 4 (34 Rejected Steps)
... 1575 RHS Evaluations: t = 3.2594, h = 0.16758, p = 2 (34 Rejected Steps)
... 2561 RHS Evaluations: t = 117.98, h = 2.904, p = 2 (34 Rejected Steps)
>>> plt.figure()
>>> for y in sol.y:
...     plt.plot(sol.t, y, '.-')
>>> plt.xlabel('$t$')
>>> plt.ylabel('$y$')
>>> plt.title('HIRES')
>>> plt.margins(x=0)
>>> plt.grid()

Here we plot some statistics. Note that when plotting statistics we use the modes_stats['t'] values (and not sol.t). These values align with the statistics data and represent the steps actually computed, so they can be used to observe the variable solver steps. They may not align with the normal sol output data, which may have a different length.

>>> plt.figure()
>>> h = [t2 - t1 for t1, t2 in zip(modes_stats['t'], modes_stats['t'][1:])]
>>> plt.plot(modes_stats['t'][:-1], h, 'k.-')
>>> plt.xlabel('$t$')
>>> plt.ylabel('$h$')
>>> plt.title('STEP SIZE VARIATION')
>>> plt.yscale('log')
>>> plt.margins(x=0)
>>> plt.grid(which='both', axis='both')
>>> plt.figure()
>>> plt.plot(modes_stats['t'], modes_stats['err_norm'], 'k.-')
>>> plt.xlabel('$t$')
>>> plt.ylabel('$||E||$')
>>> plt.title('ERROR VARIATION')
>>> plt.yscale('log')
>>> plt.margins(x=0)
>>> plt.grid()
>>> plt.figure()
>>> plt.plot(modes_stats['t'], modes_stats['p'], 'k.-')
>>> plt.xlabel('$t$')
>>> plt.ylabel('$p$')
>>> plt.title('SOLVER ORDER')
>>> plt.yticks(np.arange(1, max(modes_stats['p']) + 1, 1))
>>> plt.margins(x=0)
>>> plt.grid()
>>> plt.figure()
>>> plt.plot(modes_stats['t'], modes_stats['rej_steps'], 'k.-')
>>> plt.xlabel('$t$')
>>> plt.ylabel('REJECTED STEPS')
>>> plt.title('ACCUMULATION OF REJECTED STEPS')
>>> plt.margins(x=0)
>>> plt.grid()
>>> plt.show()

Future

  • Addition of a timer call within _step_impl() for each order, together with a running average, would allow work factors to be computed automatically; a hypothetical sketch follows.
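
The following is a hypothetical sketch (not part of the current API) of how such timing could be layered on in a subclass. It assumes only the documented _step_impl() method and p property; the attribute name step_time_avg is illustrative.

>>> import time
>>> from collections import defaultdict
>>> from modespy import MODES
>>> class TimedMODES(MODES):
...     def __init__(self, *args, **kwargs):
...         super().__init__(*args, **kwargs)
...         self.step_time_avg = defaultdict(lambda: (0, 0.0))  # p -> (count, mean)
...     def _step_impl(self):
...         start = time.perf_counter()
...         result = super()._step_impl()
...         elapsed = time.perf_counter() - start
...         n, mean = self.step_time_avg[self.p]
...         # Incremental running mean of the wall-clock time per step at order p.
...         self.step_time_avg[self.p] = (n + 1, mean + (elapsed - mean) / (n + 1))
...         return result

TimedMODES would then be passed to solve_ivp() via method=TimedMODES in the usual way.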

class modespy.modes.AnalyticDenseOutput(t_old: float, t: float, fun: Callable[[float], float | Sequence[float] | Type[ndarray]])

Bases: DenseOutput

AnalyticDenseOutput provides a DenseOutput representation of the user-supplied analytic function. This is used when interpolating dense points in the very early startup phase of the solution where an analytic solution is available.

__init__(t_old: float, t: float, fun: Callable[[float], float | Sequence[float] | Type[ndarray]])
_call_impl(t: float)
class modespy.modes.MODES(fun: Callable[[float, float | Sequence[float] | Type[ndarray], Any | None], float | Sequence[float] | Type[ndarray]], t0: float, y0: float | Sequence[float] | Type[ndarray], t_bound: float, tol: float = 0.0001, jac: Callable[[float, float | Sequence[float] | Type[ndarray], Any | None], float | Sequence[float] | Type[ndarray]] | ndarray | None = None, vectorized=False, first_step: float | str = 'Deterministic', modes_p: Sequence[int] | None = None, modes_method: str | Sequence[str] = 'Adams-Moulton', modes_filter: str | Tuple[str, float] | None | Sequence[str | Tuple[str, float] | None] = 'PI3333', modes_start_idx: int | None = None, modes_config: Dict = None, modes_stats: dict = None, **extraneous)

Bases: OdeSolver

This class implements the generic multi-step, multi-order ODE solver. It is derived from SciPy OdeSolver class so that it can be called directly by scipy.integrate solve_ivp().

Implementation Details

  • The number of Jacobian LU-factorisations self.nlu is not tracked as is done for other ODE methods, because this depends on the choice of implicit solver, which is variable.

  • Dense output uses the same interpolating polynomial developed for each time step as the solution progresses (i.e. the order and timestep will vary). Therefore the precision of dense output is the same as the solver precision at each point in the solution (see the sketch after this list).

Note

Because solve_ivp() calls _step_impl() on the solver before _dense_output_impl() is ever polled, there is no specific dense output interpolation available for any of the intermediate points computed during startup. This is similar to other SciPy multistep methods, but could be adapted if desired.

  • Use of an analytic function for startup point generation is considered a research tool only. At present it appears to be unreliable where higher starting orders are used, i.e. \(p_{init} \geq 3\).
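
For reference, the following minimal sketch requests dense output through the standard solve_ivp() mechanism, reusing lotka_rhs from the examples above; the interpolation between MODES timesteps is provided by MODESDenseOutput:

>>> import numpy as np
>>> from scipy.integrate import solve_ivp
>>> from modespy import MODES
>>> from modespy.std_problems import lotka_rhs
>>> sol = solve_ivp(fun=lotka_rhs, t_span=[0, 2], y0=[1, 1], method=MODES,
...                 modes_method='Adams-Bashforth', dense_output=True)
>>> t_query = np.linspace(0.0, 2.0, 101)
>>> y_query = sol.sol(t_query)  # Shape (2, 101), interpolated between timesteps.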

__init__(fun: Callable[[float, float | Sequence[float] | Type[ndarray], Any | None], float | Sequence[float] | Type[ndarray]], t0: float, y0: float | Sequence[float] | Type[ndarray], t_bound: float, tol: float = 0.0001, jac: Callable[[float, float | Sequence[float] | Type[ndarray], Any | None], float | Sequence[float] | Type[ndarray]] | ndarray | None = None, vectorized=False, first_step: float | str = 'Deterministic', modes_p: Sequence[int] | None = None, modes_method: str | Sequence[str] = 'Adams-Moulton', modes_filter: str | Tuple[str, float] | None | Sequence[str | Tuple[str, float] | None] = 'PI3333', modes_start_idx: int | None = None, modes_config: Dict = None, modes_stats: dict = None, **extraneous)

Sets up the MODES solution and is called by solve_ivp().

Parameters:
  • fun (callable) – Right-hand side of the system supplied by solve_ivp(). The calling signature is fun(t, y) and it can be vectorized; refer to SciPy documentation for details.

  • t0 (float) – Initial time.

  • y0 (array_like, shape (n,)) – Initial state. The size of this array sets the problem dimension.

  • t_bound (float) – Boundary time - representing the end of the integration. The integration direction is determined from this value and t0.

  • tol (float, optional) – Desired solution tolerance (default = 1e-4). Implementation and scaling of errors is complex; refer to source code and MODES papers for more details. See also s_rel, s_abs and w_mat in modes_config.

  • jac (None, array_like or callable, optional) –

    Jacobian matrix of the right-hand side of the system with respect to y. A Jacobian is only needed if an Implicit or Implicit + 1 solver is part of the run. The Jacobian matrix has shape (n, n) and its element (i, j) is equal to \(df_i / dy_j\). There are three ways to define the Jacobian:

    • If array_like, the Jacobian is assumed to be constant.

    • If callable, the Jacobian is assumed to depend on both t and y; it will be called as jac(t, y) when required.

    • If None (default), the Jacobian will be approximated by finite differences when required.

  • vectorized (bool, optional) – Whether fun is implemented in a vectorized fashion. Default is False.

  • first_step (float, 'Deterministic' or 'Random', optional) –

    • If float, this sets the initial step size.

    • If ‘Deterministic’ (default) this sets the initial step size using a deterministic algorithm; refer to startup() for details.

    • If ‘Random’, this sets the initial step size using a randomised algorithm; refer to startup() for details.

  • modes_p (array_like, optional) –

    This specifies the orders p to be used by the solvers. The type of argument supplied depends on the value of modes_method:

    • If modes_method is a string, all solvers and filters will share a common type and orders will span a prescribed range. In this case modes_p must have two elements representing this order range (inclusive), e.g. modes_p=(2, 6). The default value equals p_default for the given method.

    • If modes_method is an array of strings, then each solver must have its own prescribed order. In this case modes_p must have the same number of elements as modes_method and modes_filter, e.g. modes_p=(4, 5, 7, 8); see the sketch after this parameter list.

  • modes_method (string or array_like[string], optional) –

    This specifies the kind of solver to use:

    • If modes_method is a string, all solvers and filters will have the same type (default = ‘Adams-Moulton’). See methods.py for available types.

    • If modes_method is an array of strings, each solver type must be specified individually.

  • modes_filter (FiltArg or array_like[FiltArg], optional) –

    This specifies the kind of filter to use:

    • If modes_method is a string, all solvers will use the same filter (default = PI3333). Supply either a single string representing a default filter, or a tuple of (‘filter name’, b) if a bespoke factor is required. See filters.py for available types.

    • If modes_method is an array of strings, a separate filter must be defined for each solver. Supply a string or tuple for each as described above.

  • modes_start_idx (int, optional) – Specify which solver to commence with by index (not by order) i.e. if modes_p=(2, 6) then solver index 0 sets p = 2, index 1 sets p = 3, and so on. The default is the first order (typically lowest).

  • modes_config (dict[string, Any], optional) –

    Minor options for fine-tuning and research are passed by including them in this optional dict (unknown entries are ignored). Available options are:

    ’analytic_fn’callable

    A function representing a known analytic solution to the problem, with a calling signature of y = analytic_fn(t) (default = None).

    ’analytic_start’bool

    If True, use the analytic function to generate startup points instead of solving for them using a separate one-step method (default = False).

    ’impl_its’int

    Number of iterations for the primary implicit solver to achieve convergence (default = 12).

    ’impl_solver’string

    Primary solver to use for converging implicit values (default = ‘newton_const’). This can be ‘newton_const’ or any other valid argument accepted by scipy.optimize.root(), e.g. ‘hybr’, ‘lm’, ‘broyden1’, etc.

    ’impl_backup’string

    Fallback solver to use if the primary implicit solver fails. This can be None, ‘newton_const’ or any other valid argument accepted by scipy.optimize.root() (as above).

    Note

    The backup solver always numerically estimates its own Jacobian. This is done in case the user-supplied Jacobian calculation itself is unstable / poorly conditioned.

    ’r_min’float

    Largest legal reduction in step size allowed (default = 0.8).

    ’r_max’float

    Largest legal enlargement in step size allowed (default = 1.2).

    ’s_rel’float

    Factor to apply for relative errors (default = 0).

    ’s_abs’float

    Factor to apply for absolute errors (default = 1).

    ’startup_method’string

    Type of solver to use when generating startup points (default = ‘RK45’). Currently supported values are the Runge-Kutta solvers supported by scipy.integrate.solve_ivp(): ‘RK23’, ‘RK45’ or ‘DOP853’.

    ’unit_errors’bool

    Apply unit scaling to errors (default = False).

    ’verbose’bool

    Print output to show run progress (default = False).

    ’w_mat’ndarray, shape (ndim, ndim)

    Scaling matrix for errors. The default is a unit matrix i.e. w_mat = np.eye(ndim, ndim).

  • modes_stats (dict[str, array_like], optional) –

    An empty dictionary may be supplied, which will be populated so that detailed statistics can be returned. It will contain these fields: ‘t’, ‘err_norm’, ‘h’, ‘p’, ‘rej_steps’. If an analytic function is supplied it will also include ‘y_analytic’ and ‘err_norm_analytic’.

    Note

    The number of points in these arrays may not equal the number of points returned by the main method, as they will also include startup points.
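
To illustrate the per-solver form of modes_method, modes_p and modes_filter described above, here is a hedged sketch reusing lotka_rhs and the imports from the previous sketch. The particular methods, orders and filter factors are illustrative only:

>>> sol = solve_ivp(fun=lotka_rhs, t_span=[0, 2], y0=[1, 1], method=MODES,
...                 modes_method=('Adams-Bashforth', 'Adams-Bashforth',
...                               'Adams-Bashforth', 'Adams-Bashforth'),
...                 modes_p=(2, 3, 4, 5),  # One order per solver.
...                 modes_filter=('PI3333', 'PI3333', ('H312b', 10), 'PI3333'))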

_coeffs_explicit(theta: ndarray, trim: int = None) ndarray | None

Compute the polynomial coefficients for an explicit k-step method of order k.

At least len(theta) + 1 past values from self.t, self.y and self.y_dot are required for a solution.

Parameters:
  • theta (ndarray, shape(k-1,)) – \(\theta\) values defining the method.

  • trim (int, optional) – If non-zero, this value is used as the index of the last historic value to use, e.g. trim = -1 causes the last value to be omitted.

Returns:

x – Returns the polynomial coefficients found by solving the linear system. If insufficient data was available to find a solution it returns None.

Return type:

ndarray or None

_coeffs_implicit(thetas: ndarray, coeffs_start: ndarray, t_new: float) ndarray | None

Compute the polynomial coefficients for an implicit k-step method of order k. Operation as per _coeffs_explicit(), except where noted.

Parameters:
  • coeffs_start (ndarray, shape(k+1, n)) – A matrix containing an estimated starting point for the implicit solution. Typically the coefficients of the previous polynomials are used.

  • t_new (float) – The new time value.

Returns:

x – Returns the polynomial coefficients found by solving the non-linear system. If insufficient data was available to find a solution it returns None.

Return type:

ndarray or None

_coeffs_implicit_plus(thetas: ndarray, coeffs_start: ndarray, t_new: float, trim: int = None) ndarray | None

Compute the polynomial coefficients for an implicit k-step method of order k + 1. Operation as per _coeffs_implicit(), except where noted.

Differences to MATLAB Implementation

  • There is a subtle change to flip() when unpacking unknowns.

Parameters:
  • coeffs_start (ndarray, shape(k+2, n)) – Similar to _coeffs_implicit().

  • trim (int) – As per _coeffs_explicit().

_dense_output_impl() MODESDenseOutput
_record_stats()
_solve_implicit_system(resid_fn, w, jac_fn)

Solves a given implicit system using the primary solver, falling back to the backup solver if the primary solver fails (where a fallback has been requested).

Parameters:
  • resid_fn (callable) – Function returning the RHS residual of the matrix system.

  • w (ndarray) – Solution array.

  • jac_fn (callable) – Jacobian function.

Returns:

x – Solution to the system.

Return type:

ndarray

Raises:

RuntimeError – If a primary (or fallback) solution could not be computed.

_step_impl()

Called automatically by solve_ivp(), this performs one step of the ODE solution by doing the following for each order / solver currently running:

  • Generates a predictor and corrector point where possible.

  • Computes the error and rejects / accepts the step.

  • Updates the currently running solvers (normally just different p values).

Note

For the first few startup steps, the results are drawn from pre-computed startup points. This gives the solver the appearance of a one-step method to solve_ivp() which is designed with this in mind.

auto_stepsize(random_perturb: bool = False, rms_norm: bool = False) float

An auto-scaling procedure for computing an initial stepsize for ODE IVP linear multistep solvers. This is called from __init__() if required during setup. The stepsize is based on the starting order, time direction and last y point, which must already be defined (typically these are just the single starting values).

Parameters:
  • random_perturb (bool, optional) – If False (default) the initial stepsize is computed using a deterministic algorithm. Otherwise, a small random perturbation is used.

  • rms_norm (bool, optional) – If True, take the sqrt() of the norm.

Returns:

h – Computed stepsize.

Return type:

float

correct(sol_idx: int) ndarray | None

Generates a corrector polynomial for the solver at index sol_idx and solution point (where possible) for the next timepoint.

property h

Returns the stepsize h associated with the current order.

property nsols

Returns the total number of separate solvers / orders used.

property p: int

Returns the current order p.

predict(sol_idx: int) ndarray | None

Generates a predictor polynomial for the solver at index sol_idx and prediction point (where possible), for the next timepoint.

Note

Original MATLAB implementation includes a tril() which has been omitted here. Refer to source code for more details.

set_running_solvers()

This is the method that performs order control. It is called at the end of __init__() and again after each successful step in _step_impl(). It performs two tasks:

  1. Chooses Current Order: Decides which solver (i.e. order / method) is the ‘best’ to proceed with for the next step. self.sol_idx is set to the index of the ‘best’ solver.

  2. Sets Running Orders: Decides which solvers to advance together for the next step. The flags in self.solvers[…].running are set either True or False.

This function can be overridden to use different advancement / order control strategies if desired; a minimal sketch is given after the notes below.

Note

This default implementation uses the same method for order control as the original MATLAB code, which is to advance the solver on each side of the current solver. The current solver is updated based on an internal state delta_p that is continuously updated by comparing the efficiencies of the solvers running on either side. When the order changes delta_p is reset to zero.

Differences to MATLAB Implementation

  • The original MATLAB code monitors orders above and below by running an inaccessible / fictitious order on each side; this is not done here.

  • If the current solver / order is the top or bottom, del_p_hi / _lo is set to 0 giving no preference to that direction. Also if we assume a fictitious higher sigma = 1 then +/- dp will be zero because one of the terms will be zero.

  • From the preprint TOMS paper, instead of Eqn 23 the technique of computing p +/- 1/2 has been replaced with the average of the two adjacent p values. This allows us to use adjacent p values that are not just +/- 1 steps.
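
A minimal sketch of overriding the order control strategy is shown below. It is hypothetical and uses only the documented self.sol_idx and self.solvers[...].running attributes; the strategy here simply pins whichever solver is currently selected:

>>> from modespy import MODES
>>> class FixedOrderMODES(MODES):
...     def set_running_solvers(self):
...         # Hypothetical strategy: disable order control and keep only the
...         # currently selected solver running.
...         for i, solver in enumerate(self.solvers):
...             solver.running = (i == self.sol_idx)

This subclass would then be passed to solve_ivp() via method=FixedOrderMODES in the usual way.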

startup()

Generate the required starting points for the multi-step method of the required order. This is done by adding new solution points until k points are present (acknowledging any stored points), using either an explicit Runge-Kutta style solver or a known analytic solution at intervals of h. See the sketch after the note below.

Note

This method is essentially a deferred section of __init__() that sets up the data history and starting points. It is separated for the specific case where multiple solvers may be set up that share a common t, y, y’ history.

By the time we reach this point, there must be at least one t, y, y’ value present in the history.
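
For example (a hedged sketch, again reusing lotka_rhs and the imports shown earlier), the one-step startup solver can be selected through the ‘startup_method’ entry of modes_config documented above:

>>> sol = solve_ivp(fun=lotka_rhs, t_span=[0, 2], y0=[1, 1], method=MODES,
...                 modes_method='Adams-Bashforth',
...                 modes_config={'startup_method': 'DOP853'})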

property t
vprint(msg)
property y
class modespy.modes.MODESDenseOutput(t_old: float, t: float, sol_type: SolverType, coeffs: ndarray)

Bases: DenseOutput

MODESDenseOutput provides interpolation for intermediate values between MODES timesteps. This is done by storing the corrector polynomial of the main running solver / order after each successful step, along with the solver type and the previous time value (t_old).

__init__(t_old: float, t: float, sol_type: SolverType, coeffs: ndarray)
_call_impl(t: float)
class modespy.modes.SolverInstance(p: int, method: Multistep, filter: ndarray, q_len: int, h: deque, r_ctrl: deque, poly_pred: ndarray | None, poly_corr: ndarray | None, running: bool)

Bases: object

Container class used to hold all runtime data for a single instance of a specific ODE solver type operating on a single order.

__init__(p: int, method: Multistep, filter: ndarray, q_len: int, h: deque, r_ctrl: deque, poly_pred: ndarray | None, poly_corr: ndarray | None, running: bool) None
filter: ndarray
h: deque
method: Multistep
p: int
poly_corr: ndarray | None
poly_pred: ndarray | None
q_len: int
r_ctrl: deque
running: bool
property theta