Introduction to Iterative Ensemble Smoother

The Iterative Ensemble Smoother (IES) is a recent addition to the PEST++ software suite. This mode of operation works significantly differently from the classic gradient-based Gauss-Levenberg-Marquardt (GLM) search method used in PEST and PEST++-GLM.

Most prominently, the goal of PEST++-IES is not to provide a single calibrated model, as classic PEST and PEST++-GLM do. Instead, the goal is to produce a multitude of model versions that each satisfy the calibration criterion with different parameter values. Such an ensemble is then considered a representative sample of possible parameter sets, and forecasts conducted with this ensemble can be considered a representative sample of possible forecasts.

This means that both the identified parameter values and the corresponding forecasts are inherently associated with a predictive uncertainty range, allowing more robust decision making because the model uncertainty can be quantified.
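The uncertainty range mentioned above falls directly out of the ensemble: once every member produces a forecast, the spread of those values is the predictive uncertainty. A minimal sketch (the forecast values, ensemble size, and interval choice below are illustrative assumptions, not PEST++ output):

```python
import numpy as np

# Hypothetical forecast values from a 100-member calibrated ensemble,
# e.g. predicted groundwater heads in metres (synthetic stand-in data).
rng = np.random.default_rng(0)
forecasts = rng.normal(loc=12.0, scale=1.5, size=100)

# The ensemble spread directly yields an uncertainty range for the forecast;
# here we report an empirical 90% credible interval.
lo, hi = np.percentile(forecasts, [5, 95])
print(f"90% credible interval: {lo:.2f} .. {hi:.2f} m")
```

The same computation applies to any model output of interest, since each ensemble member carries its own complete forecast.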

PEST++-IES creates the model ensemble in the following steps:

  • The ensemble is initialized by randomizing the parameter values of the model within the prior probability distribution. The ensemble size and the prior probability distribution (i.e., the plausible range of parameter values) are specified by the user. The ensemble also contains a so-called base realization, whose parameters equal the prior values with no random deviation.

  • The observation values are also randomized within the assumed noise associated with the measurement. This feature can be deactivated by the user. No noise is added to the base realization.

  • The ensemble is then calibrated using a modified GLM algorithm. The objective of the calibration is to reduce the measurement objective function until the calibration target (e.g., a target root mean square error) is met, under the condition that the parameter values deviate as little as possible from the initial parameter values of each ensemble member.

  • If calibration is successful, the ensemble can be deployed for predictive forecasts under consideration of uncertainties.
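The steps above can be sketched with a toy linear model. Everything here is an illustrative assumption: the forward function, ensemble size, and noise level are invented, and the update shown is a simplified ensemble-smoother step without the Marquardt-style damping and localization that PEST++-IES applies in practice.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy forward model standing in for an arbitrary simulator:
# two parameters map linearly to two observations.
def forward(m):
    return np.array([m[0] + m[1], 2.0 * m[0] - m[1]])

n_ens = 50
prior_mean = np.array([1.0, 1.0])
prior_sd = np.array([0.5, 0.5])
d_obs = np.array([3.0, 1.0])
noise_sd = 0.1

# Step 1: initialize the ensemble by sampling the prior;
# member 0 is the unperturbed "base" realization.
ensemble = prior_mean + prior_sd * rng.standard_normal((n_ens, 2))
ensemble[0] = prior_mean

# Step 2: perturb the observations within the assumed measurement
# noise; the base realization receives no noise.
obs_ens = d_obs + noise_sd * rng.standard_normal((n_ens, 2))
obs_ens[0] = d_obs

# Step 3: iterative smoother update (simplified, no damping).
for _ in range(3):
    sims = np.array([forward(m) for m in ensemble])
    dM = ensemble - ensemble.mean(axis=0)
    dD = sims - sims.mean(axis=0)
    C_md = dM.T @ dD / (n_ens - 1)   # parameter/observation cross-covariance
    C_dd = dD.T @ dD / (n_ens - 1)   # observation covariance
    K = C_md @ np.linalg.inv(C_dd + noise_sd**2 * np.eye(2))
    ensemble = ensemble + (obs_ens - sims) @ K.T

# Step 4: the updated ensemble now fits the data to within the noise
# and is ready for uncertainty-aware forecasting.
rms = np.sqrt(np.mean((np.array([forward(m) for m in ensemble]) - d_obs) ** 2))
print(f"final RMS vs. observed data: {rms:.3f}")
```

Note how no derivatives of `forward` are ever computed: the cross-covariance `C_md` plays the role that the Jacobian plays in a classical GLM update.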

Besides considering parameter uncertainty in a Bayesian sense, the method has a second major advantage:

Given a sufficiently large ensemble, the ensemble members will cover a wide range of the parameter space during the initial run and after each parameter upgrade step. From the results of these simulations, PEST++-IES can calculate cross-correlations between parameter and observation values. These act as an approximation of the parameter sensitivities (the Jacobian matrix), without the need for numerical differentiation, which causes the majority of the computational cost in a classical GLM optimization.
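To make this concrete, the empirical Jacobian can be obtained by regressing observation deviations on parameter deviations across the ensemble, using only model runs that are needed anyway. A sketch with an assumed linear toy model (for a linear model this regression recovers the true Jacobian exactly; for a real nonlinear model it is a local approximation):

```python
import numpy as np

rng = np.random.default_rng(1)

# True Jacobian of a linear toy model -- unknown to the method itself.
G_true = np.array([[1.0, 1.0],
                   [2.0, -1.0]])
forward = lambda m: G_true @ m

# One model run per ensemble member; no extra finite-difference runs.
n_ens = 100
params = rng.standard_normal((n_ens, 2))
sims = np.array([forward(m) for m in params])

# Regress observation deviations on parameter deviations
# to obtain an empirical Jacobian approximation.
dM = params - params.mean(axis=0)
dD = sims - sims.mean(axis=0)
G_est, *_ = np.linalg.lstsq(dM, dD, rcond=None)
G_est = G_est.T  # rows: observations, columns: parameters
```

In contrast, filling the same Jacobian by finite differences would require one additional model run per adjustable parameter, at every iteration.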

As a consequence, even though multiple models are calibrated, an IES-based optimization commonly requires significantly fewer model runs (often by a factor of tens or hundreds) and therefore much less computation time compared to a traditional GLM-based iteration.
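A back-of-the-envelope run count illustrates where the savings come from. All numbers below are assumptions chosen for illustration, not benchmarks:

```python
# Illustrative run-count comparison between GLM and IES.
n_par = 1000   # adjustable parameters, e.g. many pilot points
n_ens = 100    # ensemble size
n_iter = 5     # upgrade iterations, assumed equal for both methods

# GLM: each iteration needs one run per parameter to fill the Jacobian
# by finite differences, plus a handful of lambda-testing runs (here: 10).
glm_runs = n_iter * (n_par + 10)

# IES: each iteration needs only one run per ensemble member.
ies_runs = n_iter * n_ens

print(glm_runs, ies_runs)  # 5050 vs 500: roughly a factor of ten here
```

With highly parameterized models (thousands of parameters) and modest ensemble sizes, the ratio grows accordingly.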

A second benefit of approximating the Jacobian from the ensemble is that the calibration process becomes far more robust against model instabilities than with numerical differentiation. The noise created by an unstable model is smoothed out (hence the name of the method) and therefore affects the calibration to a much lesser degree.

Table of Contents